Success Metrics
There are two formatting options available. The traditional desired outcome statement is the structure used in the Outcome-Driven Innovation (ODI) methodology. Because many stakeholders, especially those on marketing or UX teams, push back on the awkward phrasing of desired outcome statements (people don't talk like that), the alternative is a natural-language structure that gets to the heart of the outcome while avoiding tasks and activities where feasible.
This catalog contains 20 potential metrics in each formatting option. You will likely need to reduce this set for a survey; the number of statements generated is arbitrary and can be expanded to accommodate your needs.
Desired Outcome Statements (ODI)
- Minimize the time it takes to gather user feedback on the solution, e.g., satisfaction surveys, direct interviews, etc.
- Minimize the time it takes to analyze user feedback for actionable insights, e.g., satisfaction levels, improvement suggestions, etc.
- Minimize the time it takes to identify common themes in user feedback, e.g., usability issues, performance problems, etc.
- Minimize the time it takes to prioritize actions based on user feedback, e.g., critical bugs, desired features, etc.
- Minimize the time it takes to communicate planned improvements to users, e.g., update schedules, expected benefits, etc.
- Minimize the time it takes to implement changes based on user feedback, e.g., software updates, hardware adjustments, etc.
- Minimize the time it takes to verify the impact of changes on user satisfaction, e.g., follow-up surveys, repeat interviews, etc.
- Minimize the time it takes to adjust solutions based on ongoing user feedback, e.g., iterative design changes, feature enhancements, etc.
- Minimize the time it takes to document user feedback and responses, e.g., support tickets, change logs, etc.
- Minimize the time it takes to train users on new features or changes, e.g., instructional videos, user manuals, etc.
- Minimize the time it takes to ensure users are aware of available support resources, e.g., helpdesk contacts, FAQ pages, etc.
- Minimize the time it takes to establish a continuous feedback loop with users, e.g., online forums, feedback widgets, etc.
- Minimize the time it takes to assess user satisfaction across different user segments, e.g., by role, by usage frequency, etc.
- Minimize the time it takes to benchmark user satisfaction against industry standards, e.g., NPS scores, customer satisfaction indices, etc.
- Minimize the time it takes to identify barriers to user satisfaction, e.g., complex interfaces, lack of training, etc.
- Minimize the time it takes to develop strategies to improve user satisfaction, e.g., user experience improvements, customer service enhancements, etc.
- Minimize the time it takes to measure the effectiveness of satisfaction improvement initiatives, e.g., before-and-after studies, control groups, etc.
- Minimize the time it takes to solicit user testimonials or case studies, e.g., written endorsements, video interviews, etc.
- Minimize the likelihood of user feedback being overlooked or ignored, e.g., due to volume, prioritization issues, etc.
- Minimize the likelihood of misinterpreting user feedback, e.g., cultural nuances, ambiguous responses, etc.
Customer Success Statements (PJTBD)
- Gather user feedback on the solution, e.g., satisfaction surveys, direct interviews, etc.
- Analyze user feedback for actionable insights, e.g., satisfaction levels, improvement suggestions, etc.
- Identify common themes in user feedback, e.g., usability issues, performance problems, etc.
- Prioritize actions based on user feedback, e.g., critical bugs, desired features, etc.
- Communicate planned improvements to users, e.g., update schedules, expected benefits, etc.
- Implement changes based on user feedback, e.g., software updates, hardware adjustments, etc.
- Verify the impact of changes on user satisfaction, e.g., follow-up surveys, repeat interviews, etc.
- Adjust solutions based on ongoing user feedback, e.g., iterative design changes, feature enhancements, etc.
- Document user feedback and responses, e.g., support tickets, change logs, etc.
- Train users on new features or changes, e.g., instructional videos, user manuals, etc.
- Ensure users are aware of available support resources, e.g., helpdesk contacts, FAQ pages, etc.
- Establish a continuous feedback loop with users, e.g., online forums, feedback widgets, etc.
- Assess user satisfaction across different user segments, e.g., by role, by usage frequency, etc.
- Benchmark user satisfaction against industry standards, e.g., NPS scores, customer satisfaction indices, etc.
- Identify barriers to user satisfaction, e.g., complex interfaces, lack of training, etc.
- Develop strategies to improve user satisfaction, e.g., user experience improvements, customer service enhancements, etc.
- Measure the effectiveness of satisfaction improvement initiatives, e.g., before-and-after studies, control groups, etc.
- Solicit user testimonials or case studies, e.g., written endorsements, video interviews, etc.
- Avoid user feedback being overlooked or ignored, e.g., due to volume, prioritization issues, etc.
- Avoid misinterpreting user feedback, e.g., cultural nuances, ambiguous responses, etc.
Test Fit Structure
Apply this structure to Customer Success Statements only; everything should fit together naturally. Here’s an article where I introduced the concept. Feel free to devise your own version for Desired Outcome Statements, as this structure does not apply directly to their format.
As a(n) [end user] + who is + [Job] + you're trying to + [success statement] + "faster and more accurately" + so that you can successfully + [Job Step]
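As an illustration, plugging the first Customer Success Statement from the catalog into this structure might read as follows (the end user, Job, and Job Step shown here are hypothetical placeholders, not part of the catalog):

> As a product manager who is improving an existing offering, you're trying to gather user feedback on the solution "faster and more accurately" so that you can successfully evaluate the solution's performance.

If the sentence reads awkwardly or the statement drifts into describing a task rather than an outcome, that is a signal to revise the statement before including it in a survey.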