Success Metrics
There are two formatting options available. The traditional desired outcome statement is a structure used in the Outcome-Driven Innovation (ODI) methodology. Because many stakeholders - especially those on marketing or UX teams - push back on the awkward phrasing of desired outcome statements (people don't talk like that), the alternative is a natural-language structure that gets to the heart of the outcome and avoids tasks and activities where feasible.
This catalog contains 20 potential metrics in each format. You will likely need to reduce this set for a survey; the number of statements generated here is arbitrary and can be expanded to accommodate your needs.
Desired Outcome Statements (ODI)
- Minimize the time it takes to identify key performance indicators for each solution, e.g., speed, reliability, etc.
- Minimize the time it takes to determine the compatibility of each solution with existing systems, e.g., software, hardware, etc.
- Minimize the time it takes to understand the scalability of each solution, e.g., user capacity, data volume, etc.
- Minimize the time it takes to evaluate the security features of each solution, e.g., encryption, user authentication, etc.
- Minimize the time it takes to assess the cost-effectiveness of each solution, e.g., initial investment, maintenance costs, etc.
- Minimize the time it takes to determine the ease of integration of each solution, e.g., APIs, compatibility with existing systems, etc.
- Minimize the time it takes to evaluate the user-friendliness of each solution, e.g., user interface, ease of use, etc.
- Minimize the time it takes to assess the reliability of each solution, e.g., uptime, error rates, etc.
- Minimize the time it takes to understand the support and maintenance requirements of each solution, e.g., technical support, updates, etc.
- Minimize the likelihood of overlooking a critical evaluation criterion, e.g., data privacy, compliance, etc.
- Minimize the time it takes to compare the performance of each solution against the set criteria, e.g., benchmarking, side-by-side comparison, etc.
- Minimize the time it takes to gather feedback from potential users of each solution, e.g., surveys, user testing, etc.
- Minimize the time it takes to understand the long-term viability of each solution, e.g., vendor stability, upgrade path, etc.
- Minimize the time it takes to evaluate the training requirements for each solution, e.g., user training, technical training, etc.
- Minimize the likelihood of selecting a solution that fails to meet business objectives, e.g., cost savings, efficiency, etc.
- Minimize the time it takes to assess the customization possibilities of each solution, e.g., add-ons, plugins, etc.
- Minimize the time it takes to understand the implementation timeline for each solution, e.g., setup time, migration time, etc.
- Minimize the likelihood of overlooking potential conflicts between solutions, e.g., software conflicts, resource allocation, etc.
- Minimize the time it takes to evaluate the return on investment for each solution, e.g., cost savings, increased productivity, etc.
- Minimize the likelihood of choosing a solution that fails to meet user needs, e.g., functionality, usability, etc.
Customer Success Statements (PJTBD)
- Identify key performance indicators for each solution, e.g., speed, reliability, etc.
- Determine the compatibility of each solution with existing systems, e.g., software, hardware, etc.
- Understand the scalability of each solution, e.g., user capacity, data volume, etc.
- Evaluate the security features of each solution, e.g., encryption, user authentication, etc.
- Assess the cost-effectiveness of each solution, e.g., initial investment, maintenance costs, etc.
- Determine the ease of integration of each solution, e.g., APIs, compatibility with existing systems, etc.
- Evaluate the user-friendliness of each solution, e.g., user interface, ease of use, etc.
- Assess the reliability of each solution, e.g., uptime, error rates, etc.
- Understand the support and maintenance requirements of each solution, e.g., technical support, updates, etc.
- Avoid overlooking a critical evaluation criterion, e.g., data privacy, compliance, etc.
- Compare the performance of each solution against the set criteria, e.g., benchmarking, side-by-side comparison, etc.
- Gather feedback from potential users of each solution, e.g., surveys, user testing, etc.
- Understand the long-term viability of each solution, e.g., vendor stability, upgrade path, etc.
- Evaluate the training requirements for each solution, e.g., user training, technical training, etc.
- Avoid selecting a solution that fails to meet business objectives, e.g., cost savings, efficiency, etc.
- Assess the customization possibilities of each solution, e.g., add-ons, plugins, etc.
- Understand the implementation timeline for each solution, e.g., setup time, migration time, etc.
- Avoid overlooking potential conflicts between solutions, e.g., software conflicts, resource allocation, etc.
- Evaluate the return on investment for each solution, e.g., cost savings, increased productivity, etc.
- Avoid choosing a solution that fails to meet user needs, e.g., functionality, usability, etc.
Test Fit Structure
Apply this structure to Customer Success Statements only; the pieces should fit together naturally when read aloud. Here’s an article where I introduced the concept. Feel free to devise your own version for Desired Outcome Statements, as this template does not apply directly to their format.
As a(n) [end user] + who is + [Job] + you're trying to + [success statement] + "faster and more accurately" + so that you can successfully + [Job Step]
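To make the composition concrete, here is a minimal sketch in Python of how the test-fit pieces assemble into a sentence. The end user, Job, and Job Step values below are hypothetical placeholders (only the success statement is taken from the catalog above); substitute your own research inputs.

```python
# Minimal sketch: compose a test-fit sentence from its parts.
# The example values passed in below are hypothetical, not prescribed.

def test_fit(end_user: str, job: str, success_statement: str, job_step: str) -> str:
    """Assemble a test-fit sentence for a Customer Success Statement."""
    return (
        f"As a(n) {end_user} who is {job}, "
        f"you're trying to {success_statement} faster and more accurately "
        f"so that you can successfully {job_step}."
    )

print(test_fit(
    end_user="IT manager",                      # hypothetical end user
    job="selecting new software for the team",  # hypothetical Job
    success_statement=(                         # taken from the catalog above
        "determine the compatibility of each solution with existing systems"
    ),
    job_step="evaluate the options",            # hypothetical Job Step
))
```

If the assembled sentence reads awkwardly for a given statement, that is usually a sign the statement contains a task or activity rather than an outcome, and is worth rewording before it goes into a survey.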