Success Metrics
There are two formatting options available. The traditional desired outcome statement is the structure used in the Outcome-Driven Innovation (ODI) methodology. Because many stakeholders, especially on marketing or UX teams, push back on the awkward phrasing of desired outcome statements (people simply don’t talk that way), the alternative is a natural-language structure that gets to the heart of the outcome while avoiding tasks and activities where feasible.
This catalog contains 20 potential metrics in each formatting option. You will likely need to reduce this set for a survey; the number of statements generated here is arbitrary and can be expanded or trimmed to suit your needs.
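If you keep these statements in code (or export them from a spreadsheet), one quick way to cut the catalog down to survey length is to sample from it. Below is a minimal Python sketch; the `statements` list shows only a few items from the catalog, and the seed and sample size are placeholder choices, not recommendations.

```python
import random

# A few items from the catalog below; in practice, load all 20 from
# wherever you maintain them (spreadsheet, CSV, database, etc.).
statements = [
    "Identify any performance issues",
    "Verify the product is functioning as expected",
    "Avoid missing critical performance issues",
    "Compare the product's performance with the advertised specifications",
    "Avoid overlooking performance degradation over time",
]

# Draw a fixed-size random subset for the survey. Seeding the RNG
# makes the draw reproducible, so reviewers see the same shortlist.
random.seed(7)
survey_set = random.sample(statements, k=3)

for statement in survey_set:
    print(statement)
```

In practice you may prefer to prioritize by stakeholder input rather than sample at random; the point is simply to keep the reduction step explicit and repeatable.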
Desired Outcome Statements (ODI)
- Minimize the time it takes to identify any performance issues, e.g., slow response, frequent crashes, etc.
- Minimize the time it takes to verify the product is functioning as expected, e.g., all features are working, no unexpected behaviors, etc.
- Minimize the likelihood of missing critical performance issues, e.g., intermittent problems, hidden defects, etc.
- Minimize the time it takes to compare the product's performance with the advertised specifications, e.g., speed, capacity, etc.
- Minimize the likelihood of overlooking performance degradation over time, e.g., slower speed, reduced efficiency, etc.
- Minimize the time it takes to determine if the product meets your needs and expectations, e.g., functionality, reliability, etc.
- Minimize the likelihood of experiencing unexpected product failures, e.g., crashes, data loss, etc.
- Minimize the time it takes to assess the product's impact on your workflow or processes, e.g., increased productivity, reduced errors, etc.
- Minimize the likelihood of the product's performance being affected by external factors, e.g., network issues, power fluctuations, etc.
- Minimize the time it takes to identify any necessary adjustments or settings for optimal performance, e.g., configuration changes, updates, etc.
- Minimize the likelihood of the product causing disruptions or inefficiencies in your operations, e.g., downtime, rework, etc.
- Minimize the time it takes to evaluate the product's performance under different conditions, e.g., high load, low power, etc.
- Minimize the likelihood of the product not meeting the required standards or regulations, e.g., safety, environmental, etc.
- Minimize the time it takes to understand the product's performance metrics, e.g., usage statistics, error rates, etc.
- Minimize the likelihood of the product's performance not aligning with your goals or objectives, e.g., cost savings, quality improvement, etc.
- Minimize the time it takes to determine the product's impact on user satisfaction, e.g., ease of use, functionality, etc.
- Minimize the likelihood of the product's performance causing user dissatisfaction or complaints, e.g., difficult to use, slow, etc.
- Minimize the time it takes to identify opportunities for product improvement or enhancement, e.g., new features, upgrades, etc.
- Minimize the likelihood of the product's performance leading to additional costs or resources, e.g., maintenance, repairs, etc.
- Minimize the time it takes to assess the product's value for money based on its performance, e.g., cost vs benefits, return on investment, etc.
Customer Success Statements (PJTBD)
- Identify any performance issues, e.g., slow response, frequent crashes, etc.
- Verify the product is functioning as expected, e.g., all features are working, no unexpected behaviors, etc.
- Avoid missing critical performance issues, e.g., intermittent problems, hidden defects, etc.
- Compare the product's performance with the advertised specifications, e.g., speed, capacity, etc.
- Avoid overlooking performance degradation over time, e.g., slower speed, reduced efficiency, etc.
- Determine if the product meets your needs and expectations, e.g., functionality, reliability, etc.
- Avoid experiencing unexpected product failures, e.g., crashes, data loss, etc.
- Assess the product's impact on your workflow or processes, e.g., increased productivity, reduced errors, etc.
- Avoid the product's performance being affected by external factors, e.g., network issues, power fluctuations, etc.
- Identify any necessary adjustments or settings for optimal performance, e.g., configuration changes, updates, etc.
- Avoid the product causing disruptions or inefficiencies in your operations, e.g., downtime, rework, etc.
- Evaluate the product's performance under different conditions, e.g., high load, low power, etc.
- Avoid the product not meeting the required standards or regulations, e.g., safety, environmental, etc.
- Understand the product's performance metrics, e.g., usage statistics, error rates, etc.
- Avoid the product's performance not aligning with your goals or objectives, e.g., cost savings, quality improvement, etc.
- Determine the product's impact on user satisfaction, e.g., ease of use, functionality, etc.
- Avoid the product's performance causing user dissatisfaction or complaints, e.g., difficult to use, slow, etc.
- Identify opportunities for product improvement or enhancement, e.g., new features, upgrades, etc.
- Avoid the product's performance leading to additional costs or resources, e.g., maintenance, repairs, etc.
- Assess the product's value for money based on its performance, e.g., cost vs benefits, return on investment, etc.
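Notice that the two lists are parallel: each desired outcome statement is a customer success statement prefixed with "Minimize the time it takes to", while the "Avoid …" items become "Minimize the likelihood of …". If you maintain only one list, a sketch like the following can derive the other. This is a convenience helper of my own, not part of either methodology, and lowercasing the leading verb is an assumption about how you want casing handled.

```python
def to_desired_outcome(success_statement: str) -> str:
    """Derive an ODI-style desired outcome statement from a
    PJTBD-style customer success statement."""
    # "Avoid ..." items map to likelihood-based outcomes.
    if success_statement.startswith("Avoid "):
        rest = success_statement[len("Avoid "):]
        return f"Minimize the likelihood of {rest}"
    # Everything else maps to time-based outcomes; lowercase the
    # leading verb so it reads as a clause.
    rest = success_statement[0].lower() + success_statement[1:]
    return f"Minimize the time it takes to {rest}"

print(to_desired_outcome("Identify any performance issues"))
# Minimize the time it takes to identify any performance issues
print(to_desired_outcome("Avoid missing critical performance issues"))
# Minimize the likelihood of missing critical performance issues
```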
Test Fit Structure
Apply this to Customer Success Statements only; everything should fit together naturally. (I introduced the concept in a separate article.) Feel free to devise your own version for Desired Outcome Statements, as this test does not apply directly to their format.
As a(n) [end user] who is [Job], you're trying to [success statement] "faster and more accurately" so that you can successfully [Job Step].
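If you want to run the fit test over the whole list at once, it is easy to mechanize. The sketch below assembles the test sentence from its parts; the example end user, job, and job step are hypothetical placeholders for illustration, not taken from the catalog.

```python
def test_fit(end_user: str, job: str, success_statement: str, job_step: str) -> str:
    """Assemble the test-fit sentence for a customer success statement.
    If the result reads naturally, the statement passes the test."""
    # Lowercase the statement's leading verb so it reads as a clause.
    clause = success_statement[0].lower() + success_statement[1:]
    return (
        f"As a(n) {end_user} who is {job}, "
        f"you're trying to {clause} faster and more accurately "
        f"so that you can successfully {job_step}."
    )

# Hypothetical values for illustration only.
print(test_fit(
    end_user="operations manager",
    job="evaluating a new product",
    success_statement="Identify any performance issues",
    job_step="decide whether to roll it out",
))
```

If the assembled sentence sounds forced, the statement probably still describes a task or activity rather than an outcome and is worth rewording.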