Success Metrics
There are two formatting options available. The traditional desired outcome statement is the structure used in the Outcome-Driven Innovation (ODI) methodology. Because many stakeholders, particularly on marketing or UX teams, push back on desired outcome statements as awkward (people don't talk like that), the alternative is a natural-language structure that gets to the heart of the outcome while avoiding tasks and activities where feasible.
This catalog contains 20 potential metrics in each formatting option. You will likely need to reduce this set for a survey; the number of statements generated is arbitrary and can be expanded to accommodate your needs.
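If you keep the catalog as data, trimming it for a survey draft can be scripted. A minimal sketch in Python, assuming random sampling is an acceptable first cut (in practice you would curate by judgment); the statement list shown is abbreviated and the sample size is illustrative:

```python
import random

# Abbreviated catalog: each entry is one success statement from the lists below.
catalog = [
    "Verify the product's functionality",
    "Confirm the product's compatibility with existing systems",
    "Check the product's adherence to specifications",
    # ... remaining statements from the catalog ...
]

def draw_survey_subset(statements, k=10, seed=42):
    """Randomly sample k statements for a survey draft; a fixed seed keeps drafts reproducible."""
    rng = random.Random(seed)
    return rng.sample(statements, k=min(k, len(statements)))

for statement in draw_survey_subset(catalog, k=2):
    print(statement)
```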
Desired Outcome Statements (ODI)
- Minimize the time it takes to verify the product's functionality, e.g., performance tests, user interface tests, etc.
- Minimize the time it takes to confirm the product's compatibility with existing systems, e.g., software compatibility, hardware compatibility, etc.
- Minimize the time it takes to check the product's adherence to specifications, e.g., size, weight, color, etc.
- Minimize the time it takes to validate the product's safety features, e.g., safety locks, emergency stop, etc.
- Minimize the time it takes to assess the product's performance under different conditions, e.g., high load, low power, etc.
- Minimize the time it takes to evaluate the product's ease of use, e.g., user-friendly interface, clear instructions, etc.
- Minimize the time it takes to determine the product's durability, e.g., stress tests, wear and tear, etc.
- Minimize the time it takes to confirm the product's efficiency, e.g., energy consumption, processing speed, etc.
- Minimize the likelihood of product malfunctions during testing, e.g., software crashes, hardware failures, etc.
- Minimize the likelihood of missing critical product flaws during testing, e.g., overlooked bugs, unnoticed performance issues, etc.
- Minimize the time it takes to document the product testing results, e.g., test reports, issue tracking, etc.
- Minimize the time it takes to communicate the product testing results to relevant parties, e.g., development team, management, etc.
- Minimize the likelihood of misinterpreting the product testing results, e.g., false positives, false negatives, etc.
- Minimize the time it takes to plan for retesting after product modifications, e.g., scheduling, resource allocation, etc.
- Minimize the time it takes to verify the effectiveness of product modifications, e.g., retesting, comparison with previous results, etc.
- Minimize the likelihood of product testing causing damage to the product, e.g., physical damage, data corruption, etc.
- Minimize the time it takes to confirm the product's compliance with industry standards, e.g., ISO, CE, etc.
- Minimize the time it takes to evaluate the product's performance against competitors, e.g., benchmarking, feature comparison, etc.
- Minimize the likelihood of product testing causing delays in product release, e.g., extended testing periods, retesting, etc.
- Minimize the time it takes to determine the product's potential for future upgrades or enhancements, e.g., scalability, compatibility with future technologies, etc.
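Each statement above follows the same template: a direction of improvement ("Minimize"), a performance metric ("the time it takes to" or "the likelihood of"), an object of control, and an example-based clarifier. A sketch of that assembly; the parameter names are my own labels for the parts, not official ODI terminology:

```python
def odi_statement(direction, metric, object_of_control, examples):
    """Assemble a desired outcome statement from its building blocks."""
    return f"{direction} {metric} {object_of_control}, e.g., {', '.join(examples)}, etc."

print(odi_statement(
    direction="Minimize",
    metric="the time it takes to",
    object_of_control="verify the product's functionality",
    examples=["performance tests", "user interface tests"],
))
# -> Minimize the time it takes to verify the product's functionality,
#    e.g., performance tests, user interface tests, etc.
```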
Customer Success Statements (PJTBD)
- Verify the product's functionality, e.g., performance tests, user interface tests, etc.
- Confirm the product's compatibility with existing systems, e.g., software compatibility, hardware compatibility, etc.
- Check the product's adherence to specifications, e.g., size, weight, color, etc.
- Validate the product's safety features, e.g., safety locks, emergency stop, etc.
- Assess the product's performance under different conditions, e.g., high load, low power, etc.
- Evaluate the product's ease of use, e.g., user-friendly interface, clear instructions, etc.
- Determine the product's durability, e.g., stress tests, wear and tear, etc.
- Confirm the product's efficiency, e.g., energy consumption, processing speed, etc.
- Avoid product malfunctions during testing, e.g., software crashes, hardware failures, etc.
- Avoid missing critical product flaws during testing, e.g., overlooked bugs, unnoticed performance issues, etc.
- Document the product testing results, e.g., test reports, issue tracking, etc.
- Communicate the product testing results to relevant parties, e.g., development team, management, etc.
- Avoid misinterpreting the product testing results, e.g., false positives, false negatives, etc.
- Plan for retesting after product modifications, e.g., scheduling, resource allocation, etc.
- Verify the effectiveness of product modifications, e.g., retesting, comparison with previous results, etc.
- Avoid product testing causing damage to the product, e.g., physical damage, data corruption, etc.
- Confirm the product's compliance with industry standards, e.g., ISO, CE, etc.
- Evaluate the product's performance against competitors, e.g., benchmarking, feature comparison, etc.
- Avoid product testing causing delays in product release, e.g., extended testing periods, retesting, etc.
- Determine the product's potential for future upgrades or enhancements, e.g., scalability, compatibility with future technologies, etc.
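Note that the two catalogs line up one-for-one: each customer success statement is the ODI statement with its direction-and-metric prefix removed ("Minimize the time it takes to ..." becomes the bare verb phrase, and "Minimize the likelihood of ..." becomes "Avoid ..."). A quick sketch of that mapping, with prefix rules inferred from the lists above rather than taken from either methodology:

```python
def to_success_statement(odi_statement):
    """Convert a desired outcome statement (ODI) into a natural-language
    customer success statement, using the prefix rules inferred above."""
    time_prefix = "Minimize the time it takes to "
    likelihood_prefix = "Minimize the likelihood of "
    if odi_statement.startswith(time_prefix):
        rest = odi_statement[len(time_prefix):]
        return rest[0].upper() + rest[1:]  # promote the verb to sentence case
    if odi_statement.startswith(likelihood_prefix):
        return "Avoid " + odi_statement[len(likelihood_prefix):]
    return odi_statement  # no known prefix: pass through unchanged

print(to_success_statement(
    "Minimize the time it takes to verify the product's functionality"))
# -> Verify the product's functionality
```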
Test Fit Structure
Apply this structure to Customer Success Statements only; everything should fit together nicely. Here's an article where I introduced the concept. Feel free to devise your own version for Desired Outcome Statements, as this structure does not apply directly to their format.
As a(n) [end user] + who is + [Job] + you're trying to + [success statement] + "faster and more accurately" + so that you can successfully + [Job Step]
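To test-fit statements in bulk, a small helper can assemble the sentence from the template's building blocks. A minimal sketch; the helper name and the sample slot values are hypothetical, not drawn from the methodology:

```python
def test_fit(end_user, job, success_statement, job_step):
    """Assemble the test-fit sentence from the template's building blocks."""
    # Lowercase the statement's leading verb so it reads naturally mid-sentence.
    statement = success_statement[0].lower() + success_statement[1:]
    return (f"As a(n) {end_user} who is {job}, you're trying to {statement} "
            f"faster and more accurately so that you can successfully {job_step}.")

# Hypothetical slot values for a product-testing job:
print(test_fit(
    end_user="QA engineer",
    job="testing a new product",
    success_statement="Verify the product's functionality",
    job_step="validate the release candidate",
))
# -> As a(n) QA engineer who is testing a new product, you're trying to verify
#    the product's functionality faster and more accurately so that you can
#    successfully validate the release candidate.
```

If a statement reads awkwardly once slotted into the sentence, that is usually a sign it describes a task or activity rather than an outcome, and it is a candidate for rewording or removal.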