Success Metrics
There are two formatting options. The traditional desired outcome statement is the structure used in the Outcome-Driven Innovation (ODI) methodology. Many stakeholders, especially on marketing and UX teams, push back on desired outcome statements because people don't talk that way. The alternative is a natural-language structure that gets to the heart of the outcome while avoiding tasks and activities where feasible.
This catalog contains 20 potential metrics in each format. You will likely need to reduce this set for a survey; the number of statements generated is arbitrary and can be expanded to suit your needs.
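If you keep the catalog as a plain list, trimming it for a survey is mechanical. The sketch below is illustrative only, assuming Python and a random draw as a starting point; in practice you would curate the subset by relevance to your study rather than sample at random.

```python
import random

# Illustrative subset of the catalog; in practice, load all 20 statements.
statements = [
    "Identify potential issues with the product",
    "Gather comprehensive test data",
    "Simulate real-world usage scenarios",
    "Analyze test results for actionable insights",
]

# Draw a fixed-size random sample for a shorter survey instrument.
survey_items = random.sample(statements, k=3)
print(survey_items)
```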
Desired Outcome Statements (ODI)
- Minimize the time it takes to identify potential issues with the product, e.g., software bugs, hardware malfunctions, etc.
- Minimize the time it takes to gather comprehensive test data, e.g., user interactions, system performance, etc.
- Minimize the time it takes to simulate real-world usage scenarios, e.g., high traffic, multitasking, etc.
- Minimize the time it takes to analyze test results for actionable insights, e.g., error patterns, performance bottlenecks, etc.
- Minimize the time it takes to prioritize issues based on their impact, e.g., critical bugs, minor glitches, etc.
- Minimize the time it takes to communicate findings to the development team, e.g., bug reports, performance metrics, etc.
- Minimize the time it takes to verify fixes and improvements, e.g., retesting patched areas, evaluating performance enhancements, etc.
- Minimize the time it takes to update test cases and criteria, e.g., new features, changed functionalities, etc.
- Minimize the time it takes to ensure test coverage across all platforms and devices, e.g., mobile, desktop, browsers, etc.
- Minimize the time it takes to validate user experience and interface consistency, e.g., layout, navigation, responsiveness, etc.
- Minimize the time it takes to assess product compliance with regulatory standards, e.g., GDPR, CCPA, accessibility, etc.
- Minimize the time it takes to establish a baseline for product performance, e.g., load times, response times, etc.
- Minimize the time it takes to track improvements or regressions over time, e.g., version comparisons, benchmarking, etc.
- Minimize the time it takes to engage with end-users for beta testing feedback, e.g., surveys, interviews, usability tests, etc.
- Minimize the time it takes to integrate automated testing tools and frameworks, e.g., Selenium, Jenkins, etc.
- Minimize the time it takes to coordinate cross-functional testing efforts, e.g., with security, network, database teams, etc.
- Minimize the time it takes to document test plans and outcomes for future reference, e.g., test strategies, defect logs, etc.
- Minimize the time it takes to identify areas for testing efficiency improvements, e.g., automation, test case reuse, etc.
- Minimize the likelihood of overlooking critical bugs, e.g., security vulnerabilities, data loss scenarios, etc.
- Minimize the likelihood of user dissatisfaction due to unresolved issues, e.g., crashes, slow performance, etc.
Customer Success Statements (PJTBD)
- Identify potential issues with the product, e.g., software bugs, hardware malfunctions, etc.
- Gather comprehensive test data, e.g., user interactions, system performance, etc.
- Simulate real-world usage scenarios, e.g., high traffic, multitasking, etc.
- Analyze test results for actionable insights, e.g., error patterns, performance bottlenecks, etc.
- Prioritize issues based on their impact, e.g., critical bugs, minor glitches, etc.
- Communicate findings to the development team, e.g., bug reports, performance metrics, etc.
- Verify fixes and improvements, e.g., retesting patched areas, evaluating performance enhancements, etc.
- Update test cases and criteria, e.g., new features, changed functionalities, etc.
- Ensure test coverage across all platforms and devices, e.g., mobile, desktop, browsers, etc.
- Validate user experience and interface consistency, e.g., layout, navigation, responsiveness, etc.
- Assess product compliance with regulatory standards, e.g., GDPR, CCPA, accessibility, etc.
- Establish a baseline for product performance, e.g., load times, response times, etc.
- Track improvements or regressions over time, e.g., version comparisons, benchmarking, etc.
- Engage with end-users for beta testing feedback, e.g., surveys, interviews, usability tests, etc.
- Integrate automated testing tools and frameworks, e.g., Selenium, Jenkins, etc.
- Coordinate cross-functional testing efforts, e.g., with security, network, database teams, etc.
- Document test plans and outcomes for future reference, e.g., test strategies, defect logs, etc.
- Identify areas for testing efficiency improvements, e.g., automation, test case reuse, etc.
- Avoid overlooking critical bugs, e.g., security vulnerabilities, data loss scenarios, etc.
- Avoid user dissatisfaction due to unresolved issues, e.g., crashes, slow performance, etc.
Test Fit Structure
Apply this to Customer Success Statements only; the pieces should fit together as one natural sentence. Here's an article where I introduced the concept. The structure does not apply directly to Desired Outcome Statements, so feel free to devise your own version for that format.
As a(n) [end user] who is [Job], you're trying to [success statement] "faster and more accurately" so that you can successfully [Job Step].
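To sanity-check the fit, you can assemble the sentence programmatically. Below is a minimal sketch in Python; the end user, Job, and Job Step values are hypothetical placeholders, and the success statement is taken from the catalog above.

```python
def test_fit(end_user: str, job: str, success_statement: str, job_step: str) -> str:
    """Fill the test-fit template with the supplied components."""
    return (
        f"As a(n) {end_user} who is {job}, "
        f"you're trying to {success_statement} faster and more accurately "
        f"so that you can successfully {job_step}."
    )

# Hypothetical example values; the success statement comes from the catalog above.
print(test_fit(
    end_user="QA engineer",
    job="testing product quality",
    success_statement="identify potential issues with the product",
    job_step="verify fixes and improvements",
))
```

If the assembled sentence reads awkwardly, that is usually a sign the success statement has drifted toward a task or activity rather than an outcome.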