Success Metrics
There are two formatting options available. The traditional desired outcome statement is the structure used in the Outcome-Driven Innovation (ODI) methodology. Because many stakeholders, especially those on marketing or UX teams, push back on the awkward phrasing of desired outcome statements (people simply don't talk that way), the alternative is a natural-language structure that gets to the heart of the outcome while avoiding tasks and activities where feasible.
This catalog contains 20 potential metrics in each format. You will likely need to reduce this set for a survey; the number of statements generated is arbitrary and can be expanded to suit your needs.
Desired Outcome Statements (ODI)
- Minimize the time it takes to identify potential integration issues, e.g., compatibility problems, data inconsistencies, etc.
- Minimize the time it takes to verify the functionality of integrated solutions, e.g., data flow, system performance, etc.
- Minimize the time it takes to confirm the interoperability of different systems, e.g., data exchange, communication protocols, etc.
- Minimize the likelihood of overlooking critical integration tests, e.g., stress tests, load tests, etc.
- Minimize the time it takes to analyze test results for potential improvements, e.g., performance enhancements, bug fixes, etc.
- Minimize the likelihood of missing important test scenarios, e.g., edge cases, unusual user behaviors, etc.
- Minimize the time it takes to document test findings for future reference, e.g., test reports, bug tracking, etc.
- Minimize the likelihood of failing to address identified issues before deployment, e.g., unresolved bugs, performance issues, etc.
- Minimize the time it takes to communicate test results to stakeholders, e.g., project managers, developers, clients, etc.
- Minimize the likelihood of experiencing unexpected system behaviors after deployment, e.g., crashes, data loss, etc.
- Minimize the time it takes to plan for contingency measures in case of integration failures, e.g., rollback plans, backup strategies, etc.
- Minimize the likelihood of neglecting to consider user experience during testing, e.g., usability, accessibility, etc.
- Minimize the time it takes to retest solutions after making adjustments, e.g., bug fixes, performance tuning, etc.
- Minimize the likelihood of overlooking the need for system maintenance post-integration, e.g., updates, patches, etc.
- Minimize the time it takes to evaluate the scalability of integrated solutions, e.g., load capacity, growth potential, etc.
- Minimize the likelihood of failing to consider long-term sustainability of integrated solutions, e.g., tech support, updates, etc.
- Minimize the time it takes to assess the security implications of integrated solutions, e.g., data privacy, system vulnerabilities, etc.
- Minimize the likelihood of ignoring potential legal and compliance issues related to integration, e.g., data handling, software licensing, etc.
- Minimize the time it takes to prepare for potential system downtime during integration testing, e.g., backup operations, user notifications, etc.
- Minimize the likelihood of underestimating the complexity of integration testing, e.g., time requirements, resource allocation, etc.
Customer Success Statements (PJTBD)
- Identify potential integration issues, e.g., compatibility problems, data inconsistencies, etc.
- Verify the functionality of integrated solutions, e.g., data flow, system performance, etc.
- Confirm the interoperability of different systems, e.g., data exchange, communication protocols, etc.
- Avoid overlooking critical integration tests, e.g., stress tests, load tests, etc.
- Analyze test results for potential improvements, e.g., performance enhancements, bug fixes, etc.
- Avoid missing important test scenarios, e.g., edge cases, unusual user behaviors, etc.
- Document test findings for future reference, e.g., test reports, bug tracking, etc.
- Avoid failing to address identified issues before deployment, e.g., unresolved bugs, performance issues, etc.
- Communicate test results to stakeholders, e.g., project managers, developers, clients, etc.
- Avoid experiencing unexpected system behaviors after deployment, e.g., crashes, data loss, etc.
- Plan for contingency measures in case of integration failures, e.g., rollback plans, backup strategies, etc.
- Avoid neglecting to consider user experience during testing, e.g., usability, accessibility, etc.
- Retest solutions after making adjustments, e.g., bug fixes, performance tuning, etc.
- Avoid overlooking the need for system maintenance post-integration, e.g., updates, patches, etc.
- Evaluate the scalability of integrated solutions, e.g., load capacity, growth potential, etc.
- Avoid failing to consider long-term sustainability of integrated solutions, e.g., tech support, updates, etc.
- Assess the security implications of integrated solutions, e.g., data privacy, system vulnerabilities, etc.
- Avoid ignoring potential legal and compliance issues related to integration, e.g., data handling, software licensing, etc.
- Prepare for potential system downtime during integration testing, e.g., backup operations, user notifications, etc.
- Avoid underestimating the complexity of integration testing, e.g., time requirements, resource allocation, etc.
Test Fit Structure
Apply this to Customer Success Statements only; when the statements are well formed, the pieces fit together naturally. Here's an article where I introduced the concept. Feel free to devise your own version for Desired Outcome Statements, as this structure does not apply directly to their format.
As a(n) [end user] + who is + [Job] + you're trying to + [success statement] + "faster and more accurately" + so that you can successfully + [Job Step]
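If you are assembling many of these sentences, for example to pre-fill a survey draft, a small script can help you check that each Customer Success Statement reads naturally inside the structure. The sketch below is a minimal illustration only; the function name and the example values for the end user, job, and job step are hypothetical placeholders, not part of the catalog.

```python
def test_fit(end_user: str, job: str, success_statement: str, job_step: str) -> str:
    """Assemble a Test Fit sentence from its four components."""
    return (
        f"As a(n) {end_user} who is {job}, "
        f"you're trying to {success_statement} faster and more accurately "
        f"so that you can successfully {job_step}."
    )


# Hypothetical example values (placeholders, not research data from the catalog):
print(test_fit(
    end_user="QA engineer",
    job="integrating a new solution into an existing system",
    success_statement="identify potential integration issues, e.g., compatibility problems",
    job_step="test the integration",
))
```

Reading the generated sentence aloud is a quick way to spot success statements that still describe a task or activity rather than an outcome.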