Success Metrics
There are two formatting options available. The traditional desired outcome statement is the structure used in the Outcome-Driven Innovation (ODI) methodology. Many stakeholders, especially on marketing or UX teams, push back on the awkward phrasing of desired outcome statements because people don't talk that way. The alternative is a natural-language structure that gets to the heart of the outcome while avoiding tasks and activities where feasible.
This catalog contains 20 potential metrics in each formatting option. You will likely need to reduce this set for a survey. The number of statements generated is arbitrary and can be expanded to accommodate your needs.
Desired Outcome Statements (ODI)
- Minimize the time it takes to recognize symptoms indicating a malfunction, e.g., unusual noises, error messages, unexpected shutdowns, etc.
- Minimize the time it takes to differentiate between hardware and software issues, e.g., physical damage, software crashes, etc.
- Minimize the time it takes to consult documentation for troubleshooting steps, e.g., user manuals, online forums, help articles, etc.
- Minimize the time it takes to gather relevant product information, e.g., model numbers, serial numbers, warranty status, etc.
- Minimize the time it takes to determine if the issue is recurring or isolated, e.g., frequency of occurrence, patterns in malfunctions, etc.
- Minimize the time it takes to assess the severity of the problem, e.g., minor inconvenience, major malfunction, complete failure, etc.
- Minimize the time it takes to identify potential causes of the problem, e.g., user error, software bug, hardware failure, etc.
- Minimize the time it takes to prioritize issues based on impact and urgency, e.g., critical operations affected, safety concerns, etc.
- Minimize the time it takes to verify the problem through testing or replication, e.g., diagnostic tests, reproducing error conditions, etc.
- Minimize the time it takes to log incidents for future reference, e.g., creating support tickets, maintaining a problem log, etc.
- Minimize the time it takes to communicate the issue to relevant parties, e.g., support teams, product manufacturers, IT department, etc.
- Minimize the time it takes to search for known solutions or workarounds, e.g., patches, updates, temporary fixes, etc.
- Minimize the time it takes to evaluate the need for professional assistance, e.g., technical support, repair services, etc.
- Minimize the time it takes to determine the impact on related systems or products, e.g., network connectivity, integrated devices, etc.
- Minimize the time it takes to estimate the downtime or out-of-service period, e.g., repair time, replacement delivery, etc.
- Minimize the time it takes to assess warranty or service contract coverage, e.g., eligibility for free repair, service fees, etc.
- Minimize the time it takes to decide on a course of action for resolution, e.g., self-repair, professional repair, replacement, etc.
- Minimize the time it takes to prepare for a potential data loss scenario, e.g., backing up data, documenting configurations, etc.
- Minimize the time it takes to identify safety precautions before attempting repairs, e.g., disconnecting power, using protective gear, etc.
- Minimize the time it takes to understand the implications of the problem on future use, e.g., reduced functionality, shorter lifespan, etc.
Customer Success Statements (PJTBD)
- Recognize symptoms indicating a malfunction, e.g., unusual noises, error messages, unexpected shutdowns, etc.
- Differentiate between hardware and software issues, e.g., physical damage, software crashes, etc.
- Consult documentation for troubleshooting steps, e.g., user manuals, online forums, help articles, etc.
- Gather relevant product information, e.g., model numbers, serial numbers, warranty status, etc.
- Determine if the issue is recurring or isolated, e.g., frequency of occurrence, patterns in malfunctions, etc.
- Assess the severity of the problem, e.g., minor inconvenience, major malfunction, complete failure, etc.
- Identify potential causes of the problem, e.g., user error, software bug, hardware failure, etc.
- Prioritize issues based on impact and urgency, e.g., critical operations affected, safety concerns, etc.
- Verify the problem through testing or replication, e.g., diagnostic tests, reproducing error conditions, etc.
- Log incidents for future reference, e.g., creating support tickets, maintaining a problem log, etc.
- Communicate the issue to relevant parties, e.g., support teams, product manufacturers, IT department, etc.
- Search for known solutions or workarounds, e.g., patches, updates, temporary fixes, etc.
- Evaluate the need for professional assistance, e.g., technical support, repair services, etc.
- Determine the impact on related systems or products, e.g., network connectivity, integrated devices, etc.
- Estimate the downtime or out-of-service period, e.g., repair time, replacement delivery, etc.
- Assess warranty or service contract coverage, e.g., eligibility for free repair, service fees, etc.
- Decide on a course of action for resolution, e.g., self-repair, professional repair, replacement, etc.
- Prepare for a potential data loss scenario, e.g., backing up data, documenting configurations, etc.
- Identify safety precautions before attempting repairs, e.g., disconnecting power, using protective gear, etc.
- Understand the implications of the problem on future use, e.g., reduced functionality, shorter lifespan, etc.
Test Fit Structure
Apply this test to Customer Success Statements only; everything should fit together naturally. Here's an article where I introduced the concept. Feel free to devise your own version for Desired Outcome Statements, as this structure does not apply directly to their format.
As a(n) [end user] + who is + [Job] + you're trying to + [success statement] + "faster and more accurately" + so that you can successfully + [Job Step]