Success Metrics
There are two formatting options available. The traditional desired outcome statement is a structure used in the Outcome-Driven Innovation (ODI) methodology. Many stakeholders, especially on marketing or UX teams, push back on desired outcome statements because people don't talk that way. The alternative is a natural-language structure that gets to the heart of the outcome while avoiding tasks and activities where feasible.
This catalog contains 20 potential metrics in each format. You will likely need to reduce this set for a survey; the number of statements generated here is arbitrary and can be expanded to suit your needs.
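If you follow the ODI methodology, survey respondents typically rate each statement for importance and satisfaction, and the results are ranked with the opportunity algorithm: opportunity = importance + max(importance − satisfaction, 0). Below is a minimal sketch of that ranking, assuming ratings normalized to a 0–10 scale; the statements are drawn from the catalog, but every number is an illustrative placeholder, not survey data.

```python
# Minimal sketch: rank candidate metrics with the ODI opportunity score.
# Assumes importance and satisfaction have been surveyed on a 0-10 scale.
# All numbers below are illustrative placeholders, not survey results.

def opportunity(importance: float, satisfaction: float) -> float:
    """Opportunity algorithm: importance plus the unmet-need gap.
    The gap is clamped at zero so overserved outcomes are not penalized."""
    return importance + max(importance - satisfaction, 0)

candidates = {
    "Detect irregularities in system performance": (8.2, 4.1),
    "Monitor system diagnostics": (7.5, 6.8),
    "Track wear levels of consumable components": (6.9, 5.0),
}

ranked = sorted(
    candidates.items(),
    key=lambda kv: opportunity(*kv[1]),
    reverse=True,
)

for statement, (imp, sat) in ranked:
    print(f"{opportunity(imp, sat):4.1f}  {statement}")
```

Statements with the highest opportunity scores are the strongest candidates to keep when trimming the catalog down to survey length.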
Desired Outcome Statements (ODI)
- Minimize the time it takes to detect irregularities in system performance, e.g., unusual noises, temperature fluctuations, etc.
- Minimize the time it takes to recognize signs of wear and tear, e.g., corrosion, abrasions, etc.
- Minimize the time it takes to notice operational inefficiencies, e.g., increased energy consumption, slow processing, etc.
- Minimize the time it takes to identify safety hazards, e.g., exposed wires, leakages, etc.
- Minimize the time it takes to spot discrepancies in output quality, e.g., inconsistency in product dimensions, surface finish, etc.
- Minimize the time it takes to observe changes in user interaction, e.g., increased error messages, difficulty in operation, etc.
- Minimize the time it takes to monitor system diagnostics, e.g., error codes, warning lights, etc.
- Minimize the time it takes to track wear levels of consumable components, e.g., filters, lubricants, etc.
- Minimize the time it takes to evaluate the condition of mechanical parts, e.g., gears, bearings, etc.
- Minimize the time it takes to assess environmental factors affecting performance, e.g., humidity, temperature, dust levels, etc.
- Minimize the time it takes to determine the need for software updates or patches, e.g., outdated security features, compatibility issues, etc.
- Minimize the time it takes to identify anomalies in data logs or records, e.g., unexpected patterns, data breaches, etc.
- Minimize the time it takes to detect network connectivity issues, e.g., intermittent connections, slow data transfer, etc.
- Minimize the time it takes to recognize changes in customer feedback or complaints, e.g., frequent breakdowns, reduced satisfaction, etc.
- Minimize the time it takes to notice deviations from standard operating procedures, e.g., skipped steps, incorrect settings, etc.
- Minimize the likelihood of missing critical updates from manufacturers, e.g., recall notices, new guidelines, etc.
- Minimize the likelihood of overlooking necessary regulatory compliance checks, e.g., safety standards, environmental regulations, etc.
- Minimize the likelihood of ignoring signs of cyber threats or vulnerabilities, e.g., unusual network activity, unauthorized access attempts, etc.
- Minimize the likelihood of failing to account for user error or misuse, e.g., incorrect operation, overloading, etc.
- Minimize the likelihood of neglecting periodic maintenance schedules, e.g., calibration, cleaning, replacement cycles, etc.
Customer Success Statements (PJTBD)
- Detect irregularities in system performance, e.g., unusual noises, temperature fluctuations, etc.
- Recognize signs of wear and tear, e.g., corrosion, abrasions, etc.
- Notice operational inefficiencies, e.g., increased energy consumption, slow processing, etc.
- Identify safety hazards, e.g., exposed wires, leakages, etc.
- Spot discrepancies in output quality, e.g., inconsistency in product dimensions, surface finish, etc.
- Observe changes in user interaction, e.g., increased error messages, difficulty in operation, etc.
- Monitor system diagnostics, e.g., error codes, warning lights, etc.
- Track wear levels of consumable components, e.g., filters, lubricants, etc.
- Evaluate the condition of mechanical parts, e.g., gears, bearings, etc.
- Assess environmental factors affecting performance, e.g., humidity, temperature, dust levels, etc.
- Determine the need for software updates or patches, e.g., outdated security features, compatibility issues, etc.
- Identify anomalies in data logs or records, e.g., unexpected patterns, data breaches, etc.
- Detect network connectivity issues, e.g., intermittent connections, slow data transfer, etc.
- Recognize changes in customer feedback or complaints, e.g., frequent breakdowns, reduced satisfaction, etc.
- Notice deviations from standard operating procedures, e.g., skipped steps, incorrect settings, etc.
- Avoid missing critical updates from manufacturers, e.g., recall notices, new guidelines, etc.
- Avoid overlooking necessary regulatory compliance checks, e.g., safety standards, environmental regulations, etc.
- Avoid ignoring signs of cyber threats or vulnerabilities, e.g., unusual network activity, unauthorized access attempts, etc.
- Avoid failing to account for user error or misuse, e.g., incorrect operation, overloading, etc.
- Avoid neglecting periodic maintenance schedules, e.g., calibration, cleaning, replacement cycles, etc.
Test Fit Structure
Apply this to Customer Success Statements only; if a statement is well formed, the assembled sentence should read naturally. (I introduced this concept in a separate article.) Feel free to devise your own version for Desired Outcome Statements, as the template does not apply directly to their format.
As a(n) [end user] + who is + [Job] + you're trying to + [success statement] + "faster and more accurately" + so that you can successfully + [Job Step]
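To make the fit test concrete, here is a minimal sketch that assembles the template from its parts. The success statement comes from the catalog above; the end user, job, and job step are illustrative assumptions, not prescribed values.

```python
# Minimal sketch: assemble a test-fit sentence from the template parts.
# The end user, job, and job step are illustrative assumptions; the
# success statement is taken from the catalog above.

def test_fit(end_user: str, job: str, success_statement: str, job_step: str) -> str:
    return (
        f"As a(n) {end_user} who is {job}, "
        f"you're trying to {success_statement.lower()} "
        f'"faster and more accurately" '
        f"so that you can successfully {job_step}."
    )

print(test_fit(
    end_user="maintenance technician",           # assumption
    job="keeping production equipment running",  # assumption
    success_statement="Detect irregularities in system performance",
    job_step="prevent unplanned downtime",       # assumption
))
```

If the assembled sentence reads awkwardly, that is usually a sign the statement describes a task or activity rather than an outcome.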