Chapter 11: Translating Strategy into Execution

In the previous chapter, we looked at the limitations of the Growth Strategy Matrix. We established that while ODI offers one approach to identifying and quantifying customer needs, it is just one input. A matrix on a slide might tell you which market segment is underserved, but it does not tell you how to navigate technical debt, internal politics, or the conflicting priorities of a product roadmap.

This brings us to a common challenge in the innovation process.

The problem is rarely a shortage of data. The problem is the artifact gap. The output of a JTBD research project is usually a complex spreadsheet or a dense report. The input required by a product team, however, is a backlog of user stories, a set of technical constraints, a design brief, and alignment with senior stakeholders.

If you simply hand the spreadsheet to a product manager or technical team, it will likely be ignored. It is not in a format they can use. To make the data useful, you have to translate the abstract needs/outcomes into the tactical artifacts that drive daily work.

We do this by applying three core principles: Contextualization, Triangulation, and Operationalization.

Principle 1: Contextualize

The first barrier to execution is usually the organization's existing mental models. Most companies have already invested in customer segmentation. They have personas, verticals, buyer types, or account tiers. These frameworks are not necessarily wrong or poorly constructed. Many organizations have thoughtful, well-researched personas that serve their intended purpose effectively.

The problem is not quality. The problem is that these artifacts have become part of the organization's shared language. Business leaders reference them in meetings. Sales teams use them to structure their territories. Marketing builds campaigns around them. When people talk about "the customer," they are often picturing a specific persona they have internalized over years of use.

When you finish a full JTBD research project, you will have a set of needs-based segments. A common error is to walk into a meeting and present these as the new, correct way to understand your customers, implying that the existing frameworks should be replaced. This creates immediate resistance.

The resistance is not usually about the validity of your research. It is about introducing new terminology into a system that already has a working vocabulary. You are asking people to unlearn how they talk about customers and adopt a different framework. Even if your segments are more precise, the organizational cost of this switch can be prohibitive. Instead, apply the Principle of Contextualization. Do not replace their existing frameworks. Overlay your insights onto them.

Mapping Segments to Personas

The key insight is that different frameworks serve different purposes. Demographics and personas help teams find and communicate with customers. Needs-based segments help teams build products that serve those customers well. These are complementary lenses, not competing truths. Your job is to map your new insights onto the existing vocabulary, adding resolution where it is needed without discarding what already works.

Consider a team that has a "Small Business Owner" persona. They treat this group as a single segment. But your research reveals that half this group wants advanced features and customization while the other half wants simplicity and guidance. The product team is currently building a compromise product that is too complex for one group and too simple for the other. Rather than arguing about personas, you explain that "Small Business Owner" remains the correct marketing bucket. But inside the product, there are two distinct modes. You introduce "The Stabilizer" (who wants automation and simplicity) and "The Scaler" (who wants control and flexibility). The marketing team keeps their targeting. The product team creates a "Simple Mode" and an "Advanced Mode" within the software. Both frameworks coexist because they serve different purposes.

The overlay can also work in the opposite direction, revealing similarities beneath apparent differences. Consider a sales team that treats "Healthcare Administrators" and "Financial Auditors" as completely different verticals. They have different sales decks and different feature requests. Engineering is being asked to build two separate reporting tools, which splits resources.

Your research shows that both groups share the exact same underlying needs around risk management. They both need audit trails, permissioning, and rollback capabilities. You validate a platform strategy. You build one Compliance Engine that serves both verticals, with only the front-end terminology changing. The sales team keeps their vertical positioning. Engineering builds one solution instead of two.

In both cases, you have not asked anyone to abandon their existing framework. You have added a layer of insight that makes their framework more effective.

A Practical Tip for Data Analysis

As we discussed in Chapter 9, there are many ways to slice segmentation data. If the statistical validity holds up and the cluster analysis allows for it, try to align the number of your needs-based segments with the organization's existing structure.

For example, if your sales team is already organized into four vertical industries, and your data shows four distinct clusters of needs that roughly align with those verticals, use that four-cluster solution. It lowers the friction of adoption.

However, do not force this if the math does not work. Needs-based segments are solution-agnostic. They cover the entire market, not just the people currently buying your product. Because of this, it is common to uncover more complexity than the organization currently recognizes. You might find six distinct needs clusters even if the marketing team only uses three personas. Do not oversimplify the data to make it fit, but do seek alignment where the statistics allow.
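If you have individual-level data, you can check whether the alignment holds up before committing to it. The sketch below is a minimal illustration, assuming respondent-level utility scores exported to a CSV (one row per respondent, one column per need); the file name and the silhouette criterion are illustrative choices, not a prescribed workflow.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical input: one row per survey respondent, one column per need,
# holding individual-level utility or importance scores.
utilities = np.loadtxt("respondent_utilities.csv", delimiter=",")

# Compare cluster solutions across a range of k values instead of forcing
# the count to match the org chart up front.
for k in range(3, 8):
    model = KMeans(n_clusters=k, n_init=10, random_state=42)
    labels = model.fit_predict(utilities)
    print(f"k={k}: silhouette={silhouette_score(utilities, labels):.3f}")

# If a four-cluster solution scores close to the statistical best, aligning
# with the four existing sales verticals is a defensible, low-friction choice.
# If six clusters are clearly better, report six.
```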

Principle 2: Triangulate

Stakeholders are naturally skeptical. If you rely on a single chart to justify a roadmap change, especially one that contradicts leadership's intuition, you are likely to hit resistance. Quantitative data alone feels abstract. It tells you what ranked highest without explaining why it matters to real people trying to get real jobs done.

To turn a data point into a decision, you need to triangulate. This means cross-referencing your findings with other evidence sources to build a reliable case. Think of it as constructing a three-legged stool where each leg draws from a different aspect of your research.

Leg 1: The Forced Trade-Off Evidence

Start with your MaxDiff data. This is your strongest quantitative foundation because it reflects genuine prioritization, not inflated ratings.

Frame it explicitly as a trade-off finding. You might explain to stakeholders that you surveyed a large sample of users and forced them to make hard choices between competing needs. You did not ask them to rate everything highly. You made them choose. When forced to decide, customers consistently prioritized certain needs over others. This was not a mild preference. It was a clear hierarchy.

This framing matters because it addresses the "everyone says everything is important" objection. You have evidence of what customers chose when they could not have everything.

Leg 2: The Job Step Validation

Next, validate that the stated preference matches actual behavior during job execution. This is where you return to the job map you created during qualitative research.

If a particular need ranked highly, look at what happens during that job step. Pull product analytics for the relevant workflows. Check usage patterns, error rates, and time-on-task data. Review support tickets tagged to that area.

You can also validate against the job steps you mapped in qualitative research. If your interview participants described a struggle during a particular step, and your quantitative data shows that same step contains the highest-ranked unmet needs, you have convergent evidence. The qualitative told you where the pain was. The quantitative told you how widespread it is. The behavioral data confirms it is real.

Leg 3: The Emotional and Social Job Context

Finally, provide the human context that makes the numbers meaningful. This is where you draw from the emotional and social jobs you uncovered during discovery.

Remember that functional jobs rarely exist in isolation. In Chapter 2, we discussed how functional jobs are accompanied by emotional jobs (how customers want to feel) and social jobs (how customers want to be perceived). Your triangulation should reconnect the prioritized functional needs to these emotional and social dimensions.

Pull quotes from your qualitative interviews that reveal the emotional stakes. A user describing anxiety, frustration, or fear is not just reporting a functional problem. They are revealing the emotional weight behind the numbers. This context explains why certain needs rank so highly and shapes how you should address them.

You can also layer in market context. If competitors are launching campaigns around the same themes, or if industry analysts are highlighting similar priorities, it validates that your findings reflect a broader market shift rather than just your sample's idiosyncrasies.

The Combined Effect

When you present all three legs together, you transform the conversation. You are no longer saying "I think we should prioritize this." You are presenting a factual case: a large sample ranked it first when forced to choose, product logs confirm the behavior is real, qualitative interviews reveal the emotional stakes, and the competitive landscape is moving in the same direction. The decision becomes easier to support.

This triangulation approach also protects you from the limitations of any single method. If your quantitative methodology has weaknesses, the behavioral and qualitative evidence provides a check. If your qualitative sample was small, the quantitative rankings provide scale. Each leg compensates for the others' blind spots.

Principle 3: Operationalize

The final principle is tactical. A major reason many initiatives fail is that they live in slide decks while the actual work happens elsewhere. Engineers live in Jira. Designers live in Figma. Product managers live in requirement documents. If your strategy does not translate into those tools, it will not get built.

You need to embed your strategy into the artifacts your teams already use. This means adapting your language to fit their existing workflows rather than forcing them to learn a new methodology.

From Feature Stories to Outcome Stories

The key translation is converting JTBD insights into user stories that engineering teams can act on. User stories are the standard format in most agile organizations, so learning to express JTBD findings in this format ensures your research actually influences what gets built.

User stories in agile development follow a standard format [44]:

As a [user type], I want [goal/desire] so that [benefit/outcome]

The opportunity is in how you fill in this template. A feature-focused story puts a specific solution in the "I want" slot: "As a user, I want a search bar so that I can find content." This assumes a search bar is the right approach and frames success as the feature's existence.

An outcome-focused story puts the customer's desired situation in the "I want" slot and a measurable result in the "so that" slot. This leaves room for the team to determine the best solution while keeping everyone aligned on what success looks like.

A Five-Step Transformation Process

Here is how to translate a need statement into an outcome-focused user story.

Example 1: DevOps SRE Monitoring

Step 1: Start with the need statement. Using the syntax from Chapter 6, the underlying need might be: Minimize the time it takes to determine whether a third-party API degradation is affecting customer transactions.

Step 2: Identify the user and their context. The job executor is an SRE (Site Reliability Engineer). Their context is responding to potential incidents during on-call rotations.

Step 3: Reframe the need as a desired situation. What situation does the SRE want to be in? They want to immediately understand impact, not just see data. They want to make confident decisions, not just monitor dashboards.

Step 4: Connect to the business outcome. Why does this matter? Triggering unnecessary incident responses wastes team resources and creates alert fatigue. Missing real incidents harms customers and revenue.

Step 5: Write the outcome-focused user story.

As an SRE on call, I want to see a direct correlation between external API latency spikes and our checkout conversion rate so that I only trigger an incident response when revenue is actually at risk.

Compare this to the feature-focused version: "As an SRE, I want a dashboard showing third-party API status so that I can monitor external dependencies." The feature-focused story prescribes a solution (a dashboard) and defines success as visibility (monitoring). The outcome-focused story describes the insight the SRE needs and defines success as decision quality—triggering responses only when appropriate.

Example 2: Professional Services Firm

Let's apply the same process to Clarify, a professional services platform we will return to in the worked example later in this chapter.

Step 1: Start with the need statement. Minimize the time it takes to identify at-risk engagements before they become client-reported issues.

Step 2: Identify the user and their context. The job executor is a Partner at a professional services firm. Their context is managing a portfolio of client engagements while handling business development responsibilities.

Step 3: Reframe the need as a desired situation. Partners do not want to check dashboards. They want to be alerted proactively. They want to intervene before clients notice problems, not after.

Step 4: Connect to the business outcome. Client-reported issues damage relationships, hurt referrals, and create reactive firefighting that consumes partner time.

Step 5: Write the outcome-focused user story.

As a partner, I want to be automatically alerted when an engagement shows early warning signs of trouble, before the client notices anything is wrong, so that I can intervene proactively rather than manage crises.

Side-by-Side Comparison

| Component | Feature-Focused Story | Outcome-Focused Story |
| --- | --- | --- |
| User | As an SRE | As an SRE on call |
| Want | I want a dashboard showing third-party API status | I want to see a direct correlation between external API latency and checkout conversion |
| So that | so that I can monitor external dependencies | so that I only trigger incident response when revenue is actually at risk |
| Implied solution | Dashboard with API status indicators | Open: could be a dashboard, alert system, or correlation engine |
| Success criteria | Dashboard exists and shows data | Incident responses correlate with actual revenue impact |

Redefining "Done"

Outcome stories also change how you define success [46]. Instead of "the feature exists and works," the definition of done becomes a measurable outcome tied to the original need statement.

For the DevOps example, the definition of done might be: "The system alerts the on-call engineer specifically when a third-party error rate causes a greater than 5% drop in completed transactions."

For the Clarify example: "Partners receive alerts for at-risk engagements at least 48 hours before any client-reported issue, with a false positive rate below 20%."

These definitions are measurable, tied to business outcomes, and focused on the job rather than the feature. The team can now evaluate different technical approaches against these success criteria rather than debating implementation details in the abstract.
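To make this concrete, here is a minimal sketch of how the DevOps definition of done could be expressed as an executable alert rule. The function, its parameters, and the example figures are hypothetical; only the 5% threshold comes from the success criterion above.

```python
def should_page_on_call(error_rate: float,
                        baseline_conversion: float,
                        current_conversion: float,
                        drop_threshold: float = 0.05) -> bool:
    """Return True only when third-party errors coincide with a material
    drop in completed transactions, per the definition of done above."""
    if baseline_conversion <= 0:
        return False
    conversion_drop = (baseline_conversion - current_conversion) / baseline_conversion
    # Fire only when revenue is actually at risk, not whenever an external
    # dependency merely looks unhealthy.
    return error_rate > 0 and conversion_drop > drop_threshold

# Illustrative check: third-party errors present, checkout conversion down
# from 3.1% to 2.8% (roughly a 10% relative drop) -> page the on-call engineer.
print(should_page_on_call(error_rate=0.02,
                          baseline_conversion=0.031,
                          current_conversion=0.028))  # True
```

The point is that "done" becomes something the team can evaluate against data, not a checkbox that a dashboard exists.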

Working with Design Teams

For design teams, the challenge is slightly different. Designers need context about the user's environment, emotional state, and constraints during the moment of struggle. A ranked list of needs does not provide this.

The solution is to move beyond handing off data tables and instead provide contextual narratives. Use your qualitative research to paint a picture of the situation. When does this job step happen? What is the user's mental state? What has just happened before, and what needs to happen after? What are they afraid of?

For example, do not just tell the design team that "Exporting Data" is a high-priority unmet need. Use your interview notes to explain the context. Users typically perform this action at the end of a long work session when they are tired and anxious to finish. They have spent hours on the document and are terrified of losing their work. The export often happens right before a deadline.

This narrative changes how the team designs the solution. If they know the user is fatigued and time-pressured, they will not bury the export function inside multiple menu layers. They will make it prominent and foolproof. The context shapes the solution in ways the ranked data alone cannot.

Putting It Together: A Worked Example

Let me walk through how these principles combine in practice. This example uses a fictional company but draws on patterns common across real engagements.

The Company and Their Challenge

Clarify is a B2B SaaS platform that helps professional services firms manage client engagements. Their core product handles project tracking, time capture, document management, and client communication. Their customers include accounting firms, consultants, and financial advisors.

The product team was stuck in a familiar pattern. Every quarter, the roadmap discussion devolved into competing priorities. The sales team pushed for deeper integrations with accounting software because prospects kept asking about it. Customer success advocated for better onboarding flows because new users struggled in their first thirty days. Engineering wanted to rebuild the notification system because the current architecture created technical debt. The CEO had just returned from a conference convinced that AI-powered insights were the future of the industry.

Everyone had examples. The sales team could point to three lost deals where integration gaps were cited. Customer success had churn data showing first-month drop-off. Engineering had incident reports tied to notification failures. The CEO had competitor announcements to reference.

What nobody had was a systematic understanding of what customers actually prioritized when forced to choose. The team decided to run a JTBD study to break the deadlock.

Defining the Scope

Before designing the MaxDiff study, the team needed to define what they were researching. Following the principles from earlier chapters, they started by articulating the core functional job.

After reviewing support tickets and sales call recordings and conducting eight preliminary interviews, they landed on: "Manage client engagements from initiation through completion while maintaining profitability and client satisfaction."

This job was broad enough to capture the full scope of what customers hired Clarify to do, but specific enough to exclude adjacent jobs like "win new clients" or "manage internal firm operations" that were not central to the product's value proposition.

They then mapped the key job steps:

  1. Scope the engagement and set expectations
  2. Assign team members and allocate resources
  3. Track progress against milestones and budget
  4. Capture time and expenses accurately
  5. Communicate status to clients and internal stakeholders
  6. Identify and resolve issues before they escalate
  7. Complete deliverables and hand off to the client
  8. Invoice and close the engagement

From qualitative interviews and internal data review, they generated 43 initial need statements across these job steps. After consolidating duplicates and removing statements that were too solution-specific, they narrowed the list to 20 needs for the MaxDiff study.

The 20 Needs Tested

Here are the need statements they included:

  1. Quickly identify scope changes before they impact profitability
  2. Ensure all team members understand their responsibilities from day one
  3. Minimize time spent figuring out who is available for new assignments
  4. Avoid assigning team members who lack required expertise
  5. Know immediately when a project falls behind schedule
  6. See accurate profitability status without manual calculations
  7. Identify which tasks are blocking overall progress
  8. Reduce time spent chasing team members for missing time entries
  9. Catch billing errors before they reach the client
  10. Update clients on status without manually compiling reports
  11. Ensure internal stakeholders see issues before clients raise them
  12. Reduce time spent in status meetings
  13. Identify at-risk engagements before they become emergencies
  14. Quickly determine the root cause when something goes wrong
  15. Ensure nothing falls through the cracks during final delivery
  16. Avoid last-minute scrambles to locate documents for the client
  17. Generate accurate invoices without reconciliation delays
  18. Quickly identify which completed work has not been billed
  19. Access engagement information from anywhere without VPN hassles
  20. Trust that client data remains secure and compliant

Study Design

The team chose the combined framing approach for their MaxDiff question. Rather than asking about importance or satisfaction separately, they asked:

"When managing client engagements, which of these unmet needs would make the biggest difference to your firm if solved?"

This framing captured both dimensions in a single question. The need had to matter (importance) and not already be solved (satisfaction gap) to rank highly.

They configured the study with 5 items per choice set and 12 sets per respondent. With 20 total items, this design ensured each need appeared multiple times for each respondent and generated sufficient data for reliable estimation.
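For illustration, here is a minimal sketch of how a per-respondent design with these parameters could be generated. It only balances how often each item appears for a single respondent; commercial MaxDiff tools additionally balance which items appear together and do so across the whole sample, so treat this as a structural illustration rather than a production design.

```python
import random

NUM_ITEMS = 20          # need statements
ITEMS_PER_SET = 5       # items shown per choice task
SETS_PER_RESPONDENT = 12

def build_choice_sets(seed: int) -> list[list[int]]:
    """Per-respondent design in which each item appears exactly
    (12 * 5) / 20 = 3 times. Because 20 is divisible by 5, chunking three
    shuffled copies of the item list guarantees no repeats within a set."""
    rng = random.Random(seed)
    sequence: list[int] = []
    for _ in range(SETS_PER_RESPONDENT * ITEMS_PER_SET // NUM_ITEMS):
        block = list(range(NUM_ITEMS))
        rng.shuffle(block)
        sequence.extend(block)
    return [sequence[i:i + ITEMS_PER_SET]
            for i in range(0, len(sequence), ITEMS_PER_SET)]

for task_number, choice_set in enumerate(build_choice_sets(seed=7), start=1):
    print(f"task {task_number}: items {sorted(choice_set)}")
```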

They recruited 340 respondents through their customer database, targeting engagement managers and partners at firms with 10 to 200 employees. They offered a $50 gift card incentive and achieved a 72% completion rate among those who started the survey.

The MaxDiff Results

Figure: MaxDiff utility scores for the 20 project needs.

After running hierarchical Bayes estimation, they produced utility scores rescaled to sum to 100 across all items. Here is what they found:

| Rank | Need | Utility Score |
| --- | --- | --- |
| 1 | Identify at-risk engagements before they become emergencies | 9.2 |
| 2 | Know immediately when a project falls behind schedule | 8.7 |
| 3 | Reduce time spent chasing team members for missing time entries | 8.1 |
| 4 | Quickly identify scope changes before they impact profitability | 7.8 |
| 5 | See accurate profitability status without manual calculations | 7.4 |
| 6 | Ensure nothing falls through the cracks during final delivery | 6.9 |
| 7 | Catch billing errors before they reach the client | 6.3 |
| 8 | Identify which tasks are blocking overall progress | 5.8 |
| 9 | Quickly identify which completed work has not been billed | 5.4 |
| 10 | Ensure internal stakeholders see issues before clients raise them | 4.9 |
| 11 | Avoid last-minute scrambles to locate documents for the client | 4.3 |
| 12 | Quickly determine the root cause when something goes wrong | 3.8 |
| 13 | Update clients on status without manually compiling reports | 3.5 |
| 14 | Minimize time spent figuring out who is available for new assignments | 3.2 |
| 15 | Ensure all team members understand their responsibilities from day one | 2.9 |
| 16 | Generate accurate invoices without reconciliation delays | 2.6 |
| 17 | Reduce time spent in status meetings | 2.3 |
| 18 | Avoid assigning team members who lack required expertise | 1.9 |
| 19 | Trust that client data remains secure and compliant | 0.8 |
| 20 | Access engagement information from anywhere without VPN hassles | 0.5 |
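You do not need hierarchical Bayes software to get a directional read on data like this. A simple best-minus-worst counting score, rescaled so the values sum to 100, is a common sanity check. The sketch below uses a hypothetical response format and an illustrative rescaling (shift to non-negative, then normalize); it will not exactly reproduce HB utilities.

```python
from collections import Counter

# Hypothetical response records: one per completed choice task, listing the
# item indices shown plus the picks for "biggest difference" (best) and
# "smallest difference" (worst).
responses = [
    {"shown": [0, 4, 7, 12, 19], "best": 12, "worst": 19},
    {"shown": [2, 4, 9, 12, 15], "best": 4, "worst": 15},
    # ... in practice, 340 respondents x 12 tasks each ...
]

best, worst, shown = Counter(), Counter(), Counter()
for task in responses:
    shown.update(task["shown"])
    best[task["best"]] += 1
    worst[task["worst"]] += 1

# Best-minus-worst counts, normalized by how often each item was shown.
raw = {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Shift so the minimum is zero, then rescale so the scores sum to 100,
# mirroring the presentation in the table above.
floor = min(raw.values())
shifted = {item: score - floor for item, score in raw.items()}
total = sum(shifted.values())
scores = {item: 100 * value / total for item, value in shifted.items()}

for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"item {item}: {score:.1f}")
```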

Reading the Results: What the Hierarchy Reveals

The first insight was what rose to the top. The highest-ranked needs clustered around a single theme: early warning and visibility into problems. Customers wanted to know when engagements were going off track before the situation became critical. "Identify at-risk engagements," "know immediately when a project falls behind," and "identify scope changes before they impact profitability" all ranked in the top four.

The second insight was what did not rank highly. Notice where integrations landed. They did not appear in the top half. The sales team had been pushing for deeper accounting software integrations, but "access engagement information from anywhere" ranked last. "Generate accurate invoices without reconciliation delays" ranked sixteenth.

This did not mean integrations were worthless. It meant that when forced to choose, customers prioritized early warning systems over data connectivity. They would rather know about problems sooner than have smoother data flows.

The third insight was the natural break points. There was a meaningful gap between the top cluster (ranks 1 through 5, all above 7.4) and the middle tier (ranks 6 through 10, between 4.9 and 6.9). These natural breaks suggested where to draw priority lines.

Security and remote access ranked at the bottom. This initially surprised the team. Were those not table stakes? But remember the question framing: "unmet needs that would make the biggest difference if solved." Low scores here likely meant these needs were already adequately served, not that they were unimportant. This is the trade-off of combined framing. You cannot distinguish "unimportant" from "already satisfied" without additional data.

A Common Pitfall: Stopping at the MaxDiff Results

At this point, it would be tempting to simply take the top five needs and start writing user stories. The ranking seems clear. The data looks definitive. Why not execute?

This is the most common mistake teams make with quantitative needs research. MaxDiff tells you what customers chose when forced to prioritize. It does not tell you why they made those choices, whether their stated priorities match their actual behavior, or how these needs manifest differently across customer segments.

If Clarify had stopped here, they would have missed critical context that shaped how they ultimately addressed these needs. They needed to triangulate.

Triangulating with Behavioral Data

The team pulled product analytics to see whether the MaxDiff rankings aligned with actual user behavior.

The top-ranked need was "identify at-risk engagements before they become emergencies." Product logs showed that users who had access to Clarify's basic health scoring feature checked it an average of 4.2 times per week, more than any other dashboard view. But the same logs showed that 67% of users who checked the health score then navigated to three or more other screens, suggesting the score alone was not giving them what they needed. They were hunting for more information.

Support tickets corroborated this. The team tagged and reviewed six months of tickets and found that "engagement health" or "project status" appeared in 23% of all support conversations, the highest concentration for any topic.

The third-ranked need was "reduce time spent chasing team members for missing time entries." The team surveyed a subset of customers about their workflows and found that engagement managers spent an average of 3.2 hours per week on time entry follow-up. This was not just an annoyance. It was a measurable productivity drain.

Product logs showed that the "missing time entries" report was the second most frequently accessed report in the entire system. Users were already trying to solve this problem with existing tools. The tools were not solving it well enough.
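For readers who want to see the shape of this kind of behavioral pull, here is a minimal sketch assuming a hypothetical event-log export with one row per screen view (columns user_id, session_id, timestamp, screen); the file name and screen label are illustrative.

```python
import pandas as pd

# Hypothetical event log: one row per screen view.
# columns: user_id, session_id, timestamp, screen
events = pd.read_csv("screen_views.csv", parse_dates=["timestamp"])

# How often do users check the engagement health score each week?
health_views = events[events["screen"] == "health_score"]
weeks_observed = (events["timestamp"].max() - events["timestamp"].min()).days / 7
checks_per_week = health_views.groupby("user_id").size() / max(weeks_observed, 1)
print("average health-score checks per user per week:", round(checks_per_week.mean(), 1))

# After checking the score, how often do users keep hunting: three or more
# additional screen views in the same session?
session_sizes = events.groupby(["user_id", "session_id"]).size()
health_sessions = health_views.set_index(["user_id", "session_id"]).index.unique()
kept_hunting = (session_sizes.loc[health_sessions] >= 4).mean()  # score view + 3 more
print("share of health-score sessions with 3+ follow-on screens:", round(kept_hunting, 2))
```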

Security and remote access ranked last in the MaxDiff. But were they unimportant, or already solved? Customer health scores showed that users who had experienced a security incident, even a minor one like a password reset issue, were 3.4 times more likely to churn within six months. Security was not unimportant. It was table stakes. The low MaxDiff ranking reflected satisfaction with current performance, not indifference to the need.

The team made a note: do not deprioritize security maintenance because it did not rank as an "unmet need." The combined framing revealed opportunities, not the full picture of what to protect.
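The churn comparison behind the 3.4x figure is simple arithmetic once the cohorts are defined. A minimal sketch, assuming a hypothetical account-level export with an incident flag and a churn flag:

```python
import pandas as pd

# Hypothetical account-level export.
# columns: account_id, had_security_incident (bool), churned_within_6mo (bool)
accounts = pd.read_csv("account_health.csv")

# Churn rate within each cohort, then the ratio between them.
churn_by_cohort = accounts.groupby("had_security_incident")["churned_within_6mo"].mean()
relative_risk = churn_by_cohort[True] / churn_by_cohort[False]
print(churn_by_cohort)
print(f"relative churn risk after a security incident: {relative_risk:.1f}x")
```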

Triangulating with Qualitative Context

Numbers told part of the story. But to translate needs into solutions, the team needed to understand the emotional and contextual dimensions.

They returned to their qualitative interview transcripts and pulled quotes that illuminated the top-ranked needs.

On identifying at-risk engagements, one partner at a mid-sized accounting firm had said: "I lie awake at night wondering which engagements are about to blow up. By the time I find out there is a problem, it is already a crisis. The client is upset, the team is stressed, and I am doing damage control instead of prevention."

Another engagement manager described it differently: "I know something is wrong when I start getting more emails from the client. But by then, the relationship is already strained. I wish I could see the warning signs before the client feels them."

These quotes revealed that the need was not just functional. It was deeply emotional. Any solution would need to address both the visibility gap and the anxiety it created.

On chasing time entries, a senior consultant explained: "I hate being the bad guy. Every week I am sending nagging emails to my team about time entries. It makes me feel like a babysitter, and it creates tension. They are professionals. They should not need reminders. But if I do not chase them, we cannot bill accurately."

This quote reframed the need. It was not just about efficiency. It was about role identity and team dynamics. A solution that simply automated the nagging might not solve the underlying problem. It might shift who was doing the nagging.

On scope changes, a partner at a consulting firm said: "Scope creep is how we lose money. The client asks for one more thing and my team says yes because they want to be helpful. By the time I find out, we have already done the work. I cannot bill for it without looking like I am nickel-and-diming, but I cannot eat the cost either."

This revealed that the scope visibility problem was not just about tracking. It was about the moment of decision. The partner needed to know about scope changes before the team committed, not after the work was done.

Triangulating with Market Context

Finally, the team looked outside their own data to validate that these priorities reflected broader market trends.

They found that two competitors had recently launched "engagement health" features with prominent marketing campaigns. Industry analysts had published reports highlighting "proactive risk management" as a top trend in professional services technology. A major accounting industry conference had added a track on "client relationship early warning systems."

This convergent evidence suggested Clarify was not just seeing patterns in their own customers. They were identifying a market-wide shift in priorities.

Contextualization: Mapping to Existing Segments

Clarify's marketing team had three existing personas. "Growth-Mode Gina" represented partners at firms actively expanding, focused on winning new clients and scaling operations. "Efficiency-Focused Eduardo" represented engagement managers at established firms, focused on profitability and utilization. "Compliance-Conscious Carla" represented firms in regulated industries with heavy documentation and audit requirements.

Rather than replacing these personas, the team analyzed the MaxDiff data by segment to see how priorities differed.

The finding was that the top five needs were consistent across all three personas. Every segment prioritized early warning and visibility into problems. The difference was in the why and the consequences.

For Growth-Mode Gina, an at-risk engagement meant reputational damage that could hurt new business development. She worried about word-of-mouth in her market.

For Efficiency-Focused Eduardo, an at-risk engagement meant margin erosion and utilization problems. He worried about the financial impact and resource allocation.

For Compliance-Conscious Carla, an at-risk engagement meant potential regulatory exposure and documentation gaps. She worried about audit trails and liability.

This insight shaped how the team would build and message the solution. The core functionality could be shared. But the specific signals that indicated "at risk," the dashboards that displayed status, and the messaging that promoted the feature could all be tailored to each persona's concerns.
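Checking that consistency is a straightforward group-by if you export individual-level utilities with a persona tag. The column names and file below are hypothetical:

```python
import pandas as pd

# Hypothetical export: one row per respondent per need, with the
# respondent's individual-level utility and their mapped persona.
# columns: respondent_id, persona, need, utility
df = pd.read_csv("utilities_by_respondent.csv")

# Mean utility for each need within each persona, ranked within persona.
by_persona = (df.groupby(["persona", "need"])["utility"]
                .mean()
                .reset_index())
by_persona["rank"] = (by_persona.groupby("persona")["utility"]
                                .rank(ascending=False, method="min"))

# If the same five needs hold ranks 1-5 for every persona, the segments
# differ in context and consequences, not in what they prioritize.
print(by_persona.sort_values(["persona", "rank"]).groupby("persona").head(5))
```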

What They Decided Not to Build

Just as significant was what the research told them to deprioritize.

The sales team's push for deeper integrations was tabled. The MaxDiff data showed integration-adjacent needs ranking in the bottom quartile. The triangulation did not surface any behavioral or qualitative evidence that contradicted this. The team acknowledged that integrations might matter for new customer acquisition, but for existing customers trying to do their jobs, it was not a priority. They decided to revisit integrations after addressing the top-tier needs.

The CEO's AI initiative was reframed rather than abandoned. None of the top-ranked needs explicitly called for AI. But the team realized that AI could potentially serve several of the top needs, such as predicting at-risk engagements, detecting scope creep patterns, or automating time entry reminders intelligently. Rather than building "AI features" as a category, they would evaluate AI as a potential solution approach for the needs customers prioritized.

The notification system rebuild that engineering wanted was approved, but reframed. Engineering had pitched it as technical debt reduction. The research revealed it was also a customer need. Several of the top-ranked needs required better notification infrastructure to solve. The rebuild was approved not as a maintenance project but as a foundation for the highest-priority customer outcomes.

Lessons from This Example

The MaxDiff ranking is the starting point, not the answer. The ranking told Clarify where to focus. It did not tell them how to solve the problems or what solutions would work. Triangulation with behavioral data and qualitative context was essential for translating priorities into effective solutions.

Combined framing reveals opportunities but hides table stakes. Security ranked last in the MaxDiff, but churn data showed it was critical. The low ranking reflected satisfaction, not indifference. Teams using combined framing need supplementary data to identify what they must protect, not just what they should build.

Contextualization beats replacement. Rather than telling marketing their personas were wrong, the team showed how the research added resolution to existing frameworks. The personas remained useful for targeting. The needs data made them useful for product development.

User stories should describe outcomes, not features. Every story the team wrote focused on the progress customers wanted to make, not the specific solution. This opened up solution possibilities and kept teams focused on the job rather than their first implementation idea.

Even good research requires iteration. The time entry solution underperformed despite being based on solid research. The research correctly identified the need. The first solution did not address it effectively. This is not a failure of the methodology. It is a reminder that research reduces risk rather than eliminating it.

Chapter 11 Conclusion

We began this book with a core value proposition: to give you a practical playbook for JTBD research that acknowledges both the benefits and the limitations of the methodology. This final chapter has focused on the gap that determines whether research influences decisions or gets ignored.

The Outcome Driven Innovation framework, as originally conceived, offers a systematic approach to understanding customer needs. But as we explored in Chapters 7 and 8, that system has real problems. Survey fatigue reduces data quality. The opportunity algorithm double-weights importance in ways that may not reflect actual priorities. The quantification creates false precision that can mislead strategic decisions.

These critiques do not mean you should abandon the effort to understand and quantify customer needs. They mean you should pursue it more carefully. The MaxDiff approach addresses the methodological weaknesses while preserving the core insight: customers have jobs they are trying to accomplish, and your product either helps them make progress or it does not.

But even rigorous research fails without translation. The three principles in this chapter are how you bridge the gap between insight and impact.

Contextualization means meeting your organization where it is. You will rarely have the luxury of replacing existing frameworks entirely. The skill is in overlaying new insights onto existing mental models in ways that add resolution without creating resistance.

Triangulation means building cases that withstand scrutiny. A single data source, no matter how rigorous, rarely survives contact with organizational politics. When three different data sources point the same direction, stakeholders stop arguing methodology and start debating what to do about it.

Operationalization means injecting your insights into the artifacts that drive work. Slide decks do not ship products. User stories, design briefs, and backlog priorities do. If your research does not translate into those formats, it will not influence what gets built.

Throughout this book, I have tried to be honest about what JTBD and ODI can and cannot do. This methodology will not guarantee product success. It will not tell you how to design the solution, how to price it, or how to market it. It will not account for competitive responses, regulatory changes, or shifts in technology.

What it will do, when applied thoughtfully, is reduce the risk that you build something nobody wants. It will give you a language for discussing customer needs that goes beyond feature requests and demographic assumptions. It will provide a framework for prioritization that is grounded in evidence rather than opinion.

That is not everything. But for teams stuck between conflicting stakeholder demands, unclear priorities, and the pressure to ship features that may not matter, it is substantial.

The research is one input. The strategy requires synthesis across many inputs. The execution requires translating strategy into the daily work of building products. If this book has helped you navigate any part of that journey with more confidence and rigor, it has accomplished its purpose.

Now go build something that helps customers make progress on the jobs that matter to them.

Chapter 11 References

[44] Cohn, M. (2004). User Stories Applied: For Agile Software Development. Addison-Wesley Professional. Retrieved from https://www.amazon.com/User-Stories-Applied-Software-Development/dp/0321205685

[45] Patton, J. (2014). User Story Mapping: Discover the Whole Story, Build the Right Product. O'Reilly Media. Retrieved from https://www.amazon.com/User-Story-Mapping-Discover-Product/dp/1491904909

[46] Schwaber, K., & Sutherland, J. (2020). The Scrum Guide. Scrum.org. Retrieved from https://scrumguides.org/