The Four Risks

Every business model transformation involves uncertainty. These four risk categories help you systematically identify what you don't know, design experiments to reduce uncertainty, and make evidence-based decisions about whether to proceed.

Desirability: Will customers want this?
Feasibility: Can we build and deliver this?
Viability: Will this make money?
Adaptability: Can this survive change?

Evidence Strength: From Opinion to Proof

Opinion (weakest): "I think customers want this"
Anecdote: "A few people told me..."
Survey: "50% said they would buy"
Commitment: "100 people signed up"
Revenue (strongest): "50 people paid money"

Risk One

Desirability Risk

"Will customers actually want this?"

Desirability risk addresses whether you're solving a real problem that customers care enough about to change their behavior. It's the most common killer of new business models—teams build something nobody wants. Testing desirability before building anything substantial protects against this waste.

Desirability isn't just "do they like the idea?" It's "will they switch from their current solution?" and "will they pay for it?" Many ideas sound appealing in conversation but fail when customers face actual decisions.

Canvas Blocks at Risk

Customer Segments, Value Proposition, Channels, Customer Relationships

Key Questions to Test

Do customers recognize they have the problem we're solving?
How are they currently solving this problem (or living with it)?
What would make them switch from their current solution?
Is this problem painful enough to pay for a solution?
Can we reach these customers through accessible channels?
Will they engage in the relationship model we're proposing?

Testing Methods: Weak to Strong Evidence

Customer Interviews

Deep conversations exploring the customer's world, problems, and current solutions. Focus on behavior, not hypotheticals.

Fast and cheap, but weak evidence.

Evidence type: Qualitative insight about problem existence and priority

Landing Page Test

Create a page describing the value proposition. Measure who clicks, who signs up for more info, who provides email addresses.

Quantifiable at moderate cost, but interest ≠ purchase.

Evidence type: Conversion rates showing interest level
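A landing page test boils down to funnel arithmetic. As a minimal sketch (Python; every count below is invented for illustration), here is how you would compute the step and overall conversion rates this method produces:

```python
# Hypothetical landing-page funnel; all counts are invented.
funnel = [
    ("unique visitors", 2_000),
    ("clicked call-to-action", 340),
    ("left an email address", 120),
]

total = funnel[0][1]
prev = total
for stage, count in funnel:
    # Step rate: conversion from the previous stage.
    # Overall rate: conversion from the top of the funnel.
    print(f"{stage:<24} {count:>5}  step {count / prev:6.1%}  overall {count / total:6.1%}")
    prev = count
```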

Pre-Order / Waitlist

Ask customers to commit before the product exists—deposit money, sign a letter of intent, or join a paid waitlist.

Strong evidence of real commitment, but slower.

Evidence type: Actual financial or contractual commitment

Hypothesis Template for Desirability

We believe that [customer segment] will [take action / adopt behavior] because [reason / motivation].

We will test this by [experiment] and measure [metric].

We are right if [threshold].

Example hypotheses:

"We believe that logistics managers at mid-size shippers will pay $500/month for real-time cargo risk alerts because they currently lose 2-3% of shipments to preventable damage. We'll test with 20 qualified prospects and measure letter of intent signatures. We're right if 5+ sign."

"We believe that construction companies will switch from tool ownership to fleet subscription because equipment downtime costs them $1,000+/day. We'll test with 10 pilot customers and measure renewal rate. We're right if 70%+ renew after trial."
Example: Slack

The Hypothesis: Teams want a better way to communicate than email—something faster, more organized, and searchable.

The Challenge: Slack was originally an internal tool built for a gaming company. Before pivoting the entire company to this new product, they needed evidence that other teams would want it too.

The Experiment: Slack recruited friendly companies to use the tool before public launch. They gave it to about 10 companies, watched how they used it, and gathered feedback obsessively. Then they expanded to 40 companies, and eventually to thousands on a waitlist.

The Evidence: Teams used Slack more than expected. Usage data showed people opening it 10+ times per day. Teams who tried it didn't want to stop. The preview companies became references and advocates. Strong evidence of desirability before major investment.

Evidence Gathered

Preview companies: 40+
Daily active use: 10+ opens/day
Waitlist at launch: 8,000 companies
Decision supported: Pivot entire company

Risk Two

Feasibility Risk

"Can we actually build and deliver this?"

Feasibility risk addresses whether you have (or can acquire) the capabilities to deliver the value proposition. It's about resources, activities, partnerships, and technology. Many business models fail not because customers don't want them, but because the team can't execute.

Feasibility isn't binary. The question isn't just "can we do it?" but "can we do it at the required quality, scale, and cost?" A solution that works in a lab or pilot may not work at commercial scale.

Canvas Blocks at Risk

Key Resources, Key Activities, Key Partners

Key Questions to Test

Do we have (or can we build/acquire) the required technology?
Do we have the talent and expertise needed?
Can we perform the required activities at scale?
Can we secure the necessary partnerships?
Do we have access to required intellectual property or data?
Can we meet regulatory and compliance requirements?

Testing Methods: Weak to Strong Evidence

Expert Assessment

Bring in technical experts or consultants to evaluate whether the proposed solution can be built with available technology and resources.

Fast and informed, but still opinion.

Evidence type: Expert judgment on technical possibility

Technical Prototype

Build a minimum version that proves the core technical challenge can be solved. Focus on the hardest part, not the complete system.

Real proof that surfaces unknowns, but costs time and money.

Evidence type: Working demonstration of technical capability

Pilot Operation

Run a small-scale version of the complete operation to test all activities and partnerships work together at limited scale.

Tests the full system with strong evidence, but most expensive.

Evidence type: Operational metrics from real delivery

Hypothesis Template for Feasibility

We believe that we can [perform activity / build capability] using [resources / technology / partners] at [quality / scale / cost level].

We will test this by [prototype / pilot / partnership negotiation].

We are right if [technical threshold / operational metric].

Example hypotheses:

"We believe we can achieve 99% accuracy in cargo damage prediction using IoT sensor data and machine learning. We'll test with 1,000 shipments over 3 months. We're right if prediction accuracy exceeds 95%."

"We believe we can secure partnerships with 3 major ports for sensor retrieval operations. We'll test by approaching 5 ports with pilot proposals. We're right if 3+ agree to pilot terms."
Example: Flexport

The Hypothesis: We can coordinate international freight forwarding through a software platform, managing carriers and customs without owning ships or warehouses.

The Challenge: Building a full freight management platform would require millions in technology investment and regulatory compliance. How could Flexport test if the operational model would work before building it?

The Experiment: Founder Ryan Petersen started by manually coordinating shipments using spreadsheets and phone calls. He acted as the software—tracking shipments, managing documentation, communicating with carriers. The process was labor-intensive but proved the concept.

The Evidence: Shippers paid for the service. Shipments arrived on time. The manual process revealed what the software needed to do. Only after proving the operations worked did Flexport build the technology platform that now automates these processes.

Evidence Gathered

Core assumption tested: Operations work
Infrastructure required: Spreadsheets + phone
Risk if wrong: Minimal investment
Decision supported: Build platform ($8B valuation)

Risk Three

Viability Risk

"Will this generate sustainable profit?"

Viability risk addresses whether the business model math works. Even if customers want it and you can build it, can you make money doing so? This involves revenue streams, cost structure, pricing, and unit economics.

Many businesses that customers love and that technically work still fail because they can't achieve profitable unit economics. The costs of acquiring customers, delivering value, and maintaining operations exceed what customers are willing to pay.

Canvas Blocks at Risk

Revenue Streams, Cost Structure

Key Questions to Test

What are customers actually willing to pay?
What does it cost to acquire a customer?
What does it cost to serve a customer over time?
What's the customer lifetime value vs. acquisition cost? (a worked sketch follows this list)
At what scale do unit economics become positive?
What margins are achievable at scale?
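The lifetime-value question flagged above reduces to a short calculation once you assume a lifetime model. A minimal sketch using a simple churn-based model; every input is an invented placeholder to swap for your own data:

```python
# All inputs are invented placeholders; substitute your own numbers.
price_per_month  = 2_000   # what one customer pays monthly
cost_to_serve    = 400     # monthly cost of serving that customer
monthly_churn    = 0.03    # share of customers lost each month
acquisition_cost = 6_000   # fully loaded cost to win one customer

margin_per_month = price_per_month - cost_to_serve
avg_lifetime_months = 1 / monthly_churn          # ~33 months at 3% churn
lifetime_value = margin_per_month * avg_lifetime_months

print(f"LTV = ${lifetime_value:,.0f}")
print(f"LTV:CAC = {lifetime_value / acquisition_cost:.1f}x")
print(f"CAC payback = {acquisition_cost / margin_per_month:.1f} months")
```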

Testing Methods: Weak to Strong Evidence

Willingness-to-Pay Research

Use techniques like Van Westendorp pricing or conjoint analysis to understand what customers would pay. Better than asking directly.

Informative and relatively cheap, but hypothetical.

Evidence type: Price sensitivity data
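Van Westendorp's method turns four survey questions into an acceptable price range, bounded by the points of marginal cheapness and marginal expensiveness. A minimal sketch of that calculation (Python with NumPy; the eight responses per question are invented):

```python
import numpy as np

# Invented answers from 8 respondents, in $/month, to the four questions:
too_cheap = np.array([ 5,  8, 10, 12, 15, 18, 20, 25])  # price where quality seems doubtful
bargain   = np.array([15, 20, 25, 25, 30, 35, 40, 45])  # price that feels like a bargain
expensive = np.array([15, 20, 25, 30, 35, 40, 45, 50])  # price that feels expensive
too_much  = np.array([30, 40, 45, 50, 55, 60, 70, 80])  # price too expensive to consider

prices = np.linspace(5, 80, 301)  # candidate price grid

# Cumulative share of respondents at each candidate price.
pct_too_cheap = (too_cheap[None, :] >= prices[:, None]).mean(axis=1)  # falls with price
pct_bargain   = (bargain[None, :]   >= prices[:, None]).mean(axis=1)  # falls with price
pct_expensive = (expensive[None, :] <= prices[:, None]).mean(axis=1)  # rises with price
pct_too_much  = (too_much[None, :]  <= prices[:, None]).mean(axis=1)  # rises with price

def crossing(curve_a, curve_b):
    """Price where two cumulative curves come closest (their intersection)."""
    return prices[np.argmin(np.abs(curve_a - curve_b))]

pmc = crossing(pct_too_cheap, pct_expensive)  # point of marginal cheapness
pme = crossing(pct_too_much, pct_bargain)     # point of marginal expensiveness
print(f"acceptable price range: ${pmc:.0f} to ${pme:.0f} per month")
```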

Price Testing

Show different prices to different customer groups and measure conversion rates. A/B test pricing on landing pages or in sales conversations.

Real behavior and quantifiable, but needs traffic.

Evidence type: Conversion rates at different price points
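Once both arms have run, a two-proportion z-test tells you whether the conversion gap between price points is more than noise. A minimal sketch using only the Python standard library (all counts invented):

```python
from math import erf, sqrt

# Invented A/B results: the same offer shown at two monthly prices.
a_shown, a_bought = 500, 60   # $1,500 arm
b_shown, b_bought = 500, 42   # $2,000 arm

p_a, p_b = a_bought / a_shown, b_bought / b_shown
pooled = (a_bought + b_bought) / (a_shown + b_shown)
se = sqrt(pooled * (1 - pooled) * (1 / a_shown + 1 / b_shown))
z = (p_a - p_b) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided

print(f"{p_a:.1%} vs {p_b:.1%} conversion, z = {z:.2f}, p = {p_value:.3f}")
# With these invented counts (p ≈ 0.06) the gap is suggestive but not
# conclusive, so you would keep the test running or enlarge the sample.
```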

Pilot Economics

Run a small-scale pilot and meticulously track all costs and revenues. Calculate actual unit economics from real operations.

Real numbers and strong evidence, but pilot ≠ scale.

Evidence type: Actual revenue and cost data
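Pilot economics is mostly disciplined bookkeeping: total up what the pilot earned and cost, then divide by customers. A minimal sketch (all ledger figures invented):

```python
# Invented pilot ledger totals; replace with your tracked actuals.
customers      = 12
revenue_total  = 72_000   # everything pilot customers paid
delivery_total = 21_600   # everything spent serving them
sales_total    = 30_000   # everything spent acquiring them

revenue_per_customer = revenue_total / customers    # $6,000
serve_per_customer   = delivery_total / customers   # $1,800
cac                  = sales_total / customers      # $2,500
gross_margin = (revenue_per_customer - serve_per_customer) / revenue_per_customer

print(f"gross margin {gross_margin:.0%}, CAC ${cac:,.0f} per customer")
```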

Hypothesis Template for Viability

We believe that customers will pay [price] for [value proposition] and that we can deliver it at a cost of [cost], resulting in [margin].

We will test this by [pricing experiment / pilot].

We are right if [economic threshold].

Example hypotheses:

"We believe logistics companies will pay $2,000/month for predictive risk intelligence. Our cost to serve is $400/month at 100 customers. We'll test with 3 price points ($1,500, $2,000, $2,500) across 50 prospects. We're right if we achieve 10%+ conversion at $2,000+."

"We believe we can achieve a customer acquisition cost under $500 through content marketing and referrals. We'll test with $10,000 marketing spend. We're right if we acquire 20+ customers, putting CAC at or under $500."
Example: Buffer

The Hypothesis: People will pay for a tool to schedule social media posts, and we can price it at a level that supports a sustainable business.

The Challenge: Founder Joel Gascoigne didn't know what price to charge or whether the math would work. Building first and figuring out pricing later is risky.

The Experiment: Before building the product, Joel created a landing page whose "Plans and Pricing" link led to three tiers: Free, $5/month, and $20/month. Visitors who clicked a paid tier were told the product wasn't ready yet and asked for their email.

The Evidence: People clicked through to pricing and then clicked on paid plans. This proved not just desirability ("I want this") but viability ("I'd pay $5-20/month for this"). Joel knew before writing code that the business model could work.

Evidence Gathered

Pricing validated: $5-20/month
Type of evidence: Intent to pay
Code written: Zero
Decision supported: Build at validated price

Risk Four

Adaptability Risk

"Can this survive and evolve as conditions change?"

Adaptability risk addresses whether the business model can survive changes in the external environment: competitor moves, technological disruption, regulatory shifts, economic cycles, and changing customer preferences. It's about resilience and future-proofing.

A business model that works today may be disrupted tomorrow. Testing adaptability means stress-testing the model against plausible futures and building in mechanisms for evolution. This is the least commonly tested risk—and often the one that kills successful businesses.

Canvas Blocks at Risk

Entire Canvas: external forces affect all blocks

Key Questions to Test

What would happen if a well-funded competitor copied this?
How would technology changes (AI, automation) affect this model?
What regulatory changes could threaten or enable this?
How does this model perform in economic downturns?
What if customer preferences shift significantly?
What early warning signals should we monitor?

Testing Methods: Weak to Strong Evidence

Scenario Planning

Develop 3-4 plausible future scenarios. Test how your business model performs in each. Identify vulnerabilities and adaptation strategies.

Surfaces risks and has strategic value, but speculative.

Evidence type: Identified vulnerabilities and contingencies

Competitive Simulation

Have a team role-play as a competitor trying to disrupt you. What would they do? How would you respond? Red team your own model.

Reveals blind spots and is actionable, but hypothetical.

Evidence type: Defensive strategies and moat analysis

Stress Testing

Model financial and operational performance under adverse conditions: 30% revenue drop, key partner loss, technology shift, regulatory change.

Quantifies risk and builds preparedness, but limited by model assumptions.

Evidence type: Break-even points and survival thresholds
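A financial stress test can start as a few lines of arithmetic: split costs into fixed and variable, then see where profit crosses zero. A minimal sketch (Python; the baseline revenue and cost split are invented, chosen to echo the subscription example below):

```python
# Invented baseline: $100k monthly revenue, fixed costs at 40% of it,
# variable costs at 35% of whatever revenue actually comes in.
baseline_revenue = 100_000
fixed_costs      = 0.40 * baseline_revenue   # do not scale with revenue
variable_rate    = 0.35                      # scales with revenue

def monthly_profit(revenue: float) -> float:
    return revenue - fixed_costs - variable_rate * revenue

for drop in (0.0, 0.10, 0.25, 0.30, 0.50):
    r = baseline_revenue * (1 - drop)
    print(f"revenue -{drop:4.0%}: profit ${monthly_profit(r):>8,.0f}")

# Break-even revenue is fixed / (1 - variable_rate); expressed as the
# maximum survivable drop from baseline:
max_drop = 1 - (fixed_costs / (1 - variable_rate)) / baseline_revenue
print(f"model stays profitable down to a {max_drop:.0%} revenue drop")
```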

Hypothesis Template for Adaptability

We believe that our business model can survive [specific threat or change] because [defensive mechanism or adaptation strategy].

We will test this by [scenario analysis / stress test / competitive simulation].

We are resilient if [survival threshold or adaptation capability].

Example hypotheses:

"We believe our subscription model can survive a 25% revenue drop because our fixed costs are only 40% of revenue. We'll stress-test our financial model. We're resilient if we remain cash-flow positive at -25% revenue."

"We believe our data moat protects against new entrants because they'd need 5+ years of shipment data to match our predictions. We'll test by analyzing minimum data requirements for 90% accuracy. We're protected if the threshold exceeds 3 years."
Example: Microsoft Azure

The Situation: By 2010, Microsoft dominated enterprise software with Windows and Office. But cloud computing threatened this model—Amazon Web Services was enabling competitors to build without Microsoft infrastructure. Google Apps challenged Office.

The Question: What happens if enterprises stop buying perpetual licenses and move to cloud-native solutions? Can Microsoft's business model survive?

The Stress Test: Microsoft modeled scenarios where cloud adoption accelerated. The analysis showed that clinging to licenses created existential risk. They would become irrelevant as competitors offered cloud-native alternatives.

The Adaptation: Under Satya Nadella, who became CEO in 2014, Microsoft pivoted to "mobile-first, cloud-first." Azure became the strategic priority. Office 365 replaced perpetual licenses. They cannibalized their own Windows revenue to build cloud dominance. By 2024, Azure revenue exceeded Windows revenue for the first time.

Risk Identification

Vulnerability identified: License dependency
Adaptation strategy: Cloud-first pivot
Investment required: Billions of dollars in Azure
Outcome: $3T market cap (2024)

Testing All Four Risks

The Testing Sequence

1. Desirability: Test first. If no one wants it, nothing else matters.
2. Feasibility: Test early. Identify technical blockers before investing heavily.
3. Viability: Test once you know people want it and you can build it.
4. Adaptability: Test continuously. The environment keeps changing.

Start Cheap, Get Expensive

Begin with the cheapest experiments that give useful evidence. Only invest in expensive pilots and prototypes after cheaper tests have reduced risk. Save your budget for validating what matters most.

Test the Riskiest First

Identify your riskiest assumptions—the ones that would kill the business if wrong—and test those first. Don't waste time optimizing things that don't matter if a fundamental assumption is untested.

Evidence Over Opinion

One customer paying money is worth more than 100 people saying "I'd buy that." Seek the strongest evidence you can get at each stage. Move up the evidence ladder as your investment increases.

Decide with Data

Define success criteria before running experiments. Know what evidence would make you proceed, pivot, or abandon. This prevents post-hoc rationalization and keeps decisions honest.