Alacrita White Papers

When Probability is Fiction: Risk vs. Uncertainty in Biotech Due Diligence

Written by Anthony Walker, PhD, and Cort Hepler | Jan 9, 2026

Risk, Exposure and Uncertainty

The concept of risk is central to due diligence as the essence of the exercise is to identify, assess and attempt to mitigate risks in the project/company under evaluation. However, there is a widespread lack of understanding of what risk is, how it differs from uncertainty and why these two concepts are not simply two sides of the same coin. Economist Frank Knight drew the distinction in 1921: risk describes situations with measurable probabilities; uncertainty describes situations where such probabilities are fundamentally indeterminate.1

Risk vs. Uncertainty in Practice

An investment committee evaluates a phase 2b readout in moderate-to-severe ulcerative colitis. The target - IL-23 - is validated, with multiple approved competitors; the endpoint is standard (clinical remission at week 12); and there are a dozen comparable programs to benchmark against. The team debates the probability of success - somewhere between 25% and 40%, depending on assumptions about differentiation and trial design - but the debate is grounded in data. This is risk.

Six months later, the same committee considers a platform investment in an AI-driven protein design company. The technology is genuine, but will it produce a clinical candidate? In what timeframe? Against which targets? With what competitive dynamics? No dataset exists to answer these questions. Any probability assigned would be invented. This is uncertainty.

What is Risk?

Risk is a condition where future outcomes are not certain, but the set of possible outcomes is known and probabilities can be assigned, often from data or models. It is the basis of all insurance products; for example, it is possible to understand the probability that your car will be stolen, and the value of the loss thereby incurred (the "exposure"). An insurer multiplies the probability by the exposure and sets a premium accordingly.
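The insurer's calculation can be sketched in a few lines. All figures below are invented for illustration; real actuarial pricing involves far more detail.

```python
# Illustrative only: hypothetical figures for a car-theft policy.
theft_probability = 0.004      # assumed annual probability of theft
exposure = 25_000              # assumed value of the car (the loss if stolen)
loading = 1.3                  # assumed margin for costs and profit

expected_loss = theft_probability * exposure   # probability x exposure
premium = expected_loss * loading              # premium set above expected loss

print(f"Expected annual loss: ${expected_loss:.2f}")
print(f"Premium charged:      ${premium:.2f}")
```

The key point is that every input is measurable from data, which is precisely what makes the product insurable.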

Two scenarios can have the same probability but radically different exposures. If you bet on a coin toss, the probability is 50%. If the exposure - the wager - is a pint of beer, you intuitively feel that the risk is low. If it's $1 million, the probability is the same but you feel far more "at risk".

Scenario A: 50% probability, stake of a pint of beer - feels low risk.
Scenario B: 50% probability, stake of $1,000,000 - feels high risk.

At the risk of stating the obvious: in biopharma, an early-stage discovery program involves a low budget, so even though the chance of failure is very high, the risk is low. For a phase 3 trial, the chance of failure is statistically lower but the exposure is very high - hence there is much more risk involved.4

Early stage (discovery program): ~5-10% reach phase 1; low budget exposure - low risk.
Late stage (phase 3 trial): ~50-60% approval rate; high budget exposure - high risk.

What is Uncertainty?

Uncertainty is a condition where the set of possible outcomes, their likelihoods, or both are not known well enough to assign reliable probabilities. This is not simply "high risk" or "low confidence" - it is a fundamentally different epistemic state. For a cutting-edge technology, it is simply not possible to use historic precedents to forecast chances of success. A company deciding whether to invest in an entirely new technology platform faces unknown market structures, regulations, competitive dynamics, and technical trajectories. Probabilities for success or failure cannot be credibly specified because the relevant reference class does not exist.

Consider the difference: when Ortho Biotech developed muromonab-CD3 (OKT3) - the first approved monoclonal antibody therapeutic - in the mid-1980s, there was no historical success rate to reference. Would sustained CD3 engagement in humans produce durable immunosuppression, paradoxical activation, or catastrophic immune collapse? Would manufacturing scale? Would regulators approve a biologic derived from mouse hybridomas? These were not risks to be quantified - they were uncertainties to be resolved through experimentation. Only after decades of iteration, and after dozens of mAbs had reached market, did "antibody program probability of success" become a meaningful calculation.

The Uncertainty Spectrum in Drug Development

Quantifiable risk: phase 3 in a validated mechanism, known endpoint, historical comparators.
Partial uncertainty: novel target with an established modality, or validated target with a novel modality.
Genuine uncertainty: first-in-class mechanism, novel modality, no clinical precedent.

Many things in drug development are inherently uncertain, and these are generally uninsurable. In a due diligence context, there is often pressure to quantify the "risk" associated with uncertainties - investors want a number for their models, boards want expected values for their decisions. But assigning a precise probability to a genuinely unprecedented program is not analysis; it is fiction dressed as math. People are generally satisfied by such figures (perhaps only because they can later blame the analyst if it all goes wrong), but satisfaction is not the same as validity.

The critical question becomes: what do you do when stakeholders demand a number you cannot credibly provide?

Key Conceptual Differences

Dimension | Risk | Uncertainty
Measurability | Quantifiable; probabilities can be estimated and used in tools like expected value, decision trees, and Monte Carlo simulations. | Not susceptible to measurement because the required information is incomplete or incalculable.
Information | Assumes sufficient historical or model-based information to characterize likelihoods. | Arises when information is too sparse, ambiguous, or novel to assign probabilities.
Manageability | Can be managed via diversification, insurance, and hedging. | Cannot be insured against; calls for flexibility, robustness, and adaptive strategies.

The Fallacy of the Risk Matrix

The risk matrix is a commonly used tool that multiplies the probability of an event occurring by the magnitude of impact if it does, typically plotted on a 3×3 or larger grid. Despite its ubiquity, the approach suffers from fundamental problems that undermine its usefulness in serious due diligence.

The standard 3×3 grid multiplies probability and impact scores:

                 Impact
                 Low (1)   Medium (2)   High (3)
Probability
Low (1)             1          2            3
Medium (2)          2          4            6
High (3)            3          6            9

The methodology itself is flawed. Likelihood and impact ratings - terms like "rare," "likely," "minor," or "severe" - are interpreted differently by different assessors, meaning the same risk can receive markedly different scores depending on who completes the assessment. Most matrices compound this problem by assigning numbers to rank-ordered categories and then multiplying them, even though ordinal categories do not have true numeric distance. The arithmetic is mathematically dubious: the difference between "rare" and "unlikely" is not necessarily the same as between "unlikely" and "possible," yet the multiplication treats them as equivalent intervals.

The approach systematically misrepresents catastrophic, low-probability risks. Multiplication can make a very low-probability catastrophic event appear equivalent to a high-probability trivial loss - a 1×3 score equals a 3×1 score - masking the fundamentally different nature of those risks. This leads organizations to treat genuinely unacceptable tail risks as "medium" or "low" simply because the probability estimate is small, precisely the error that matters most in contexts where rare but severe outcomes define success or failure.

The Black Swan Problem: A rare but catastrophic event (probability 1, impact 3 = score 3) receives the same rating as a common but trivial event (probability 3, impact 1 = score 3). Yet these scenarios require fundamentally different responses. The matrix flattens this distinction.
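The flattening is easy to demonstrate. The function below reproduces the standard matrix arithmetic; the labels and scores are illustrative, but the collision is exactly the one described above.

```python
# Sketch of the flattening problem: ordinal ranks multiplied as if they
# were true quantities. Scores and labels are illustrative assumptions.
def matrix_score(probability: int, impact: int) -> int:
    """Classic risk-matrix arithmetic: multiply two ordinal ranks (1-3)."""
    return probability * impact

black_swan = matrix_score(probability=1, impact=3)  # rare, catastrophic
trivial = matrix_score(probability=3, impact=1)     # common, trivial

# Both score 3 - the matrix cannot tell these scenarios apart.
print(black_swan, trivial)
```

Any scoring scheme that reduces two ordinal axes to a single product will produce such collisions by construction.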

Risk matrices also oversimplify by reducing risk to just two dimensions. Critical factors such as risk appetite, cost and feasibility of mitigation, reversibility, detectability, and correlation between risks are ignored entirely. Interdependencies and cascading effects - one event triggering another - cannot be captured in a simple two-axis matrix, so system-level vulnerability can be seriously underestimated.

Perhaps most dangerously, a single numeric score or colored box creates an air of scientific precision that is not justified by the underlying uncertainty and judgment, encouraging overconfidence in the output. Empirical work across several domains shows that such tools can misprioritize risks and sometimes perform little better than chance in ranking which hazards matter most.2,3

Implications for Practice

For well-characterized, moderate risks - a phase 3 program in a validated mechanism with historical comparators - probability × exposure remains a reasonable heuristic for ranking and resource allocation. Tools like Monte Carlo simulation can stress-test assumptions and quantify the range of outcomes, provided inputs are carefully calibrated against relevant datasets.4
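A minimal Monte Carlo sketch of such a well-characterized case follows. Every input - the success rate, trial cost, and commercial-value distribution - is an invented assumption standing in for the calibrated estimates a real analysis would draw from historical comparators.4

```python
import random

# Illustrative Monte Carlo for a quantifiable risk: a phase 3 program with
# a success probability calibrated against comparators. Inputs are assumed.
random.seed(42)

P_SUCCESS = 0.55          # assumed phase 3 success rate
TRIAL_COST = 150.0        # assumed trial cost, $M
VALUE_MEAN = 800.0        # assumed mean value if approved, $M
VALUE_SD = 250.0          # assumed spread of commercial outcomes, $M

def simulate_once() -> float:
    """One draw: pay the trial cost; on success, draw a commercial value."""
    if random.random() < P_SUCCESS:
        return max(random.gauss(VALUE_MEAN, VALUE_SD), 0.0) - TRIAL_COST
    return -TRIAL_COST

outcomes = [simulate_once() for _ in range(100_000)]
mean_value = sum(outcomes) / len(outcomes)
downside = sum(1 for v in outcomes if v < 0) / len(outcomes)

print(f"Mean net value: ${mean_value:.0f}M")
print(f"Probability of a net loss: {downside:.1%}")
```

The output is a distribution, not a point estimate - which is the whole benefit: the committee sees the range of outcomes and the chance of loss, not just an expected value.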

For genuine uncertainties, quantification invites false precision. The critical discipline is recognizing which type of question you face, then applying the appropriate toolkit:

Type of Question | Approach | Key Question to Ask
Quantifiable risk | Probability × exposure models, Monte Carlo simulation, decision trees | "What is the expected value and range of outcomes?"
Genuine uncertainty | Scenario analysis, stress testing, pre-mortems | "What would need to be true for this to work, and how would we know?"
Uncertainty over time | Stage-gated frameworks with defined milestones | "What must we learn before committing further capital?"
Expert-dependent judgment | Structured elicitation, independent aggregation | "What evidence would change your view?"

When Stakeholders Demand a Number

The political reality of investment committees and board meetings is that stakeholders often demand quantification even when it cannot be credibly provided. "We can't model this" is rarely an acceptable answer. Several approaches can help navigate this tension:

Bound the uncertainty explicitly. Rather than providing a point estimate, present a range that reflects genuine uncertainty: "Under optimistic assumptions X, Y, and Z, success probability could be as high as 30%. Under pessimistic assumptions, it may be below 5%. We do not have sufficient data to narrow this range." This forces the decision-maker to confront the uncertainty rather than hiding it inside a false-precision number.

Shift from probability to milestones. Instead of "15% probability of success," frame the investment around what you will learn and when: "A $5M Series A buys us IND-enabling studies. If target engagement exceeds 70% in tox studies, we have a fundable Series B. If not, we stop." This reframes the decision from betting on an outcome to buying information.

Separate the question. "What is the probability this works?" often conflates multiple distinct uncertainties. Decompose it: "What is the probability the biology is valid? That we can drug the target? That the therapeutic window exists? That we can manufacture at scale?" Some of these may be quantifiable risks; others genuine uncertainties. Treating them separately produces clearer thinking even if it does not produce a single number.
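Decomposition can be made explicit with a conditional chain. The layers below mirror the questions in the paragraph above, but every probability is a labelled assumption - the value of the exercise is the structure and the conversation it forces, not the numbers.

```python
# Decomposing "what is the probability this works?" into conditional layers.
# All probabilities are illustrative assumptions, not measured quantities.
layers = {
    "biology is valid": 0.60,
    "target is druggable": 0.70,
    "therapeutic window exists": 0.50,
    "manufacturable at scale": 0.80,
}

overall = 1.0
for question, p in layers.items():
    overall *= p  # each layer is conditional on all prior layers holding
    print(f"{question}: {p:.0%} (cumulative: {overall:.1%})")
```

Note how quickly the cumulative figure collapses even with individually plausible layers - and how the exercise exposes which layers are quantifiable risks and which are guesses.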

Converting Uncertainty to Risk

The most sophisticated investment strategies in biotech are built around a simple insight: uncertainty is expensive to hold but cheap to buy. Early-stage investors acquire uncertainty at low valuations, then systematically convert it to quantifiable risk through staged experimentation. Each milestone - target validation, PK/PD confirmation, clinical proof-of-concept - removes a layer of uncertainty and unlocks a new tranche of capital at higher valuations.
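The economics of staged deployment can be sketched as follows. Stage names, costs, and pass rates are all invented for illustration; the point is that failures stop spending early, so the expected spend is far below the full commitment.

```python
# Sketch of stage-gated capital deployment: small tranches buy information
# that converts uncertainty into risk before larger capital is committed.
# Stage names, costs ($M), and pass rates are illustrative assumptions.
stages = [
    ("Target validation", 2.0, 0.50),
    ("IND-enabling studies", 5.0, 0.60),
    ("Phase 1 / PK-PD", 15.0, 0.65),
    ("Clinical proof-of-concept", 40.0, 0.45),
]

committed = 0.0       # total cost if every gate is passed
p_reached = 1.0       # probability the program is still alive at this gate
expected_spend = 0.0  # spend weighted by the chance of reaching each gate

for name, cost, pass_rate in stages:
    expected_spend += p_reached * cost  # you only pay if prior gates passed
    committed += cost
    p_reached *= pass_rate

print(f"Total capital if all gates pass: ${committed:.0f}M")
print(f"Expected spend (failures stop early): ${expected_spend:.1f}M")
print(f"Probability of reaching proof-of-concept data: {p_reached:.1%}")
```

The gap between committed and expected spend is what the stage-gate structure buys: each tranche is released only after the preceding milestone has removed a layer of uncertainty.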

This is the logic of stage-gated development. Rather than making a single large commitment based on unreliable probability estimates, structure investments around sequential learning: smaller initial commitments with defined milestones that reduce uncertainty before larger capital is deployed. The CAR-T field illustrates this pattern. In 2010, when the first patients received CD19-targeted CAR-T cells at Penn, critical questions were genuine uncertainties: would engineered T-cells persist? Would second-generation designs with co-stimulatory domains succeed where first-generation constructs had failed? Would manufacturing be feasible outside an academic center? No historical dataset could answer these questions. By 2017, when Novartis received approval for Kymriah, the field had accumulated enough clinical data that these became quantifiable risks - response rates by disease subtype, durability curves, manufacturing failure rates. Today, investors can model CAR-T program economics with reasonable confidence in a way that would have been fiction a decade earlier.

The practical implication: Uncertainty is often priced as if it were risk. When probability estimates are assigned to genuinely unprecedented programs, valuations can reflect confidence that doesn't exist. The discipline - for investors, operators, and partners alike - is recognizing which type of situation you're in before the number goes into the model.

References

1. Knight, F.H. (1921). Risk, Uncertainty, and Profit. Boston, MA: Hart, Schaffner & Marx; Houghton Mifflin Company.
2. Cox, L.A. (2008). What's Wrong with Risk Matrices? Risk Analysis, 28(2), 497-512.
3. Hubbard, D.W. (2009). The Failure of Risk Management: Why It's Broken and How to Fix It. Wiley.
4. Wong, C.H., Siah, K.W., & Lo, A.W. (2019). Estimation of clinical trial success rates and related parameters. Biostatistics, 20(2), 273-286.