Precision medicine struggles not from lack of data, but from lack of causal understanding. Without modeling how interventions change disease trajectories, predictive AI fails at the bedside. A causal operating system enables mechanism-driven decisions, adaptive treatment, and safer, evidence-grounded care.
Start Simulating with Causal AI
Feb 3, 2025
Abel
7 min read
Precision medicine is one of those ideas that almost nobody disagrees with—and that should already make us suspicious.
Between 2005 and 2024, the healthcare industry poured more than $500B into genomics, precision diagnostics, AI platforms, and data-driven drug discovery. Over the same period, the cost of sequencing a human genome collapsed from roughly $100M to under $1K. National biobanks scaled from pilot projects to population infrastructure: 500K+ participants in the UK Biobank, 1M+ targeted by the U.S. All of Us program, with comparable efforts across Europe and Asia.
If data were the bottleneck, we should have solved this by now.
Hospitals now produce 1–5 petabytes of clinical data per year. Imaging archives run into the millions of studies. Continuous monitors sample patient physiology hundreds of times per second. AI models have ballooned from millions of parameters to 10^11–10^12, able to digest text, images, time series, and biology in a single system.
And yet, the most important question in medicine remains stubbornly unanswered:
Which action will actually change the outcome—and for whom?
That is not a data problem. It is a causality problem.
Medicine Is Not a Prediction Problem
Most medical AI systems today are very good at one thing: prediction.
They estimate the probability of readmission, deterioration, adverse events, or treatment response. Retrospective benchmarks regularly report 0.80–0.95 AUC, numbers that look convincing in slides and papers.
Then these systems hit the real world.
Prospective deployments typically show outcome improvements of 0–5%, and often no statistically significant effect at all. Clinicians ignore alerts 50–90% of the time, especially when systems generate more than 5–10 alerts per shift. The model may be “right,” but it doesn’t change what happens.
This isn’t a UX problem. It’s a category error.
Medicine is not an observational science. It is an interventional one. Every decision—starting a drug, adjusting a dose by 10–20%, delaying treatment by 24–72 hours, stopping therapy after 2 cycles—changes the system itself. Once you intervene, the past stops being a reliable guide.
Prediction assumes the world stays put. Medicine never does.
That’s why so many models look brilliant retrospectively and irrelevant prospectively. They describe yesterday’s patients, not tomorrow’s decisions.
The Correlation Trap
Precision medicine has become extraordinarily good at finding associations.
Genome-wide studies report hundreds to thousands of variants linked to disease risk. Oncology panels track 50–500 molecular features per tumor. Multi-omics datasets correlate signals across 10,000–100,000 patients.
This is real progress. It is also where many programs quietly go wrong.
A biomarker can explain 20–40% of outcome variance and still be a terrible intervention target. You can move it dramatically and change nothing—or make things worse.
The HDL cholesterol story is a reminder that still stings. Observational data suggested that each 1 mg/dL increase in HDL corresponded to a 2–3% reduction in cardiovascular risk. CETP inhibitors raised HDL by 50–100%. Clinical outcomes did not improve. In one major program, mortality increased by more than 50%.
The signal was strong. The lever wasn’t there.
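The HDL pattern can be reproduced in a toy simulation (every number and variable here is invented for illustration, not clinical data): a latent factor drives both the biomarker and the outcome, so the observational correlation is strong while the interventional effect is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent metabolic health drives BOTH the biomarker and the outcome.
health = rng.normal(0, 1, n)
hdl = 50 + 10 * health + rng.normal(0, 5, n)          # biomarker tracks health
risk = 0.20 - 0.05 * health + rng.normal(0, 0.02, n)  # outcome depends on health, not HDL

# Observational view: HDL and risk are strongly (negatively) correlated.
print(np.corrcoef(hdl, risk)[0, 1])                   # strongly negative

# Interventional view: do(hdl += 30) leaves risk untouched,
# because HDL has no causal arrow into risk in this model.
hdl_treated = hdl + 30
risk_after = 0.20 - 0.05 * health + rng.normal(0, 0.02, n)
print(risk_after.mean() - risk.mean())                # ~0: the lever isn't there
```

Raising the biomarker by 60% changes the outcome by nothing, exactly as the CETP trials found: the arrow ran from a hidden cause into both variables, never from one to the other.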
Oncology repeats this lesson at scale. Fewer than 30% of Phase II programs advance to Phase III. Fewer than 10% of oncology assets entering trials ever reach approval. Biomarkers predict response; interventions fail to control disease over timelines measured in months, not years.
Correlation tells you what moves together. It does not tell you what you can safely push.
Why RCTs Exist—and Why They Can’t Do Everything
Regulators have always understood this, even when the industry pretends otherwise.
Randomized controlled trials exist to answer a very specific question: did this intervention actually cause a change? Not whether it correlated with one—whether it caused it.
The problem isn’t that RCTs are wrong. It’s that they’re slow, expensive, and blunt. A typical Phase III trial enrolls 500–3,000 patients, runs for 3–7 years, and costs $50M–$300M. Add combination therapies or adaptive dosing, and complexity jumps by 2–4× almost immediately.
Precision medicine is trying to personalize decisions across millions of patients and thousands of decision points. RCTs validate averages. Clinicians don’t treat averages—they manage trajectories.
That gap is widening, not shrinking.
The Price of Getting Causality Wrong
The economics make the problem impossible to ignore.
Pharmaceutical R&D productivity has quietly collapsed. On an inflation-adjusted basis, output has fallen by roughly 90% since the mid-20th century. FDA approvals now drift between 30 and 60 a year, while global R&D spending has climbed past $200B annually.
The odds inside the pipeline are even harsher. Overall clinical success rates hover around 10%. Phase II is where most programs die, with success rates below 30%. In oncology, a molecule entering Phase I has just a 5–8% chance of ever reaching approval.
Every failure represents years of work and hundreds of millions of dollars spent learning something that, in hindsight, could have been ruled out far earlier.
Precision medicine was supposed to change that math. Too often, it just made the correlations look better.
Why Predictive AI Keeps Disappointing Clinicians
Here is the uncomfortable but consistent logic.
Healthcare data reflects past decisions. Sicker patients get treated more aggressively. Treatment intensity correlates with severity, access, reimbursement, and local norms. Predictive models learn these patterns.
Deploy the model, and behavior changes. Patients are treated earlier. Monitoring increases by 2–5×. Thresholds shift. Feedback loops appear within weeks or months.
Standard metrics—accuracy, calibration, AUC—don’t measure whether those new decisions help. They measure fit in a world that no longer exists.
So you end up with systems that are statistically impressive and clinically hollow.
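The feedback loop is easy to make concrete. In the toy simulation below (policies and effect sizes are invented), a model is perfectly calibrated on retrospective data; then deployment widens who gets treated, and the same predictions become systematically biased because the world they described no longer exists.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Retrospective world: clinicians treat the sickest patients.
severity = rng.normal(0, 1, n)
treated_old = severity > 1.0                       # old policy: treat top ~16%
p_event = 1 / (1 + np.exp(-(severity - 1.5 * treated_old)))
event = rng.random(n) < p_event

# Stand-in for a fitted risk model: the true retrospective probability,
# i.e. the best any purely predictive model could ever do.
pred = p_event

# Deploy: alerts push treatment earlier and wider (new policy).
treated_new = severity > 0.0                       # treat top ~50%
p_event_new = 1 / (1 + np.exp(-(severity - 1.5 * treated_new)))

# Perfectly calibrated against yesterday's outcomes...
print(abs(pred.mean() - event.mean()))             # ~0

# ...but biased upward once its own alerts change treatment:
print(pred.mean() - p_event_new.mean())            # > 0: the world moved
```

Note that the "model" here is the ideal predictor, not a weak one. The bias comes entirely from the policy shift its own deployment caused, which is why better AUC cannot fix it.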
Why Causation Is No Longer Optional
This is not an academic shift. It’s an operational one.
Modern medicine is longitudinal. Chronic disease management spans 10–30 years. Oncology involves 3–6 lines of therapy. Autoimmune conditions require continuous adjustment. Outcomes depend on sequences of actions, not single predictions.
These systems are path-dependent. A delay of 24–48 hours can change a trajectory. A dose change of 10–20% can flip toxicity and efficacy. Early decisions constrain later options.
Other industries learned this lesson earlier. Finance moved from point forecasts to stress-testing across 10–100 scenarios. Aerospace simulates 1,000+ failure modes before flight.
Medicine is arriving at the same conclusion, whether it wants to or not.
What a Causal Operating System Actually Is
A causal operating system is not a magic model. It’s plumbing.
It makes assumptions explicit instead of burying them in weights. It treats interventions—start, stop, escalate, delay—as first-class objects. It answers counterfactuals that clinicians actually care about: now or 1 week later, 50 mg or 75 mg, sequence A→B or B→A.
It handles time and feedback natively. And it leaves a trail—what we assumed, what we tested, what changed—because accountability matters when decisions affect millions of lives and billions of dollars.
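A minimal sketch of what "interventions as first-class objects" can mean in code (the model structure, variable names, and effect sizes are all invented assumptions, not Abel's implementation): a small structural causal model in which do() replaces a variable's mechanism outright, so competing policies can be compared head-on rather than read off observational data.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n, do_dose=None):
    """Tiny structural causal model. Passing do_dose overrides the
    dosing mechanism entirely -- that replacement, not conditioning,
    is what makes it an intervention."""
    severity = rng.normal(0, 1, n)
    if do_dose is None:
        dose = 50 + 10 * (severity > 0)            # observational policy
    else:
        dose = np.full(n, float(do_dose))          # do(dose = x)
    # Outcome: benefit rises with dose; toxicity rises sharply past 60 mg.
    benefit = 0.02 * dose - 0.5 * severity
    toxicity = 0.004 * np.maximum(dose - 60, 0) ** 2
    return (benefit - toxicity).mean()

baseline = simulate(200_000)                       # world as observed
dose_50 = simulate(200_000, do_dose=50)            # counterfactual policy A
dose_75 = simulate(200_000, do_dose=75)            # counterfactual policy B
print(dose_50 - baseline, dose_75 - baseline)
```

In this toy world the 50 mg policy beats the 75 mg policy once toxicity is priced in, even though higher doses correlate with sicker patients observationally. The point of the plumbing is that the query "50 mg or 75 mg" is asked of the mechanism, with the assumptions explicit and auditable.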
This isn’t philosophical rigor. It’s basic operational hygiene.
Where Abel Fits
Abel starts from a simple premise: decisions come first.
Instead of optimizing for prediction accuracy, it models how decisions propagate through biological and clinical systems. It allows interventions to be tested in simulation, before patients and balance sheets absorb the cost of being wrong.
The goal is not certainty. Medicine will never have that. The goal is to stop mistaking correlation for control at scale.
For executives and investors, the implication is straightforward. Precision medicine will not be unlocked by more data or slightly better models. It will be unlocked by infrastructure that turns data into decisions that hold up once the system reacts.
Conclusion
The first era of precision medicine was about measurement. By that standard, it succeeded.
The next era is about control.
Prediction will remain useful, but it is not enough. Medicine has always demanded causal justification for intervention. What has changed is that we can finally build systems that support that reasoning continuously.
Precision medicine needs a causal operating system because the future of healthcare will not be judged by how much we know—but by how reliably we act.
Causation isn’t a buzzword.
It’s how responsibility shows up in code.
© 2026 Abel Intelligence Inc. All rights reserved