Every AI reads the world. Abel computes it.

The world's first causal intelligence engine — trace real cause and effect across 200,000+ variables, not just correlation.

Start making decisions with
live causal intelligence.

Already built

The world's largest live causal graph
for financial markets.

200,000+ variables across 30 time steps. PCMCI at industrial scale. Structure refreshed daily. Running now.

200,000+ financial variables tracked
30 time steps per inference
Millions of causal edges discovered
Live: structure refreshed daily

Scale comparison

200x beyond published SOTA.

The only known system running true causal inference at 200K+ variables with daily structural refresh.

Abel (Now) · True Causal · 200K+ vars · Frequency: Daily · Method: True causal (PCMCI)
Bloomberg · Correlation · ~5,000 vars · Frequency: Daily · Method: Correlation only
Kensho (S&P Global) · Statistical · ~500 vars · Frequency: Event-driven · Method: Statistical association
Causality Link · NLP-extracted · ~2,000 vars · Frequency: Daily · Method: NLP-extracted "causal"
Two Sigma / Citadel · Granger · Unknown vars · Frequency: Unknown · Method: Likely Granger
Academic PCMCI · True Causal · ~500 vars · Frequency: One-shot · Method: True causal

Mission

Three gaps no existing
system closes.

Every major AI system today is built on text. The world does not run on text. It runs on numerical reality: prices moving, rates shifting, flows redirecting, structures forming and dissolving.

The Structure Gap · Decode Reality
The problem

Correlation-based systems see that X and Y move together. They cannot tell you whether X drives Y, Y drives X, or both are driven by Z.

How Abel solves it

Abel discovers directed causal structure from data using constraint-based and score-based algorithms — then encodes it as a live DAG with 200K+ variables. Every edge has a direction, a β coefficient, a time lag τ, and a p-value. Not correlation matrices. Directed graphs.
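Concretely, each edge can be pictured as a small typed record. The sketch below is purely illustrative (field names are invented, not Abel's internal schema); the numbers are taken from the DXY/BTC example elsewhere on this page.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CausalEdge:
    """One directed edge in a causal graph (illustrative field names)."""
    source: str       # cause variable
    target: str       # effect variable
    beta: float       # β coefficient: effect size per unit of source
    tau_hours: int    # τ: time lag from cause to effect, in hours
    p_value: float    # significance of the discovered edge

edge = CausalEdge(source="DXY", target="BTCUSD",
                  beta=-0.042, tau_hours=5, p_value=0.003)
print(f"{edge.source} → {edge.target} (β={edge.beta}, τ={edge.tau_hours}h)")
```

The point of the four fields is exactly the contrast drawn above: a correlation matrix stores one symmetric number per pair, while a directed edge carries direction, magnitude, lag, and significance.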

Principle: Structure, Not Surface

The mathematical proof

Pearl's Causal Hierarchy

LLMs are stuck at Layer 1: association. Abel operates at Layer 2 (intervention) and Layer 3 (counterfactual). Climbing the hierarchy from associational data alone is a mathematical impossibility, not an engineering gap.
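The gap between Layer 1 and Layer 2 fits in a few lines of numpy. In this toy structural causal model (all variables and coefficients invented for illustration), X and Y correlate strongly only because a confounder Z drives both; conditioning on X and intervening on X give very different answers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy SCM: a confounder Z drives both X and Y; X has NO effect on Y.
Z = rng.normal(size=n)
X = 2.0 * Z + rng.normal(size=n)
Y = 3.0 * Z + rng.normal(size=n)

# Layer 1, association: P(Y | X ≈ 5). Observing a large X suggests a
# large Z, so Y looks large too.
obs_mean = Y[np.abs(X - 5.0) < 0.25].mean()   # ≈ 6

# Layer 2, intervention: P(Y | do(X = 5)). Setting X by fiat cuts the
# Z -> X arrow; Y keeps its original distribution, mean ≈ 0.
Y_do = 3.0 * Z + rng.normal(size=n)           # X no longer appears
do_mean = Y_do.mean()                          # ≈ 0

print(f"E[Y | X≈5]     = {obs_mean:.2f}")
print(f"E[Y | do(X=5)] = {do_mean:.2f}")
```

No amount of observational data about (X, Y) pairs distinguishes the two answers; that requires the graph, which is the "mathematical impossibility" above.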

Layer 1·Association·Seeing

What is the probability of Y given that we observe X?

Google
Retrieval

WHAT happened?

"BTC dropped 5% today"

Retrieves facts from indexed pages. No mechanism, no directionality. Pure observation.

ChatGPT
Pattern Matching

HOW does it work?

"BTC often drops when Fed raises rates due to historical patterns"

Synthesizes patterns from training data. Sounds causal — is not. Cannot distinguish correlation from causation.

Abel
Causal Inference

WHAT is connected?

"DXY → BTC: β=−0.042, τ=5h, p<0.003, a directed edge in the live causal graph"

Discovers directed associations with edge weights, time lags, and statistical significance.

One engine, two surfaces

Same question. Same engine.
Two interfaces.

200,000+ variables. 6M causal spatiotemporal nodes. Structure refreshed daily, predictions updated hourly. Whether you're a human or an AI agent — same engine.

Abel App · For humans

Ask any decision question

“If the Fed raises rates 50bp, should I hold my crypto?”

HOLD
p = 0.003

Causal chain

Fed_Rate →(τ=5h)→ DXY →(τ=2h)→ BTCUSD

Effect

−4.2%

95% CI

[−6.8, −2.1]%

β coeff

−0.042

Natural language in, structured causal analysis out. No code required.

Try Abel App
Platform · For agents
agent.py (python)
import abel

client = abel.Client(api_key="sk-your-key")

# Predict: 48-hour forecast for BTC close
prediction = client.predict("BTCUSD_close", horizon=48)
# Explain: causal drivers up to two hops out, across domains
drivers = client.explain("BTCUSD_close", depth=2,
  cross_domain=True)
# Intervene: effect of a 50bp Fed move on BTC
effect = client.intervene("Fed_Funds_Rate", "BTCUSD_close",
  treatment_value=0.5)

print(prediction)
print(drivers)
print(effect)

`pip install abel-cap` — three lines to your first causal query. Typed responses, async support, and built-in caching for high-throughput agent pipelines.

MCP gives agents tools. CAP gives agents causal reasoning.

View API Docs

Use Cases

Every causal question
disguised as a casual one.

People don't ask “run a causal intervention.” They ask “should I invest?” “will this skill survive?” “is an MBA worth it?” — Abel turns each one into a computable causal query.

Social Physical Engine

Abel Response

Signal: Structural Shift (87)
Causal Chain

AI_Adoption_Rate →[β=0.67, τ=90h, p<0.001]→ Design_Tool_Automation →[β=0.41, τ=60h, p<0.004]→ Junior_Designer_Demand

Decision

Core visual design tasks automate 40–60% within 3 years. But causal-strategic roles such as user research, systems thinking, and brand architecture still show no automation signal. The graph points toward upward specialization, not sideways drift.

Signal (30d): −63.3% · Hi: 60 · Lo: 22 · Chg: −63.3%

An LLM gives you a thoughtful opinion. Abel gives you the causal graph behind the answer — with numbers, lags, and confidence you can check.

Try Abel

Evidence

Real structural findings from
Abel's live graph.

These are real, statistically significant findings — directed causal edges with p-values, β coefficients, and time lags. Not hypotheses. Not AI guesses.

Reflexive Loop

Preferred Share / mREIT Reflexive Cluster

A tightly coupled feedback loop between preferred shares and mortgage REITs, invisible to correlation analysis.

Preferred Shares → mREIT Prices: β=+0.38, τ=72h, p<0.001
mREIT Prices → Preferred Shares: β=+0.29, τ=120h, p<0.003


Each directed edge is discovered from data using PCMCI — not assumed or manually specified. All edges carry statistical significance tests.
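PCMCI's building block is a lagged conditional-independence test. The sketch below is a toy, single-edge version of that idea (numpy only, synthetic data, invented names): partial correlation with a Fisher-z p-value, applied to a series where x truly drives y at lag 2. The real algorithm iterates such tests over many candidate parents and lags across the whole graph.

```python
import math
import numpy as np

def parcorr_test(x, y, controls):
    """Partial correlation of x and y given controls, with a two-sided
    Fisher-z p-value (the ParCorr-style test used inside PCMCI)."""
    Z = np.column_stack([np.ones(len(x)), *controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r = np.corrcoef(rx, ry)[0, 1]
    z = math.atanh(r) * math.sqrt(len(x) - Z.shape[1] - 3)
    return r, math.erfc(abs(z) / math.sqrt(2))

rng = np.random.default_rng(1)
T = 5_000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):
    # Ground truth: x --(τ=2, β=0.4)--> y, plus y's own AR(1) memory.
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 2] + rng.normal()

y_t, y_lag1, x_lag2, x_lag1 = y[2:], y[1:-1], x[:-2], x[1:-1]

# Real edge x --(τ=2)--> y, conditioning away y's autocorrelation:
r_true, p_true = parcorr_test(x_lag2, y_t, [y_lag1])
# Spurious candidate x --(τ=1)--> y, conditioning on the real parents:
r_false, p_false = parcorr_test(x_lag1, y_t, [y_lag1, x_lag2])

print(f"τ=2 edge: r={r_true:+.3f}, p={p_true:.1e}")  # strong, significant
print(f"τ=1 edge: r={r_false:+.3f}")                 # r ≈ 0
```

Conditioning on y's own past is what separates this from a plain lagged correlation, which would also flag the spurious lag-1 candidate whenever x is autocorrelated.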

Same question. Different universe.

What it looks like when
answers are computable.

The difference isn't better language. It's a different kind of answer — every number traceable to a graph edge, every claim falsifiable with a timestamp.

ETH is crashing. Should I cut losses?

Any LLM

"It depends on your risk tolerance. ETH has strong fundamentals..."

No causal trace · No magnitude · Not falsifiable
Abel
89%

Causal parent, a creative-industry stock (τ=3h), crashed −5σ. P(ETH↓) = 89.4% in 48h. Range: $2,100–2,250 (95% CI). 4 causal paths traced.

Probability
89.4%
ETH down in 48h
Range
$2,100–2,250
95% CI
Paths
4
causal traces
3 causal edges · p-values attached · Falsifiable

For developers

Use the platform to give any LLM
a causal cortex.

Docs are for implementation. The platform is for understanding the model, integrations, and deployment path before you ship.

MCP gives agents tools. CAP gives agents causal reasoning. Orthogonal by design — Schema-as-API provides deterministic, zero-LLM-cost routing into Abel's 200K+ variable graph.

agent.py (python)
import abel

client = abel.Client(api_key="sk-...")

# Predict using causal Markov blanket — not correlation
prediction = client.predict("BTCUSD_close", horizon=48)
print(prediction.mean, prediction.ci_95)

# Explain: what drives BTC this week?
drivers = client.explain("BTCUSD_close", depth=2, cross_domain=True)
for d in drivers:
    print(f"{d.variable} → weight: {d.edge_weight}")

# Intervene: what if oil hits $120?
effect = client.intervene("WTI_Crude", "CPI", treatment_value=120)

Fully typed SDK with async support, response caching, and streaming for large graph queries. Designed for agent pipelines that need high-throughput causal inference.
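The response-caching pattern described here is easy to picture with a generic memoization sketch. `ttl_cached` below is illustrative only, not the SDK's actual implementation, and a stub function stands in for the network call.

```python
import time
from functools import wraps

def ttl_cached(ttl_seconds=3600.0):
    """Memoize a function's results for ttl_seconds (illustrative sketch;
    the SDK's built-in caching may differ)."""
    def decorator(fn):
        cache = {}
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]          # fresh cache entry: skip the call
            value = fn(*args)
            cache[args] = (now, value)
            return value
        return wrapper
    return decorator

calls = []

@ttl_cached(ttl_seconds=3600)
def predict(variable, horizon):
    calls.append((variable, horizon))  # stand-in for a network round-trip
    return {"variable": variable, "horizon": horizon}

predict("BTCUSD_close", 48)
predict("BTCUSD_close", 48)            # served from cache: no second call
print(len(calls))                      # 1
```

For agent pipelines that re-ask the same causal query many times per hour against a graph whose structure only refreshes daily, this kind of time-bounded cache is what keeps throughput high without serving stale structure.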

Start making decisions with
live causal intelligence.

Interested in shaping the future of
causal intelligence? We're hiring.

Open Roles

Join Abelian Groups to stay on top of new
releases, features, and updates.