Causal Research

The Causal Hierarchy Explained: A Practical Guide

A beginner-friendly explanation of Pearl's Causal Hierarchy — the three layers of reasoning, why they matter, and how Abel uses them to answer real-world decision questions.


Abel Research · 4 min read

Education · Causal Computation · Pearl's Hierarchy

If you've heard about causal computation and wondered what makes it different from regular AI, this guide is for you. We'll explain Pearl's Causal Hierarchy — the mathematical framework that defines what types of questions AI can and cannot answer — in practical terms.

Layer 1: Association — "What Do I See?"

The question: "When X happens, what usually happens with Y?"

The math: P(Y | X) — the probability of Y given that we observe X.

Everyday example: "People who buy diapers also buy beer" (a famous retail correlation). Every recommendation system, every search engine, and every LLM operates here.

LLMs are Layer 1 machines. When GPT says "Rate hikes usually lead to crypto drops," it's saying "In the text I was trained on, these events were discussed together." That's association — valid for pattern matching, but dangerous for decisions.
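Layer 1 quantities are easy to compute directly. Here's a minimal sketch (entirely synthetic baskets with made-up probabilities, built to mimic the diapers-and-beer correlation) that estimates P(beer | diapers) by simple conditional frequency, which is exactly the kind of quantity association-layer systems work with:

```python
import random

random.seed(0)

# Synthetic transaction log (illustration only, not real retail data):
# each basket records whether it contains diapers and whether it contains beer.
# The correlation is built in by construction.
baskets = []
for _ in range(10_000):
    diapers = random.random() < 0.3
    # Beer is made more likely in diaper baskets on purpose.
    beer = random.random() < (0.6 if diapers else 0.2)
    baskets.append((diapers, beer))

# Layer 1: P(beer | diapers) is a pure conditional frequency, no causal claim.
with_diapers = [beer for d, beer in baskets if d]
p_beer_given_diapers = sum(with_diapers) / len(with_diapers)

without_diapers = [beer for d, beer in baskets if not d]
p_beer_given_no_diapers = sum(without_diapers) / len(without_diapers)

print(round(p_beer_given_diapers, 2))     # close to the 0.6 we built in
print(round(p_beer_given_no_diapers, 2))  # close to the 0.2 we built in
```

Note that nothing in this computation says diapers *cause* beer purchases; the numbers would look identical if a hidden factor drove both.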

Layer 2: Intervention — "What If I Act?"

The question: "If I do X, what happens to Y?"

The math: P(Y | do(X)) — the probability of Y given that we intervene on X.

The critical difference: observing and doing are not the same. Observing that hospitals have high death rates doesn't mean hospitals cause death — sick people go to hospitals (confounding). If you intervened instead, randomly assigning people to hospitals, you'd see that hospitals reduce death rates.
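The hospital example is easy to simulate. The sketch below (toy numbers, standard-library Python only) generates the same world twice: once observationally, where sick people self-select into hospitals, and once with randomized assignment, mimicking do(hospital). The observational comparison makes hospitals look harmful; the interventional one reveals that they help:

```python
import random

random.seed(1)

def simulate(randomize: bool, n: int = 50_000):
    """Return (death rate in hospital group, death rate outside)."""
    deaths_h = n_h = deaths_nh = n_nh = 0
    for _ in range(n):
        sick = random.random() < 0.2              # the confounder
        if randomize:
            hospital = random.random() < 0.5      # do(hospital): coin flip
        else:
            hospital = random.random() < (0.9 if sick else 0.1)  # self-selection
        # In this toy world, hospitals genuinely help: they halve death risk.
        base_risk = 0.4 if sick else 0.01
        death = random.random() < (base_risk * 0.5 if hospital else base_risk)
        if hospital:
            n_h += 1
            deaths_h += death
        else:
            n_nh += 1
            deaths_nh += death
    return deaths_h / n_h, deaths_nh / n_nh

obs_h, obs_nh = simulate(randomize=False)  # observational: P(death | hospital)
exp_h, exp_nh = simulate(randomize=True)   # interventional: P(death | do(hospital))

print(obs_h > obs_nh)  # True: hospital group looks worse (confounded by sickness)
print(exp_h < exp_nh)  # True: hospitals actually reduce deaths
```

Same world, same mechanism, opposite conclusions, which is exactly why Layer 1 statistics cannot answer Layer 2 questions.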

This is where most real decisions live. "Should I take this job?" is not "What happened to people who took similar jobs?" (Layer 1). It's "If I take this job, what happens to my career?" (Layer 2). Your situation, your confounders, your causal graph.

do-calculus is the mathematical tool that converts Layer 2 questions into computable answers, given a causal graph. Abel uses PCMCI and 38 other algorithms to discover the graph, then applies do-calculus to answer your intervention question.
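When you can't randomize, do-calculus lets you compute P(Y | do(X)) from purely observational data, provided the graph tells you which variables to adjust for. Below is a minimal sketch of the simplest such rule, backdoor adjustment, on a toy three-variable graph Z → X, Z → Y, X → Y with invented parameters (this illustrates the math, not Abel's internals):

```python
import random

random.seed(2)

# Observational draws from the toy graph Z -> X, Z -> Y, X -> Y.
# The true causal effect of X on Y is +0.3 by construction.
data = []
for _ in range(100_000):
    z = random.random() < 0.3                      # confounder
    x = random.random() < (0.8 if z else 0.2)      # Z influences X
    y = random.random() < (0.1 + 0.3 * x + 0.4 * z)  # both influence Y
    data.append((z, x, y))

def p_y_do_x(x_val: bool) -> float:
    """Backdoor adjustment: P(y | do(x)) = sum_z P(y | x, z) * P(z)."""
    total = 0.0
    for z_val in (False, True):
        stratum = [r for r in data if r[0] == z_val and r[1] == x_val]
        p_y_given_xz = sum(r[2] for r in stratum) / len(stratum)
        p_z = sum(r[0] == z_val for r in data) / len(data)
        total += p_y_given_xz * p_z
    return total

# Average treatment effect recovered from observational data alone.
ate = p_y_do_x(True) - p_y_do_x(False)
print(round(ate, 2))  # close to the true effect of 0.3
```

The naive difference P(Y | X=1) − P(Y | X=0) would overstate the effect here, because Z inflates both X and Y; stratifying on Z and reweighting by P(z) removes that bias.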

Layer 3: Counterfactual — "What If Things Had Been Different?"

The question: "Given that X happened and Y resulted, what would Y have been if X had been different?"

The math: P(Y_{x'} | X = x, Y = y) — the probability that Y would have taken under the alternative value x', given the X and Y we actually observed.

Example: "I didn't invest in Bitcoin in 2020. If I had, how much would I have now?" This isn't a prediction — it's reasoning about an alternative timeline while anchored to what actually happened.

Counterfactual reasoning requires a full Structural Causal Model (SCM) — the most complete representation of cause and effect. It's also the hardest to compute, which is why Abel's counterfactual() primitive is still in beta.
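For readers who want to see the mechanics, counterfactuals in an SCM follow Pearl's three-step recipe: abduction (infer the latent background noise from what actually happened), action (set X to its counterfactual value), and prediction (re-run the mechanism with the noise held fixed). A minimal sketch with an invented linear mechanism Y := 2X + U:

```python
# Toy structural causal model: Y := 2*X + U, where U is an unobserved
# background factor. All numbers are illustrative, not from any real model.

def abduct(x_obs: float, y_obs: float) -> float:
    """Step 1 (abduction): recover the latent noise U consistent with
    what actually happened, by inverting the mechanism."""
    return y_obs - 2 * x_obs

def counterfactual_y(x_obs: float, y_obs: float, x_cf: float) -> float:
    """Steps 2 and 3 (action + prediction): replace X with the
    counterfactual value and re-run the mechanism with U fixed,
    so we stay in the same 'world' that produced the observation."""
    u = abduct(x_obs, y_obs)
    return 2 * x_cf + u

# "X was 1 and Y came out 5. What would Y have been had X been 4?"
print(counterfactual_y(1, 5, 4))  # -> 11  (U = 3, so Y = 2*4 + 3)
```

The hard part in practice is that real mechanisms and noise distributions must be learned rather than written down, which is what makes Layer 3 the most demanding level of the hierarchy.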

Why This Matters for Your Decisions

Every important life decision is a Layer 2 or Layer 3 question:

  • "Should I buy a house in Austin?" → Layer 2 (intervention)
  • "What would have happened if I'd chosen the startup over Google?" → Layer 3 (counterfactual)
  • "If AI replaces designers, should I pivot?" → Layer 2 (intervention)
  • "Is an MBA worth $200K?" → Layer 2 (intervention)

When you ask ChatGPT these questions, you get Layer 1 answers dressed up in causal language. The model pattern-matches against similar discussions in its training data. It might sound right, but the math is wrong — it's correlation, not causation.

When you ask Abel, you get actual causal computation: a graph was discovered from real-world data, do-calculus was applied, and the answer includes the causal chain, time lags, confidence intervals, and the specific conditions under which the answer holds.

The Social Physical Engine

Abel's causal graph isn't limited to finance. The Social Physical Engine models connections across human behavior (employment, migration, consumption), physical constraints (supply chains, resources, geography), and market signals (prices, volumes, indices).

Financial markets serve as a "signal layer" — they encode real-world consensus as price data. A career question like "Will AI replace designers?" can be answered by tracing causal chains through Adobe stock (creative tool demand), Fiverr/Upwork (freelance labor markets), and NVIDIA (AI capability growth).

These connections exist in the causal graph. Abel walks them for you.

Getting Started

The best way to experience the difference between Layer 1 and Layer 2/3 reasoning:

  1. Ask ChatGPT: "If the Fed raises rates 50bp, what happens to Bitcoin?"
  2. Ask Abel the same question.
  3. Compare: ChatGPT gives you a narrative. Abel gives you a number, a causal chain, a confidence interval, and a time lag.

That's the difference between pattern matching and causal computation.

Any dollar-value decision, just Abel it.