Foundations

Causal AI

A branch of AI focused on modeling cause-and-effect relationships rather than only statistical association.

Causal AI combines machine learning with formal causal models so systems can estimate the effects of actions, not just predict likely observations. Instead of stopping at correlation, it represents mechanisms, interventions, and feedback loops.

Foundations

Causal Inference

The process of estimating the effect of actions or treatments from data and assumptions.

Causal inference is the formal task of estimating causal effects from observational or experimental data. It asks questions like whether changing X would alter Y, and by how much.

Foundations

Causal Reasoning

Using a causal model to answer questions about effects, mechanisms, interventions, and counterfactuals.

Causal reasoning is the act of operating on a causal model to answer structured questions. It moves from representation to computation.

Foundations

Causal Graphs

Graphical representations of variables and the directed causal relationships among them.

A causal graph encodes variables as nodes and causal relationships as directed edges. It gives a compact picture of how change propagates through a system.
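
A minimal sketch of this idea, using an adjacency map and a reachability walk; the variable names (price, demand, revenue) are illustrative, not from any specific system:

```python
# A causal graph as an adjacency map: each node lists its direct effects.
GRAPH = {
    "price": ["demand"],
    "marketing": ["demand"],
    "demand": ["revenue"],
    "revenue": [],
}

def descendants(graph, node):
    """All nodes reachable along directed edges: where a change can propagate."""
    seen = set()
    stack = list(graph[node])
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph[n])
    return seen
```

Querying `descendants(GRAPH, "price")` returns `{"demand", "revenue"}`: the set of variables a price change can ultimately touch.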

Foundations

Confounders

Variables that influence both a candidate cause and an outcome, creating misleading associations if ignored.

A confounder is a variable that affects both the treatment and the outcome. If you do not account for it, the estimated effect of the treatment can be biased.
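
The bias is easy to demonstrate on a toy population where the treatment has no true effect, yet a naive comparison finds one because the confounder Z drives both treatment and outcome. Stratifying on Z recovers the true (zero) effect:

```python
# Toy population: treatment T has no true effect on outcome Y,
# but confounder Z drives both T and Y. Rows are (z, t, y) with y = 2*z.
data = [
    (1, 1, 2), (1, 1, 2), (1, 1, 2), (1, 0, 2),   # z=1: treatment common
    (0, 1, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0),   # z=0: treatment rare
]

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison ignores Z and finds a spurious "effect".
naive = (mean([y for z, t, y in data if t == 1])
         - mean([y for z, t, y in data if t == 0]))

# Adjusted comparison: difference within each Z stratum, averaged over P(Z).
def stratum_diff(z0):
    treated = [y for z, t, y in data if z == z0 and t == 1]
    control = [y for z, t, y in data if z == z0 and t == 0]
    return mean(treated) - mean(control)

p_z1 = mean([z for z, t, y in data])
adjusted = stratum_diff(1) * p_z1 + stratum_diff(0) * (1 - p_z1)
```

Here `naive` comes out to 1.0 while `adjusted` is 0.0, matching the true absence of effect.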

Foundations

Counterfactuals

Statements about what would have happened to the same case under a different action or condition.

A counterfactual asks what would have happened for this specific case if a different action had been taken. It is more specific than a population-level causal effect.
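
For a simple structural model this can be computed with the standard three-step recipe (abduction, action, prediction). A sketch with a one-equation linear model, Y = 2X + U:

```python
# Counterfactual for one specific case in the toy SCM  Y = 2*X + U.
def counterfactual_y(x_obs, y_obs, x_alt):
    u = y_obs - 2 * x_obs   # abduction: recover this case's noise U
    return 2 * x_alt + u    # action + prediction: new X, same U
```

Given an observed case with x=1 and y=5, the recovered noise is u=3, so under the alternative action x=2 the counterfactual outcome is 7. The same population-level model would not pin down this case-specific answer without abduction.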

Foundations

Structural Causal Model

A causal model that specifies each variable as a function of its direct causes and exogenous factors.

A structural causal model, or SCM, combines a causal graph with structural equations and exogenous noise terms. It formalizes how variables are generated and how interventions replace or alter those mechanisms.
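
A minimal SCM sketch, assuming Gaussian exogenous noise and one structural equation; an intervention do(X=x) replaces X's mechanism while leaving Y's equation and noise untouched:

```python
import random

# Sketch SCM: X is exogenous noise, Y = 3*X + U_Y.
# do(X=x) replaces X's generating mechanism with the constant x.
def sample(do_x=None, seed=0):
    rng = random.Random(seed)
    u_x, u_y = rng.gauss(0, 1), rng.gauss(0, 1)
    x = u_x if do_x is None else do_x   # intervention overrides the equation
    y = 3 * x + u_y                     # Y's mechanism is unchanged
    return {"x": x, "y": y}
```

Because the noise draw is shared across calls with the same seed, the effect of the intervention is visible directly: `sample(do_x=2)["y"] - sample(do_x=0)["y"]` equals exactly 3 * 2 = 6.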

Foundations

Do-Calculus

A set of rules for transforming intervention queries using a causal graph and conditional independence assumptions.

Do-calculus is Judea Pearl's algebra for reasoning about interventions. It provides rules for converting expressions with the do-operator into estimable quantities when the causal graph supports that transformation.
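
One such transformation is the backdoor adjustment, P(y | do(x)) = Σ_z P(y | x, z) P(z), valid when Z closes all backdoor paths. A numeric sketch with made-up probabilities, showing that the interventional quantity differs from the observational conditional:

```python
# Toy factorization: Z -> X, Z -> Y, X -> Y, with invented numbers.
P_Z = {1: 0.5, 0: 0.5}
P_X_given_Z = {(1, 1): 0.8, (0, 1): 0.2, (1, 0): 0.2, (0, 0): 0.8}  # key (x, z)
P_Y1_given_XZ = {(x, z): 0.2 + 0.3 * x + 0.4 * z for x in (0, 1) for z in (0, 1)}

joint = {(z, x, y): P_Z[z] * P_X_given_Z[(x, z)]
         * (P_Y1_given_XZ[(x, z)] if y == 1 else 1 - P_Y1_given_XZ[(x, z)])
         for z in (0, 1) for x in (0, 1) for y in (0, 1)}

def p_y1_do_x(x0):
    """Backdoor formula: sum_z P(y=1 | x0, z) P(z), computed from the joint."""
    total = 0.0
    for z0 in (0, 1):
        p_z = sum(v for (z, x, y), v in joint.items() if z == z0)
        p_xz = sum(v for (z, x, y), v in joint.items() if z == z0 and x == x0)
        p_yxz = sum(v for (z, x, y), v in joint.items()
                    if z == z0 and x == x0 and y == 1)
        total += (p_yxz / p_xz) * p_z
    return total

# Observational conditional, for contrast: P(y=1 | x=1).
p_y1_given_x1 = (sum(v for (z, x, y), v in joint.items() if x == 1 and y == 1)
                 / sum(v for (z, x, y), v in joint.items() if x == 1))
```

Here the backdoor-adjusted P(y=1 | do(x=1)) is 0.7, while the observational P(y=1 | x=1) is 0.82: the confounder Z inflates the naive conditional.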

Decisions

Decision Intelligence

The discipline of combining models, objectives, constraints, and uncertainty to support better actions.

Decision intelligence is the system layer that turns data, models, and business objectives into recommended actions. It is broader than analytics because it includes intervention logic, uncertainty, tradeoffs, and accountability.

Decisions

Decision-Making

The act of choosing an action under goals, constraints, and uncertainty.

Decision-making is the final commitment to one action or policy from a set of alternatives. In consequential systems, that choice should depend on more than a point prediction.

Decisions

What-If Analysis

Evaluating how outcomes might change under a specified hypothetical adjustment or intervention.

What-if analysis asks how a system would respond if one or more inputs changed. In causal systems, the useful version is intervention-aware: it recomputes downstream mechanisms under the change, rather than simply editing values in a spreadsheet.
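
A deterministic sketch of the pattern: downstream quantities are recomputed under an explicit override instead of being edited by hand. The model and names (price, ad_spend) are invented for illustration:

```python
# Toy forecast model: demand depends on price and ad spend; revenue follows.
def forecast(inputs):
    demand = 1000 - 20 * inputs["price"] + 5 * inputs["ad_spend"]
    revenue = demand * inputs["price"]
    return {"demand": demand, "revenue": revenue}

def what_if(inputs, **overrides):
    """Re-run the full model with the stated hypothetical changes applied."""
    return forecast({**inputs, **overrides})

base = {"price": 10, "ad_spend": 40}
```

With the baseline inputs, revenue is 10000; `what_if(base, price=12)` recomputes demand (960) and revenue (11520) consistently, rather than changing revenue in isolation.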

Decisions

Scenario Analysis

Comparing multiple coherent future states to understand how a decision performs across different conditions.

Scenario analysis evaluates actions across multiple future conditions rather than a single expected path. Each scenario bundles assumptions about regimes, external events, timing, and constraints.
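
A small sketch of the mechanics, with invented scenarios and a toy profit model; a robust choice can then be scored by, for example, its worst case across scenarios:

```python
# Each scenario bundles assumptions about demand and cost conditions.
scenarios = {
    "boom":     {"demand": 1200, "unit_cost": 5},
    "baseline": {"demand": 1000, "unit_cost": 6},
    "downturn": {"demand": 700,  "unit_cost": 7},
}

def profit(price, s):
    units = min(s["demand"], 1000)   # capacity constraint
    return units * (price - s["unit_cost"])

def worst_case(price):
    """Score an action by its outcome in the least favorable scenario."""
    return min(profit(price, s) for s in scenarios.values())
```

At a price of 10, profits are 5000 (boom), 4000 (baseline), and 2100 (downturn), so the worst-case score is 2100; comparing `worst_case` across candidate prices surfaces the decision's exposure to conditions outside the expected path.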

Decisions

Intervention Modeling

Representing actions explicitly so their downstream effects can be computed rather than guessed.

Intervention modeling is the practice of specifying an action, its scope, and its entry point into a system so downstream consequences can be evaluated. It turns decisions into formal objects instead of narrative prompts.
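
One way to make a decision a formal object is to represent the intervention as structured data naming its target and value, then let the simulator honor it. A sketch with illustrative field names and mechanisms:

```python
from dataclasses import dataclass

# An intervention as a formal object: what is set, and to what value.
@dataclass(frozen=True)
class Intervention:
    target: str
    value: float

def simulate(mechanisms, intervention=None):
    """Evaluate mechanisms in causal order; the intervention overrides one."""
    state = {}
    for name, fn in mechanisms:
        if intervention and intervention.target == name:
            state[name] = intervention.value
        else:
            state[name] = fn(state)
    return state

mechanisms = [
    ("price",   lambda s: 10.0),
    ("demand",  lambda s: 500 - 20 * s["price"]),
    ("revenue", lambda s: s["price"] * s["demand"]),
]
```

Without an intervention, the simulation yields revenue 3000.0; with `Intervention("price", 12.0)` the downstream consequences are recomputed (demand 260, revenue 3120.0), and the intervention object itself can be logged, compared, or replayed.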

Abel Platform

CAP

An Abel-native protocol for representing causal state, interventions, and outcomes as machine-operable objects.

Within Abel, CAP can be understood as the protocol layer that turns causal concepts into portable computational objects. Instead of passing around vague prompts or disconnected model outputs, the system exchanges structured state, interventions, paths, and decision artifacts.
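
To make "portable computational objects" concrete, here is a purely hypothetical sketch of such an artifact; the actual CAP wire format is not specified in this glossary, and every field name below is invented. The point is only that state, intervention, outcome, and assumptions travel together as one structured object:

```python
import json

# Hypothetical decision artifact (field names are illustrative, not CAP spec).
artifact = {
    "state": {"demand": 300, "price": 10.0},
    "intervention": {"target": "price", "value": 12.0},
    "predicted_outcome": {"demand": 260, "revenue": 3120.0},
    "assumptions": ["backdoor set {region}", "no spillover"],
}

# Because the object is structured data, it serializes losslessly:
wire = json.dumps(artifact, sort_keys=True)
decoded = json.loads(wire)
```

A downstream system receiving `decoded` can inspect the intervention and its stated assumptions programmatically, which is not possible with a free-text prompt.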

Abel Platform

Social Physical Engine

An execution model that combines social behavior, institutional rules, and physical constraints in one causal system.

A social physical engine is Abel's way of treating real-world systems as mixtures of material constraints and human or institutional behavior. Markets, logistics, healthcare, and agent networks all have both social and physical structure.

Abel Platform

Schema-as-API

A design pattern where the causal schema itself defines what can be queried, simulated, and acted on.

Schema-as-API treats the underlying causal schema as the contract for computation. The system does not bolt meaning onto endpoints afterward; the structure itself determines which interventions, paths, and decision objects exist.
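
A sketch of the pattern, with an invented schema format (this is not Abel's actual schema representation): the set of legal intervention queries is derived from the schema rather than declared separately:

```python
# Hypothetical schema: variables, their parents, and whether each
# is a legal intervention target. Format invented for illustration.
schema = {
    "price":   {"parents": [],                  "intervenable": True},
    "demand":  {"parents": ["price"],           "intervenable": False},
    "revenue": {"parents": ["price", "demand"], "intervenable": False},
}

def allowed_interventions(schema):
    return sorted(v for v, spec in schema.items() if spec["intervenable"])

def validate_query(schema, do_var, outcome):
    """A query exists only if the schema defines it; nothing is bolted on."""
    if do_var not in allowed_interventions(schema):
        raise ValueError(f"do({do_var}) is not defined by this schema")
    if outcome not in schema:
        raise ValueError(f"unknown outcome: {outcome}")
    return True
```

Here `do(price)` on `revenue` validates, while `do(demand)` is rejected: the schema, not an endpoint list, is the contract.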

Abel Platform

Two Surfaces

A separation between the surface that builds or curates the causal model and the surface that queries or executes decisions from it.

Two Surfaces describes a design split: one surface is for building, validating, and evolving the world model, while the other is for operating on that model in decision workflows. Separating the two reduces confusion between model governance and action execution.

Abel Platform

Decision Layer

The system layer that computes the consequences of action and turns them into accountable decision objects.

The Decision Layer is Abel's central product idea: a layer of AI that exists to compute action consequences, not just generate plausible text or scores. It connects world models, interventions, uncertainty, and execution-facing outputs.

Abel Platform

Computation-Gated Access

A pattern where access to outputs or actions depends on whether the system can ground them in valid causal computation.

Computation-gated access means the platform only exposes certain answers, tools, or actions when the underlying structure supports a real computation. If the schema is missing, the assumptions are weak, or the intervention is out of scope, the system should narrow what it offers rather than project unearned confidence.
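
A sketch of the gating pattern itself (the function and field names are illustrative, not a platform API): the gate checks that the query is grounded before any answer is produced, and declines with a reason otherwise:

```python
# Gate: answer only when both the intervention target and the outcome
# are grounded in the schema; otherwise decline instead of guessing.
def gated_answer(query, schema_vars, compute):
    missing = [v for v in (query["do"], query["outcome"]) if v not in schema_vars]
    if missing:
        return {"status": "declined", "reason": f"not in schema: {missing}"}
    return {"status": "ok", "value": compute(query)}

schema_vars = {"price", "demand", "revenue"}

def toy_compute(query):
    return 3120.0   # stand-in for a real causal computation

ok = gated_answer({"do": "price", "outcome": "revenue"}, schema_vars, toy_compute)
no = gated_answer({"do": "weather", "outcome": "revenue"}, schema_vars, toy_compute)
```

The grounded query returns a value; the ungrounded one returns a narrow, explicit refusal, which the calling layer can surface instead of a fabricated answer.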

Abel Platform

Canary Edge

A monitored causal relationship used as an early warning signal for drift, breakage, or regime change in the model.

A canary edge is a designated relationship in the causal graph that is especially useful for early validation. If its strength, sign, or timing changes unexpectedly, that may indicate drift in the underlying system or in the model's fit.
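
A minimal sketch of such a monitor, assuming the edge's strength is summarized by a least-squares slope re-estimated each period and compared against an accepted baseline with a tolerance band (all thresholds illustrative):

```python
# Re-estimate a designated edge's strength and flag drift from baseline.
def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def canary_alert(xs, ys, baseline, tol=0.25):
    """True when the edge's re-estimated strength leaves the tolerance band."""
    return abs(slope(xs, ys) - baseline) > tol
```

If the edge's accepted baseline slope is 2.0, fresh data with slope 2.0 raises no alert, while data with slope 1.0 does; in practice the alert would trigger review of the model's fit or the underlying system.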