We optimized markets for speed, efficiency, and scale—and quietly lost control over causation. In a machine-run economy, actions propagate faster than oversight, and outcomes emerge without clear authorship. Optimization still works, but responsibility dissolves. The next economic infrastructure won’t be built on better prediction, but on causal control.
Jan 31, 2026
Abel
In the spring of 2010, the U.S. stock market lost nearly $1 trillion in value in a matter of minutes. The episode—later labeled the Flash Crash—ended almost as abruptly as it began. Prices rebounded. Liquidity returned. Official investigations followed, careful and restrained in tone. No single actor was blamed. No malicious intent was found. The system, we were told, had recovered.
Yet something subtler had broken. What failed that afternoon was not infrastructure, nor regulation, nor even market integrity in the narrow sense. What failed was the assumption that markets move at a pace compatible with human understanding. Algorithms responded to algorithms faster than explanation could catch up. Intervention arrived after causation had already run its course. The market stabilized—but trust did not.
Over the following decade, variations of this pattern quietly repeated themselves. Financial markets began repricing before regulatory oversight could meaningfully respond. Supply chains, optimized relentlessly for efficiency, proved brittle under modest shocks. Digital platforms reshaped public belief through recommendation systems that insisted they merely reflected user preference. Each episode came with a technical explanation. None fully addressed the deeper question: who—or what—is acting on whom?
Economic theory has long assumed an answer. From Adam Smith’s “invisible hand” through general equilibrium theory and modern principal–agent models, economics has treated humans as the sole legitimate decision-makers. Machines accelerate choice; they do not originate it. As long as automation replaced repetitive labor, this assumption held. The machine remained a tool; the human remained the agent.
That boundary is now eroding.
When autonomous systems interact continuously with other autonomous systems—learning, adapting, and responding in real time—the problem ceases to be one of labor substitution. It becomes a question of decision-making itself: how it emerges, how it propagates, and how responsibility is distributed when no single human occupies the center.
Jeremy Rifkin sensed this shift as early as 1995, when he argued in The End of Work that information technology would structurally weaken labor markets and reorganize economic growth. His forecast was often criticized for overstating unemployment. In retrospect, his more durable insight lay elsewhere: technology would not simply displace workers—it would alter how economic coordination occurs. A decade and a half later, Erik Brynjolfsson and Andrew McAfee sharpened this argument in Race Against the Machine, showing empirically that digital technologies were decoupling productivity growth from wage growth and reshaping institutional form. The interaction between humans and machines, they argued, was no longer linear. It was recursive.
Once feedback loops accelerate and begin circulating among autonomous systems, optimization itself becomes unstable. Classical optimization assumes two things: a stable objective function and a largely passive environment. We know what we are maximizing, and the world does not fundamentally change because we pursue that objective more efficiently. Autonomous systems violate both assumptions. Their actions reshape the environment that generates the data they learn from. Optimization begins to alter the rules of optimization.
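One way to make this instability concrete is the textbook cobweb model: each producer optimizes against last period's price, and the aggregate of those individually "optimal" responses moves the very price they relied on. The sketch below is my own toy illustration with arbitrary parameter values, not a model from this article or from any system it describes:

```python
# Toy cobweb model (illustrative assumptions throughout): linear demand
# p = 20 - q, and naive supply q = supply_slope * p_prev, i.e. producers
# optimize against last period's price as if the environment were passive.

def cobweb(supply_slope, periods=12, p0=12.0):
    """Simulate market-clearing prices under naive (stale-data) supply."""
    prices = [p0]
    for _ in range(periods):
        q = supply_slope * prices[-1]   # each producer reacts to the old price
        p = 20.0 - q                    # the market clears against actual supply
        prices.append(p)
    return prices

def last_gap(prices, slope):
    """Distance of the final price from the equilibrium p* = 20 / (1 + slope)."""
    return abs(prices[-1] - 20.0 / (1.0 + slope))

stable = cobweb(supply_slope=0.8)     # supply less responsive than demand
unstable = cobweb(supply_slope=1.2)   # supply more responsive than demand

print(round(last_gap(stable, 0.8), 3), round(last_gap(unstable, 1.2), 3))
```

When supply is less responsive than demand, prices spiral inward toward equilibrium; when it is more responsive, the same optimization rule makes oscillations grow without bound. Nothing in the agents' behavior changes between the two runs; only the feedback structure does, which is the point: optimization against stale data can be self-stabilizing or self-destabilizing depending on causal structure the optimizer never models.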
This is where systems like Clawd Bot cease to be technical curiosities and begin to function—both experientially and statistically—like a new class of economic participant.
The scale and speed of adoption are instructive. Within weeks of its public release, Clawd Bot’s open-source repository accumulated over 100,000 GitHub stars, placing it among the fastest-growing developer projects in the platform’s history. On peak days, star counts increased by 15,000–20,000 in twenty-four hours—an adoption curve typically associated with foundational infrastructure rather than experimental tooling. Within its first week of virality, the project drew millions of site visits, and was rapidly integrated into workflows ranging from personal task delegation to enterprise-scale process orchestration.
More revealing than raw popularity was how the system was used. Clawd Bot was not deployed as a one-off automation. Users configured it to persist—to monitor, decide, escalate, renegotiate, and adapt over time. In parallel, an experimental agent-only social environment emerged around it, populated by tens of thousands of autonomous agents interacting with one another without direct human prompting. What appeared was not merely automation, but ongoing participation.
This behavioral profile matters. Systems that maintain intent across contexts, reallocate resources, and adapt strategy without continuous instruction begin to resemble economic actors in the formal sense: entities that make decisions, influence state transitions, and generate downstream effects over time. At this point, the language of tools begins to fail. What emerges is not instrumentation, but agency distributed across code, data, and feedback.
Academic theory is beginning to catch up. A 2025 paper introduced the concept of Decentralized Autonomous Machines (DAMs): systems embedded in real economic processes, operating through rules and feedback rather than direct human oversight. Their defining feature is not intelligence, but the diffusion of causal responsibility. Actions occur, consequences propagate, yet no single decision-maker fully owns the outcome.
This reframes a long-standing problem in economic theory. Market failure has traditionally been explained through externalities, asymmetric information, or coordination breakdowns. A growing body of causal economics suggests that many modern failures stem instead from causal decoupling—situations in which those who initiate decisions systematically avoid bearing downstream costs while retaining upstream benefits. Under such conditions, optimization does not converge. It drifts. The system appears efficient by its own metrics, yet increasingly misaligned with social outcomes. This is not inefficiency in the traditional sense; it is structural mis-causation.
Labor markets illustrate this clearly. Empirical studies across OECD economies show that automation affects not only employment levels but labor income shares, wage elasticity, and job stability. In advanced economies, robotics adoption has been associated with declining wage responsiveness to unemployment and a falling labor share of national income. In emerging economies, automation imported through global supply chains has, in some sectors, suppressed labor income altogether. These divergences are not technological inevitabilities; they reflect differences in institutional causality—who absorbs risk, who captures surplus, and who remains exposed.
Even at the individual level, the effects are measurable. Psychologists now describe technostress: chronic strain produced when human agency is subordinated to machine-paced decision systems. This is not primarily about productivity loss. It is about erosion of control—about the narrowing space in which individuals experience themselves as authors of their own actions.
In such an environment, “optimization results” cease to be neutral signals. They become triggers. Forecasts alter behavior. Rankings reshape attention. Recommendations rewrite preference. Prediction no longer describes the future; it actively participates in its construction. This is the critical fault line between optimization and causation.
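The loop from prediction back to outcome can be sketched with a toy "traffic forecast" in which some drivers divert whenever the published forecast exceeds road capacity. A naive forecast that ignores its own effect is systematically wrong; a self-consistent forecast must be a fixed point of the feedback. All numbers, the diversion rule, and the fixed-point construction are illustrative assumptions of mine, not claims from the article:

```python
# Hypothetical example: 600 commuters intend to drive on a road with
# capacity 500; half of them divert in proportion to the forecast overload.

INTENT, CAPACITY, DIVERT = 600.0, 500.0, 0.5

def realized_traffic(forecast):
    """Traffic that actually materializes after drivers react to the forecast."""
    if forecast <= CAPACITY:
        return INTENT                   # no overload forecast, no one diverts
    overload = min(1.0, (forecast - CAPACITY) / CAPACITY)
    return INTENT * (1.0 - DIVERT * overload)

# A naive forecast predicts intent as-is, then invalidates itself.
naive = INTENT
outcome_naive = realized_traffic(naive)

# A performatively correct forecast f satisfies realized_traffic(f) == f.
# Solve the fixed point by damped iteration.
f = naive
for _ in range(100):
    f = 0.5 * (f + realized_traffic(f))

print(round(outcome_naive, 1), round(f, 1))
```

The naive forecast of 600 produces an outcome of 540, so the forecast is off precisely because people believed it; the fixed-point forecast of 562.5 is the only prediction that survives its own publication. In this regime, forecasting is no longer description: it is an intervention whose accuracy depends on modeling its own causal effect.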
Federico Pistono once proposed a provocative idea in Robots Will Steal Your Job, But That’s OK: technological displacement might enable a post-scarcity economy in which work and income decouple entirely. Related post-capitalist theories echo this optimism. Perhaps automation liberates rather than impoverishes. Whether such futures materialize remains an open question. But all of them hinge on the same unresolved issue: who controls the causal structure of action?
When autonomous systems become primary actors in economic coordination, humans are no longer the fastest—or even the central—decision-makers. Understanding how actions propagate through these systems becomes less a technical challenge than a civilizational one. We are approaching a limit case. Optimization alone can no longer explain economic behavior. It must be constrained by causal understanding.
Optimization without causal control does not merely accelerate outcomes—it obscures responsibility. It allows systems to expand without reflection, guided by metrics that improve even as meaning erodes. A stable, intelligible, and accountable future will not be built by better optimization alone. It will be built by recovering causation—by insisting that systems not only perform, but make sense.
Not because causation is fashionable.
But because it is how responsibility survives at scale.
© 2026 Abel Intelligence Inc. All rights reserved