Pioneering Causality-Empowered Models
for the Next Generation of Intelligence
Advancing the frontier of causal intelligence by developing foundation models and agents
that reason not only from patterns, but from cause and effect. Unlike large language
models (LLMs) that rely on next-token prediction without explicit causal structure,
our systems are designed to be interpretable, generalizable, and intervention-ready.

Research

causality/
These innovations strengthen the foundations
of intelligence systems across domains.

Research Directions



Our research pioneers the pillars of causality-empowered AI

Beyond these initial directions, our research continues to expand toward new frontiers in causal intelligence,
autonomous reasoning, and scientifically grounded AI systems.

Research Topics

Related Paper List

All

Expert

Causal Agent

White-box
LLM / VLM

Causal Driven
World Model

Latent
Causal Discovery

Counterfactual
Reasoning

Causal-Copilot: An Autonomous Causal Analysis Agent.

Xinyue Wang, Kun Zhou, Wenyi Wu, Har Simrat Singh, Fang Nan, Songyao Jin, Aryan Philip, Saloni Patnaik, Hou Zhu, Shivam Singh, Parjanya Prashant, Qian Shen, Biwei Huang. arXiv preprint arXiv:2504.13263 (2025).

Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models.

Zekai Zhao, Qi Liu, Kun Zhou, Zihan Liu, Yifei Shao, Zhiting Hu, and Biwei Huang. arXiv preprint arXiv:2505.17697 (2025).

Towards generalizable reinforcement learning via causality-guided self-adaptive representations.

Yupei Yang, Biwei Huang, Fan Feng, Xinyue Wang, Shikui Tu, and Lei Xu. arXiv preprint arXiv:2407.20651 (2024).

Modeling Unseen Environments with Language-guided Composable Causal Components in Reinforcement Learning.

Xinyue Wang, and Biwei Huang. arXiv preprint arXiv:2505.08361 (2025).

Learning world models with identifiable factorization.

Yu-ren Liu, Biwei Huang, Zhengmao Zhu, Honglong Tian, Mingming Gong, Yang Yu, and Kun Zhang. Advances in Neural Information Processing Systems 36 (2023): 31831-31864.

Generalized independent noise condition for estimating latent variable causal graphs.

Feng Xie, Ruichu Cai, Biwei Huang, Clark Glymour, Zhifeng Hao, and Kun Zhang. Advances in Neural Information Processing Systems 33 (2020): 14891-14902.

Learning discrete concepts in latent hierarchical models.

Lingjing Kong, Guangyi Chen, Biwei Huang, Eric Xing, Yuejie Chi, and Kun Zhang. Advances in Neural Information Processing Systems 37 (2024): 36938-36975.

Latent hierarchical causal structure discovery with rank constraints.

Biwei Huang, Charles Jia Han Low, Feng Xie, Clark Glymour, and Kun Zhang. Advances in Neural Information Processing Systems.

Differentiable Causal Discovery For Latent Hierarchical Causal Models.

Parjanya Prajakta Prashant, Ignavier Ng, Kun Zhang, and Biwei Huang. arXiv preprint arXiv:2411.19556 (2024).

When and how: Learning identifiable latent states for nonstationary time series forecasting.

Zijian Li, Ruichu Cai, Zhenhui Yang, Haiqin Huang, Guangyi Chen, Yifan Shen, Zhengming Chen, Xiangchen Song, Zhifeng Hao, and Kun Zhang. CoRR (2024).

Why Causality Matters
While today’s LLMs excel at fluency and pattern recognition,
they face critical limitations
Limited
Interpretability
Coherence without
causal structure
Weak
Generalization
Fragile under out-of-distribution
shifts.
No Interventions
or Counterfactuals
Incapable of systematic “what-if”
reasoning for decision support.
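To make the "what-if" reasoning concrete, here is a minimal sketch of a toy structural causal model (illustrative only, not any Abel system): an interventional query E[Y | do(X=x)] recovers the true causal effect, while observational association would be biased by a confounder.

```python
import random

def simulate(n=50_000, do_x=None, seed=0):
    """Sample the mean of Y in a toy SCM: Z -> X, Z -> Y, X -> Y.
    Passing do_x fixes X by intervention, cutting the Z -> X edge."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0, 1)                  # unobserved confounder
        x = z + rng.gauss(0, 1) if do_x is None else do_x
        y = 2 * x + 3 * z + rng.gauss(0, 1)  # true causal effect of X on Y is 2
        total += y
    return total / n

# Interventional contrast E[Y | do(X=1)] - E[Y | do(X=0)]
# recovers the causal effect (about 2), whereas regressing Y on X
# in observational data would be inflated by the Z -> Y path.
effect = simulate(do_x=1) - simulate(do_x=0)
```

Answering such a do-query requires an explicit causal graph; a model trained purely on pattern prediction over observational data cannot distinguish the causal effect from the confounded association.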
See research opportunities.
Join Abelian Groups to stay on top of new
releases, features, and updates.

© 2026 Abel Intelligence Inc. All rights reserved.