
February 17, 2025

Category Theorists in AI

Posted by John Baez

Applied category theorists are flocking to AI, because that’s where the money is. I avoid working on it, both because I have an instinctive dislike of ‘hot topics’, and because at present AI is mainly being used to make rich and powerful people richer and more powerful.

However, I like to pay some attention to how category theorists are getting jobs connected to AI, and what they’re doing. Many of these people are my friends, so I wonder what they will do for AI, and the world at large — and what working on AI will do to them.

Let me list a bit of what’s going on. I’ll start with a cautionary tale, and then turn to the most important program linking AI and category theory today.

Symbolica

When Musk and his AI head Andrej Karpathy didn’t listen to their engineer George Morgan’s worry that current techniques in deep learning couldn’t “scale to infinity and solve all problems,” Morgan left Tesla and started a company called Symbolica to work on symbolic reasoning. The billionaire Vinod Khosla gave him $2 million in seed money. He began with an approach based on hypergraphs, but then these researchers wrote a position paper that pushed the company in a different direction:

Bruno Gavranović, Paul Lessard, Andrew Dudzik, Tamara von Glehn, João G. M. Araújo and Petar Veličković, Position: Categorical Deep Learning is an Algebraic Theory of All Architectures

Khosla liked the new direction and invested $30 million more in Symbolica. At Gavranović and Lessard’s suggestion, Morgan hired category theorists including Dominic Verity and Neil Ghani.
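
To give a rough sense of the position paper’s slogan that neural architectures are algebraic structures: a recurrent network, for instance, can be viewed as an algebra for a functor, with a run of the network given by folding that algebra over the input sequence. Here is a minimal Python sketch of just that one idea; the cell, weights and names are all invented for illustration, and the paper itself is far more general.

```python
import math

# Toy illustration (not from the paper): an RNN-style cell as an algebra
# for the functor F(s) = Input x s; running the network is folding the
# algebra over the input sequence. Weights here are hypothetical.

def cell(x: float, s: float) -> float:
    """Structure map of the algebra: one step of the 'network'."""
    return math.tanh(0.5 * x + 0.9 * s)

def run_rnn(inputs: list[float], s0: float = 0.0) -> float:
    """Fold the cell over the whole sequence (a catamorphism)."""
    s = s0
    for x in inputs:
        s = cell(x, s)
    return s

print(run_rnn([1.0, -0.5, 0.25]))
```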

But Morgan was never fully sold on category theory: he still wanted to pursue his hypergraph approach. After a while, continued disagreements between Morgan and the category theorists took their toll. He fired some, even having one summarily removed from his office. Another resigned voluntarily. Due to nondisclosure agreements, these people no longer talk publicly about what went down.

So one moral for category theorists, or indeed anyone with a good idea: after your idea helps someone get a lot of money, they may be able to fire you.

ARIA

David Dalrymple is running a £59 million program on Mathematics for Safeguarded AI at the UK agency ARIA (the Advanced Research + Invention agency). He is very interested in using category theory for this purpose.

Here’s what the webpage for the Safeguarded AI program says:

Why this programme

As AI becomes more capable, it has the potential to power scientific breakthroughs, enhance global prosperity, and safeguard us from disasters. But only if it’s deployed wisely. Current techniques working to mitigate the risk of advanced AI systems have serious limitations, and can’t be relied upon empirically to ensure safety. To date, very little R&D effort has gone into approaches that provide quantitative safety guarantees for AI systems, because they’re considered impossible or impractical.

What we’re shooting for

By combining scientific world models and mathematical proofs we will aim to construct a ‘gatekeeper’: an AI system tasked with understanding and reducing the risks of other AI agents. In doing so we’ll develop quantitative safety guarantees for AI in the way we have come to expect for nuclear power and passenger aviation.

You can read more details here.

Here are some category theory projects getting money from this program. I’m just quoting the website here.

Axiomatic Theories of String Diagrams for Categories of Probabilistic Processes

Fabio Zanasi, University College London

This project aims to provide complete axiomatic theories of string diagrams for significant categories of probabilistic processes. Fabio will then use these theories to develop compositional methods of analysis for different kinds of probabilistic graphical models.
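
To give a very rough sense of what ‘compositional’ means here: finite probabilistic processes can be represented as stochastic matrices, and the two basic ways of plugging string diagrams together, in sequence and side by side, become matrix multiplication and the Kronecker product. A toy sketch with hypothetical numbers, assuming only numpy:

```python
import numpy as np

# Toy finite Markov kernels as column-stochastic matrices (numbers are
# hypothetical). Sequential composition of string diagrams is matrix
# product; parallel (monoidal) composition is the Kronecker product.
f = np.array([[0.9, 0.2],
              [0.1, 0.8]])   # a kernel on a 2-element set
g = np.array([[0.5, 0.0],
              [0.5, 1.0]])

seq = g @ f                  # first f, then g
par = np.kron(f, g)          # f and g side by side
assert np.allclose(seq.sum(axis=0), 1.0)   # composites are again stochastic
assert np.allclose(par.sum(axis=0), 1.0)
```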

Safety: Core Representation Underlying Safeguarded AI

Ohad Kammar, University of Edinburgh
Team: Justus Matthiesen; Jesse Sigal, University of Edinburgh

This project looks to design a calculus that utilises the semantic structure of quasi-Borel spaces, introduced in ‘A convenient category for higher-order probability theory’. Ohad + team will develop the internal language of quasi-Borel spaces as a ‘semantic universe’ for stochastic processes, define syntax that is amenable to type-checking and versioning, and interface with other teams in the programme—either to embed other formalisms as sub-calculi in quasi-Borel spaces, or vice versa (e.g. for imprecise probability).
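
The rough intuition, at the risk of oversimplifying: in a quasi-Borel space the primitive notion is the random variable, a map out of a sample space, rather than the measure. A crude Python caricature of that single idea (everything here is hypothetical and much simplified):

```python
import math
import random

# Caricature of the quasi-Borel viewpoint: a 'random variable' in X is a
# function from the sample space (here a uniform seed in [0,1)) to X,
# and deterministic maps act on it by pushforward.

def exponential(rate: float):
    """A random variable built from the seed by inverse-CDF transform."""
    return lambda u: -math.log(1.0 - u) / rate

def pushforward(rv, f):
    """Compose a random variable with a deterministic map f."""
    return lambda u: f(rv(u))

square_of_exp = pushforward(exponential(2.0), lambda x: x * x)
print(square_of_exp(random.random()))   # draw one sample
```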

Philosophical Applied Category Theory

David Corfield, Independent Researcher

This project plans to overcome the limitations of traditional philosophical formalisms by integrating interdisciplinary knowledge through applied category theory. In collaboration with other TA1.1 Creators, David will explore graded modal logic, type theory, and causality, and look to develop the conceptual tools to support the broader Safeguarded AI programme.

Double Categorical Systems Theory for Safeguarded AI

David Jaz Myers, Topos Research UK
Team: Owen Lynch; Sophie Libkind; David Spivak, Topos Research UK; James Fairbanks, University of Florida

This project aims to utilise Double Categorical Systems Theory (DCST) as a mathematical framework to facilitate collaboration between stakeholders, domain experts, and computer aides in co-designing an explainable and auditable model of an autonomous AI system’s deployment environment. David + team will expand this modelling framework to incorporate formal verification of the system’s safety and reliability, study the verification of model-surrogacy of hybrid discrete-continuous systems, and develop serialisable data formats for representing and operating on models, all with the goal of preparing the DCST framework for broader adoption across the Safeguarded AI Programme.
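
For a taste of just the ‘systems compose’ part of this story: open dynamical systems have inputs and outputs, and wiring one system’s output to another’s input yields a new system whose state is a pair. A toy Python sketch, with all names invented and none of the double-categorical structure:

```python
from dataclasses import dataclass
from typing import Any, Callable

# Toy 'open system': a state, an update map (state, input) -> state, and
# a readout from states to outputs. Hypothetical; DCST is far more general.

@dataclass
class OpenSystem:
    state: Any
    update: Callable[[Any, Any], Any]
    readout: Callable[[Any], Any]

def series(a: OpenSystem, b: OpenSystem) -> OpenSystem:
    """Wire a's output to b's input; the composite state is a pair."""
    return OpenSystem(
        state=(a.state, b.state),
        update=lambda s, x: (a.update(s[0], x), b.update(s[1], a.readout(s[0]))),
        readout=lambda s: b.readout(s[1]),
    )

double = OpenSystem(0, lambda s, x: x, lambda s: 2 * s)
total = OpenSystem(0, lambda s, x: s + x, lambda s: s)
composite = series(double, total)   # running total of doubled inputs
for x in [1, 2, 3]:
    composite.state = composite.update(composite.state, x)
print(composite.readout(composite.state))   # 6: the doubled 3 is still in flight
```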

Monoidal Coalgebraic Metrics

Filippo Bonchi, University of Pisa

Filippo + team intend to establish a robust mathematical framework that extends beyond the metrics expressible in quantitative algebraic theories and coalgebras over metric spaces. By shifting from a Cartesian to a monoidal setting, this group will examine these metrics in algebraic contexts (to enhance syntax foundations) and coalgebraic contexts (to provide robust quantitative semantics and effective techniques for establishing quantitative bounds on black-box behaviours), ultimately advancing the scope of quantitative reasoning in these domains.

Doubly Categorical Systems Logic: A Theory of Specification Languages

Matteo Capucci, Independent Researcher

This project aims to develop a logical framework to classify and interoperate various logical systems created to reason about complex systems and their behaviours, guided by Doubly Categorical Systems Theory (DCST). Matteo’s goal is to study the link between the compositional and morphological structure of systems and their behaviour, specifically in the way the logic pertaining to these two components works, accounting for both dynamic and temporal features. Such a path will combine categorical approaches to both logic and systems theory.

True Categorical Programming for Composable Systems

Jade Master and Zans Mihejevs, Glasgow Lab for AI Verification (GLAIVE)
Team: André Videla; Dylan Braithwaite, GLAIVE

This project intends to develop a type theory for categorical programming that enables encoding of key mathematical structures not currently supported by existing languages. These structures include functors, universal properties, Kan extensions, lax (co)limits, and Grothendieck constructions. Jade + team are aiming to create a type theory that accurately translates categorical concepts into code without compromise, and then deploy this framework to develop critical theorems related to the mathematical foundations of Safeguarded AI.
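
To see why this is nontrivial: mainstream languages let you use particular functors (map over a list, say) but give you no way to pass functors themselves around as first-class values, let alone Kan extensions or Grothendieck constructions. A Python caricature of just the first step, functors as data, with everything invented for illustration:

```python
from dataclasses import dataclass
from typing import Any, Callable

# Caricature of 'functors as first-class values' (hypothetical; the
# project aims at a type theory, not a dynamic encoding like this one).

@dataclass
class Functor:
    fmap: Callable[[Callable[[Any], Any]], Callable[[Any], Any]]

ListF = Functor(fmap=lambda f: (lambda xs: [f(x) for x in xs]))
MaybeF = Functor(fmap=lambda f: (lambda x: None if x is None else f(x)))

def compose(G: Functor, F: Functor) -> Functor:
    """Functor composition G . F, itself a first-class value."""
    return Functor(fmap=lambda f: G.fmap(F.fmap(f)))

ListMaybe = compose(ListF, MaybeF)
print(ListMaybe.fmap(lambda n: n + 1)([1, None, 3]))   # [2, None, 4]
```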

Employing Categorical Probability Towards Safe AI

Sam Staton, University of Oxford
Team: Pedro Amorim; Elena Di Lavore; Paolo Perrone; Mario Román; Ruben Van Belle; Younesse Kaddar; Jack Liell-Cock; Owen Lynch, University of Oxford

Sam + team will look to employ categorical probability toward key elements essential for world modelling in the Safeguarded AI Programme. They will investigate imprecise probability (which provides bounds on probabilities of unsafe behaviour) and stochastic dynamical systems for world modelling, and then look to create a robust foundation of semantic version control to support the above elements.
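
The imprecise-probability idea is easy to illustrate: instead of committing to one distribution, keep a set of plausible distributions (a credal set) and report lower and upper bounds on the probability of an unsafe event. A toy sketch with invented numbers:

```python
# Toy imprecise probability: a credal set (several plausible
# distributions over outcomes 0, 1, 2) gives interval bounds on the
# probability of the unsafe event. All numbers are hypothetical.

unsafe = {2}
credal_set = [
    {0: 0.70, 1: 0.20, 2: 0.10},
    {0: 0.60, 1: 0.30, 2: 0.10},
    {0: 0.50, 1: 0.30, 2: 0.20},
]

def prob(dist: dict, event: set) -> float:
    return sum(p for outcome, p in dist.items() if outcome in event)

lower = min(prob(d, unsafe) for d in credal_set)
upper = max(prob(d, unsafe) for d in credal_set)
print(f"P(unsafe) is in [{lower}, {upper}]")   # [0.1, 0.2]
```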

Syntax and Semantics for Multimodal Petri Nets

Amar Hadzihasanovic, Tallinn University of Technology
Team: Diana Kessler, Tallinn University of Technology

Amar + team will develop a combinatorial and diagrammatic syntax, along with categorical semantics, for multimodal Petri Nets. These Nets will model dynamical systems that undergo mode or phase transitions, altering their possible places, events, and interactions. Their goal is to create a category-theoretic framework for mathematical modelling and safe-by-design specification of dynamical systems and process theories which exhibit multiple modes of operation.
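
An ordinary, single-mode Petri net is simple to sketch: places hold tokens, and a transition fires by consuming tokens from its input places and producing tokens in its output places. The multimodal nets in this project would additionally let the available places and events vary with the system’s mode. A toy single-mode sketch, with invented names:

```python
# Toy ordinary Petri net (hypothetical names). The multimodal nets in
# this project would also let places and transitions vary with a 'mode'.

marking = {"a": 1, "b": 1, "c": 0}
transitions = {"t": ({"a": 1, "b": 1}, {"c": 1})}   # (inputs, outputs)

def enabled(name: str) -> bool:
    inputs, _ = transitions[name]
    return all(marking[p] >= n for p, n in inputs.items())

def fire(name: str) -> None:
    assert enabled(name), f"{name} is not enabled"
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        marking[p] -= n
    for p, n in outputs.items():
        marking[p] += n

fire("t")
print(marking)   # {'a': 0, 'b': 0, 'c': 1}
```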

Topos UK

The Topos Institute is a math research institute in Berkeley with a focus on category theory. Three young category theorists there—Sophie Libkind, David Jaz Myers and Owen Lynch—wrote a proposal called “Double categorical systems theory for safeguarded AI”, which got funded by ARIA. So now they are moving to the UK, where they will be working with Tim Hosgood, José Siqueira, Xiaoyan Li and maybe others at a second branch of the Topos Institute, called Topos UK.

Glaive

The Glasgow Lab for AI Verification or ‘Glaive’ is a non-profit limited company focusing on applying category theory to AI verification. The people working for it include Dylan Braithwaite, Jade Master, Zans Mihejevs, André Videla, Neil Ghani and Jules Hedges.

Like the Topos UK, Glaive is supported by ARIA.

Conexus AI

Conexus AI, led by Ryan Wisnesky, and the associated open-source categorical data project, apply functorial data migration and related categorical and formal methods to problems in data integration (where the tech is known as generative symbolic AI) and so-called ‘safe AI’ (where the tech is used to validate LLM outputs).
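
Functorial data migration is easy to caricature: a schema mapping sends columns of a target schema to columns of a source schema, and rows migrate by precomposition, roughly the pullback functor Δ of Spivak-style data migration. A toy sketch with invented table names (real CQL schemas carry much more structure):

```python
# Toy functorial data migration (hypothetical schemas): a mapping from
# target columns to source columns migrates rows by precomposition,
# roughly the Delta (pullback) functor of functorial data migration.

mapping = {"name": "full_name", "dept": "department"}
source_rows = [
    {"full_name": "Ada", "department": "Math"},
    {"full_name": "Emmy", "department": "Algebra"},
]

migrated = [{tgt: row[src] for tgt, src in mapping.items()} for row in source_rows]
print(migrated)   # [{'name': 'Ada', 'dept': 'Math'}, {'name': 'Emmy', 'dept': 'Algebra'}]
```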

SandboxAQ

Among other things, Tai-Danae Bradley is a research mathematician at SandboxAQ, a startup focused on AI and quantum technologies. She has applied category theory to natural language processing in a number of interesting papers.
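
One recurring idea in those papers is that language looks like a category enriched in probabilities: the ‘hom’ from an expression to a longer expression carrying it forward is the conditional probability of that continuation. A toy sketch of just that idea, with a made-up corpus:

```python
from collections import Counter

# Toy sketch (hypothetical corpus) of language enriched in probabilities:
# the 'hom' from x to the continuation "x y" is the conditional
# probability of seeing y right after x.

corpus = "the cat sat on the mat the cat ate".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def hom(x: str, y: str) -> float:
    """Enrichment value: P(next word is y | current word is x)."""
    return bigrams[(x, y)] / unigrams[x] if unigrams[x] else 0.0

print(hom("the", "cat"))   # 2/3 in this toy corpus
```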

Posted at February 17, 2025 7:49 PM UTC


Re: Category Theorists in AI

Thank you for this insightful post; that was a fascinating cautionary tale. I appreciate the collection of what is happening with category theory in AI spaces.

Posted by: Eve on February 17, 2025 10:12 PM | Permalink | Reply to this

Re: Category Theorists in AI

Thanks! We’ll have to see how this plays out in the next couple of years.

Posted by: John Baez on February 17, 2025 11:19 PM | Permalink | Reply to this

Re: Category Theorists in AI

I wonder what the common estimation is of the Paris–Huawei work on the topos approach to machine learning. There is of course the behemoth paper “Topos and Stacks of Deep Neural Networks”, which never advanced to publication but is still cited by 10 papers. And then there is the work of Lafforgue and Caramello – they have presentations at the end here.

As I see it, their work is essentially about the neural embedding and decoding of (crisp) theories in the most general (categorical) form in which this can be done, via model theory in terms of toposes. It seems to be the most principled approach, yet there are no published papers.

Posted by: tomr on February 18, 2025 9:07 AM | Permalink | Reply to this

Re: Category Theorists in AI

I’m engaged in quite a few lines of thought on the ARIA project, but one thing I’ve been doing is finally getting down to thinking about an idea I had several years ago, to relate the modes of inference described by Charles Peirce.

I’m giving a talk in Oxford in a couple of days to the Topos Institute. Notes are at the link. I’ll open a thread on this if people are interested in discussing it.

Posted by: David Corfield on February 18, 2025 9:48 AM | Permalink | Reply to this

Re: Category Theorists in AI

Your link didn’t work for me, David, I hope this one will.

Posted by: David Roberts on February 18, 2025 11:57 AM | Permalink | Reply to this

Re: Category Theorists in AI

It seems to work for me. But maybe a magic elf fixed it behind the scenes.

Posted by: David Corfield on February 18, 2025 9:46 PM | Permalink | Reply to this

Re: Category Theorists in AI

David C, what’s your experience of ARIA been like?

They seem to have quickly gone from not existing (or not as far as I noticed) to being a major UK funder. Now and again I get emails from or about them that include a lot of management-speak and buzzwords but don’t explain in plain English what they actually are or do.

I’m suspicious of them both for that reason and because they seem to be linked to the military/intelligence world; I think I read somewhere that they were trying to be the UK analogue of DARPA. So I’m curious to know what working for/with them (or however you’d prefer to phrase it!) has actually been like.

Posted by: Tom Leinster on February 18, 2025 12:36 PM | Permalink | Reply to this

Re: Category Theorists in AI

Tom, since I’m on the TA1.1 phase of this program, it’s largely been the great fun of getting together with loads of category-theoretically minded people to talk over practical and theoretic ideas. I’ve been learning all kinds of things, from the need for graded monads when amortizing to the bialgebraic approach to operational semantics, while reserving time for some Peirce.

I’ve been given no reason to be suspicious of anything troublesome. The goal of our program is very ambitious and I’ve no reason to think it less than worthy.

Posted by: David Corfield on February 18, 2025 9:59 PM | Permalink | Reply to this

Re: Category Theorists in AI

I’m impressed by what David Dalrymple has done… Once upon a time, there was an awareness that the rise of AI logically ends with AI replacing humanity as the dominant “species” on Earth. “AI safety” back then, meant trying to design human-friendliness into a hypothetical superintelligence.

Now we actually have a kind of AI that we can even talk to, and the politicians and tech CEOs have simply embraced the goal of making it more and more powerful, while barely acknowledging the obvious consequences of their actions. In this world of actually existing AI, “AI safety” means all kinds of very-short-term stuff that drowns out concern with the big picture.

And yet Dalrymple has a meta-research program that is designed to remain relevant, all the way to the point at which AI is capable of escaping human control. I can’t vouch for whether it’s enough, but at least he’s facing up to the biggest challenge. The only comparable figure I can think of is Paul Christiano at NIST in the USA (I wonder how he’s doing in the era of DOGE).

Posted by: Mitchell Porter on February 18, 2025 10:28 AM | Permalink | Reply to this

Re: Category Theorists in AI

Yes, I’m glad the UK is funding safeguarded AI, not just AI. Getting a bunch of good mathematicians involved should increase the chance that it will work.

But I am generally pessimistic, because AI safety seems to be taking a back seat to the race to develop AI as fast as possible. Scott Alexander recently joked:

OpenAI has bad luck with its alignment teams. The first team quit en masse to found Anthropic, now a major competitor. The second team quit en masse to protest the company reneging on safety commitments. The third died in a tragic plane crash. The fourth got washed away in a flood. The fifth through eighth were all slain by various types of wild beast.

Only the first two are true, but still.

I guess you noticed that the UK and US did not sign the international agreement on AI at the global summit in Paris on February 11th. It was signed by representatives of these countries:

Armenia
Australia
Austria
Belgium
Brazil
Bulgaria
Cambodia
Canada
Chile
China
Colombia
Cote d'Ivoire
Croatia
Cyprus
Czechia
Denmark
Djibouti
Egypt
Estonia
Finland
France
Germany
Greece
Hungary
India
Indonesia
Ireland
Italy
Japan
Kazakhstan
Kenya
Latvia
Lithuania
Luxembourg
Malta
Mexico
Monaco
Morocco
New Zealand
Nigeria
Norway
Poland
Portugal
Romania
Rwanda
Senegal
Serbia
Singapore
Slovakia
Slovenia
South Africa
Republic of Korea
Spain
Sweden
Switzerland
Thailand
Netherlands
Ukraine
Uruguay
Vatican
European Union
African Union Commission

US vice president Vance claimed AI regulation could “kill a transformative industry just as it’s taking off”. Chris Smith writes:

While America’s stance isn’t exactly shocking, the UK’s approach is more puzzling, especially considering a recent survey in the country that showed Brits are actually concerned about the dangers of AI, especially the more intelligent kind.

That sentence can be read in two ways—pretty funny!

Posted by: John Baez on February 19, 2025 1:53 AM | Permalink | Reply to this

Re: Category Theorists in AI

This is not categorified yet, but still: AIXI may be the most general definition of what intelligence is, namely the most concise description of all registered facts, for prediction purposes. Maybe any other intelligence is just an approximation of AIXI, and superintelligence is then bounded from above by the unapproximated AIXI, if that is possible at all. Two things. 1) AIXI is a summation over all theories, and there is a category of theories. So maybe category theory is the most appropriate framework for solving AIXI. 2) AIXI is noncomputable and, one can argue, impractical. But there is ordinal computing beyond Turing, with ordinal-size machines, which can be quite a practical setting for disciplined and tracked approximation.
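
For reference, a standard compact form of Hutter’s AIXI definition (a sketch; see Hutter’s book for the careful version): the agent weights every environment program $q$ for a universal machine $U$ by $2^{-\ell(q)}$ and picks actions maximizing expected total reward,

$$a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[\, r_k + \cdots + r_m \,\bigr] \sum_{q \,:\, U(q,\, a_1 \dots a_m) \,=\, o_1 r_1 \dots o_m r_m} 2^{-\ell(q)}.$$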

So maybe such an outlook, ideas first and then their categorification, can be fruitful.

And who knows what is being done by Sutskever and ssi.inc; they were saying that an AI wall has been identified where the scaling hypothesis breaks down. Maybe they are into neurosymbolism and categories as well. No hints, but then, what else?

Posted by: Tomr on February 18, 2025 11:10 AM | Permalink | Reply to this

Re: Category Theorists in AI

For neurosymbolic AI inspired in part by sheaves and BS detection, see also arXiv:2401.16713 and this GitHub repo. A proper preprint for the latter and an algorithmic benchmark should be out in a week or two.

Posted by: Steve Huntsman on February 18, 2025 1:08 PM | Permalink | Reply to this

Re: Category Theorists in AI

https://arxiv.org/abs/2502.13953

Posted by: Steve Huntsman on February 20, 2025 2:38 AM | Permalink | Reply to this

Re: Category Theorists in AI

Perhaps Leicester University should be appointing El Naschie to lead a new department of Categorically Safe AI.

Posted by: Gavin Wraith on February 18, 2025 3:27 PM | Permalink | Reply to this

Re: Category Theorists in AI

It is great of you to compile all this! Thank you so much for instantiating this, cf

https://en.wikipedia.org/wiki/Coherence_length

Just calling attention to the issue is already helpful.

Posted by: jack morava on February 18, 2025 8:31 PM | Permalink | Reply to this

Re: Category Theorists in AI

Off topic, but, alas, Bill Browder has passed away.

Speaking of intelligence, he was a rare, remarkable example of the human kind. He was a major builder of the mathematical world we live in; the loss hurts but the work lives on.

Posted by: jack on February 19, 2025 1:49 PM | Permalink | Reply to this

Re: Category Theorists in AI

That’s sad. I never met him.

Posted by: John Baez on February 20, 2025 4:37 AM | Permalink | Reply to this

Re: Category Theorists in AI

Julian Rohrhuber allowed me to share his response, which he posted to the categories mailing list:

This is an interesting blog post, thank you, John.

In a detailed paper from ten years ago, Phillip Rogaway [1] sets out by comparing the situation of post-war nuclear physicists with that of cryptographers today. The Russell–Einstein Manifesto from 1955 paved the road towards the Nuclear Non-Proliferation Treaty. In a similar way, Rogaway argues that cryptographers do not have the luxury of working outside ethical responsibility [2] and should work toward something similar.

As early as Alexander Grothendieck’s pacifist work and his departure from the IHÉS, an argument could be made that category theorists ought to reflect on the impact of their work on military technology. One could say, of course, that mathematics in general has technological, societal and often military impact. But to me it seems that the contribution of category theorists to AI is a concrete and strong case, because this technology is involved in an arms race of sorts. This is closely related to the reason why the paper makes the connection to the Russell–Einstein Manifesto.

Best, Julian

[1] https://eprint.iacr.org/2015/1162.pdf

[2] “As a cryptographer, you can ignore this landscape of power, and all political and moral dimensions of our field. But that won’t make them go away. It will just tend to make your work less relevant or socially useful.” (p. 47)

Posted by: John Baez on February 20, 2025 4:43 AM | Permalink | Reply to this

Re: Category Theorists in AI

I have always thought that rigorous certification of models, the formalization of norms (deontic logic etc.) and provable normative control (alignment) can all be made possible if a neurosymbolic categorical approach is the solution to the technical challenges in AGI/SSI safety. Of course, even democracies can produce dubious norms and alignment instructions (e.g. how much poverty and conscious lack of automation there is today!), but that is a separate problem. So, full speed ahead!

Posted by: Tomr on February 20, 2025 6:41 AM | Permalink | Reply to this
