This thesis explores novel approaches to compositional reasoning in AI that leverage the mathematics of quantum theory as a general probabilistic theory. Starting from the quantum picturalism paradigm, which offers a diagrammatic, category-theoretic language, I show that quantum theory provides practical modelling and computational benefits for AI. A literature survey connects various applications, from quantum game theory and satisfiability to machine learning, natural language processing and cognition. How to formally represent and reason with concepts is a longstanding challenge in cognitive science and AI. My thesis studies the Quantum Model of Concepts (QMC), which equips conceptual space theory with quantum-theoretic semantics. The diagrammatic language serves as a compositional framework for both, exposing common structures and facilitating insights between domains. I implement the model as a hybrid quantum-classical architecture on real quantum hardware to explore how QMCs can form practical intermediate, compositional representations for artificial agents that combine symbolic and subsymbolic reasoning. Addressing the symbol grounding problem, I show that QMC representations can be learned from raw data in a (self-)supervised, subsymbolic way, but also that composite concepts can be grounded in simpler ones, making them interpretable and data-efficient. By transforming quantum concepts into probabilistic generative processes, the QMC can solve visual relational Blackbird puzzles, similar to Raven’s Progressive Matrices, that involve abstraction and perceptual uncertainty.
@phdthesis{gauderisQuantumTheoryKnowledge2023,title={Quantum {{Theory}} in {{Knowledge Representation}}: {{A Novel Approach}} to {{Reasoning}} with a {{Quantum Model}} of {{Concepts}}},shorttitle={Quantum {{Theory}} in {{Knowledge Representation}}},author={Gauderis, Ward and Wiggins, Geraint},year={2023},month=aug,address={Brussels, Belgium},school={Vrije Universiteit Brussel},}
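As an illustrative sketch only, not the thesis implementation, the snippet below shows one way a QMC-style representation could look in code: a composite concept as a tensor product of per-domain qubit states, with graded membership scored by state overlap. The domain encoding, the angle parameters and the 'ripe banana' example are assumptions made purely for illustration.

```python
import numpy as np


def domain_state(theta):
    # Illustrative assumption: encode one conceptual-space domain (e.g. colour, size)
    # as a single-qubit state parametrised by an angle.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])


def composite_concept(*domain_states):
    # Compose a concept over several domains as the tensor product of its domain states.
    state = np.array([1.0])
    for psi in domain_states:
        state = np.kron(state, psi)
    return state


def membership(concept, instance):
    # Graded membership of an instance in a concept as the squared overlap |<concept|instance>|^2.
    return float(abs(np.vdot(concept, instance)) ** 2)


# Hypothetical 'ripe banana' concept over two domains: colour (yellowish) and size (medium).
ripe_banana = composite_concept(domain_state(0.3), domain_state(1.2))

# Two observed instances encoded over the same domains.
yellow_medium = composite_concept(domain_state(0.35), domain_state(1.1))
green_large = composite_concept(domain_state(2.0), domain_state(2.5))

print(membership(ripe_banana, yellow_medium))  # close to 1: good fit
print(membership(ripe_banana, green_large))    # much smaller: poor fit
```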
The Categorical Compositional Distributional (DisCoCat) model has proven very successful in modelling sentence meaning as the interaction of word meanings. Words are modelled as quantum states whose interactions are guided by grammar. This model of language has been extended to density matrices to account for ambiguity in language. Density matrices describe probability distributions over quantum states, and in this work we relate the mixedness of density matrices to ambiguity in the sentences they represent. The von Neumann entropy and the fidelity are used as measures of this mixedness. Via the process of amplitude encoding, we introduce classical data into quantum machine learning algorithms. First, our findings suggest that, in quantum natural language processing, amplitude-encoding data onto a quantum computer can be a useful tool for improving the performance of the quantum machine learning models used. Second, we investigate the effect of these encoded data on the relation between entropy and ambiguity introduced above. We conclude that amplitude-encoding classical data in quantum machine learning algorithms makes the relation between the entropy of a density matrix and the ambiguity in the sentence it models much more intuitively interpretable.
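The quantities used above can be made concrete with a short, self-contained sketch, not the paper's implementation: a classical vector is amplitude-encoded into a pure state, two such 'word senses' are mixed into a density matrix, and its von Neumann entropy and its fidelity with one sense are computed. The function names and toy vectors are illustrative assumptions.

```python
import numpy as np


def amplitude_encode(x):
    # Normalise a classical vector so it can serve as the amplitudes of a pure state.
    x = np.asarray(x, dtype=complex)
    return x / np.linalg.norm(x)


def density_matrix(states, probs):
    # Mix pure states |psi_i> with probabilities p_i into rho = sum_i p_i |psi_i><psi_i|.
    return sum(p * np.outer(psi, psi.conj()) for psi, p in zip(states, probs))


def von_neumann_entropy(rho):
    # S(rho) = -sum_i lambda_i log2 lambda_i over the non-zero eigenvalues of rho.
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]
    return float(-np.sum(eigvals * np.log2(eigvals)))


def fidelity_with_pure(rho, psi):
    # F(rho, |psi>) = <psi| rho |psi> when the second argument is a pure state.
    return float(np.real(psi.conj() @ rho @ psi))


# Two amplitude-encoded word senses mixed with equal weight: an 'ambiguous' word.
sense_a = amplitude_encode([1.0, 0.0, 1.0, 0.0])
sense_b = amplitude_encode([0.0, 1.0, 0.0, 1.0])
rho = density_matrix([sense_a, sense_b], [0.5, 0.5])

print(von_neumann_entropy(rho))          # 1.0 bit: maximal mixedness for two senses
print(fidelity_with_pure(rho, sense_a))  # 0.5: overlap with one sense
```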
We propose χ-net, an intrinsically interpretable architecture combining the compositional multilinear structure of tensor networks with the expressivity and efficiency of deep neural networks. χ-nets match the accuracy of their baseline counterparts. Our novel, efficient diagonalisation algorithm, ODT, reveals linear low-rank structure in a multilayer SVHN model. We leverage this towards formal weight-based interpretability and model compression.
@inproceedings{dooms_compositionality_2024,title={Compositionality {Unlocks} {Deep} {Interpretable} {Models}},url={https://openreview.net/forum?id=bXAt5iZ69l},urldate={2025-02-17},booktitle={Connecting {Low}-{Rank} {Representations} in {AI}: {At} the 39th {Annual} {AAAI} {Conference} on {Artificial} {Intelligence}},author={Dooms, Thomas and Gauderis, Ward and Wiggins, Geraint and Mogrovejo, Jose Antonio Oramas},month=nov,year={2024},}
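The sketch below is not the ODT algorithm of the paper; it only illustrates, under simplifying assumptions, the general principle the paper builds on: exposing and truncating low-rank linear structure in a layer's weights, here via a plain SVD of a synthetic matrix standing in for a trained layer.

```python
import numpy as np


def low_rank_factorisation(weight, rank):
    # Truncated SVD: factor W (m x n) into U_r (m x r) and V_r (r x n),
    # keeping only the top-r singular directions.
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    u_r = u[:, :rank] * s[:rank]  # absorb singular values into the left factor
    v_r = vt[:rank, :]
    return u_r, v_r


def compression_ratio(weight, rank):
    # Parameters of the rank-r factorisation relative to the dense matrix.
    m, n = weight.shape
    return rank * (m + n) / (m * n)


rng = np.random.default_rng(0)
# Stand-in for a trained layer: an (approximately) low-rank component plus small noise.
w = rng.normal(size=(256, 16)) @ rng.normal(size=(16, 128)) + 0.01 * rng.normal(size=(256, 128))

u_r, v_r = low_rank_factorisation(w, rank=16)
approx = u_r @ v_r

rel_error = np.linalg.norm(w - approx) / np.linalg.norm(w)
print(f"relative error: {rel_error:.4f}, parameters kept: {compression_ratio(w, 16):.2%}")
```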
This paper presents Bayesian Ultra-Q Learning, a variant of Q-Learning adapted for solving multi-agent games with independent learning agents. Bayesian Ultra-Q Learning extends the Bayesian Hyper-Q Learning algorithm proposed by Tesauro and is more efficient for solving adaptive multi-agent games. While Hyper-Q agents merely update the Q-value corresponding to a single state, Ultra-Q leverages the observation that similar states most likely yield similar rewards and therefore updates the Q-values of nearby states as well. We assess the performance of our Bayesian Ultra-Q Learning algorithm against three variants of Hyper-Q as defined by Tesauro, and against Infinitesimal Gradient Ascent (IGA) and Policy Hill Climbing (PHC) agents. We do so by evaluating the agents in two normal-form games: the zero-sum game of rock-paper-scissors and a cooperative stochastic hill-climbing game. In rock-paper-scissors, games of Bayesian Ultra-Q agents against IGA agents end in draws in which, averaged over time, all players play the Nash equilibrium, meaning no player can exploit another. Against PHC, neither Bayesian Ultra-Q nor Hyper-Q agents are able to win on average, which contradicts the findings of Tesauro. In the cooperation game, Bayesian Ultra-Q converges towards an optimal joint strategy and vastly outperforms all other algorithms, including Hyper-Q, which fail to find a strong equilibrium due to relative overgeneralisation.
@inproceedings{gauderisEfficientBayesianUltraQ2023,title={Efficient {{Bayesian Ultra-Q Learning}} for {{Multi-Agent Games}}},url={https://alaworkshop2023.github.io/papers/ALA2023_paper_57.pdf},urldate={2024-11-23},booktitle={Proc. of the {{Adaptive}} and {{Learning Agents Workshop}} ({{ALA}} 2023 at {{AAMAS}})},author={Gauderis, Ward and Denoodt, Fabian and Silue, Bram and Vanvolsem, Pierre and Rosseau, Andries},year={2023},month=may,}
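The core idea that distinguishes Ultra-Q from Hyper-Q, spreading a Q-update to nearby opponent-strategy states, can be sketched as follows. The Gaussian similarity kernel, the discretised strategy grid and the tabular layout are assumptions chosen for illustration, not the paper's exact formulation.

```python
import numpy as np


def similarity(state_a, state_b, bandwidth=0.2):
    # Gaussian kernel over (discretised) opponent-strategy estimates:
    # nearby states receive a weight close to 1, distant states close to 0.
    return np.exp(-np.sum((state_a - state_b) ** 2) / (2 * bandwidth ** 2))


def ultra_q_update(q_table, states, visited_idx, action, reward, next_value,
                   alpha=0.1, gamma=0.9):
    # Spread a single Q-learning update to all states, weighted by similarity
    # to the visited state.
    # q_table: (n_states, n_actions); states: (n_states, state_dim) strategy grid points.
    td_target = reward + gamma * next_value
    for i, s in enumerate(states):
        w = similarity(states[visited_idx], s)
        q_table[i, action] += alpha * w * (td_target - q_table[i, action])
    return q_table


# Tiny example: opponent strategies over rock-paper-scissors, discretised into a few grid points.
states = np.array([[1 / 3, 1 / 3, 1 / 3], [0.6, 0.2, 0.2], [0.2, 0.6, 0.2]])
q = np.zeros((len(states), 3))  # 3 actions: rock, paper, scissors

q = ultra_q_update(q, states, visited_idx=0, action=1, reward=1.0, next_value=0.0)
print(q)  # the visited state gets the full update; similar states get a discounted share
```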
@inproceedings{bnaic,title={Quantum {{Theory}} in {{Knowledge Representation}}: {{A Novel Approach}} to {{Reasoning}} with a {{Quantum Model}} of {{Concepts}}},shorttitle={Quantum {{Theory}} in {{Knowledge Representation}}},booktitle={Pre-Proceedings of {{BNAIC}}/{{BeNeLearn}} 2024},year={2024},month=nov,address={Utrecht},author={Gauderis, Ward and Wiggins, Geraint},url={https://bnaic2024.sites.uu.nl/wp-content/uploads/sites/986/2024/11/Quantum-Theory-in-Knowledge-Representation-A-Novel-Approach-to-Reasoning-with-a-Quantum-Model-of-Concepts.pdf},}