Ward Gauderis
Hello! I’m a PhD researcher at the Computational Creativity Lab within the Artificial Intelligence Research Group at the Vrije Universiteit Brussel, supervised by Prof. Dr. Dr. Geraint Wiggins.
I study compositionality in AI — how explicit structure and emergent behaviour interact in intelligent systems. My work bridges symbolic and sub-symbolic AI, with a focus on models that are inherently interpretable by design and can reason and create in human-like ways.
Q-CHARM
How can the compositional design of a model’s structure improve its compositional behaviour?
My PhD project, Q-CHARM, explores this question by distinguishing between the architecture of a model (its compositional design) and the structure that emerges during learning (its compositional behaviour). The goal is to uncover how architectural inductive biases can support interpretability, generalisation, and creative reasoning.
I’m especially interested in bringing together two complementary paths to interpretability: imposing explicit structure before training, for example through neuro-symbolic design, and exposing implicit structure after training, as in mechanistic interpretability. The first offers clarity and human-aligned control but relies on assumptions and supervision. The second reveals emergent structure without guidance, yet often lacks alignment with human reasoning. By embedding high-level structure into models from the start, we can guide learning toward internal representations that generalise well and are easier to interpret — creating a balance where known structure supports the discovery of meaningful organisation.
As a proper Yoneda disciple, I believe that a compositional perspective — grounded in the language of category theory — is key to shaping AI systems we can understand and trust.
CompInterp
Right now, I’m developing the CompInterp approach to interpretability, which treats weights and data as a unified modality to provide a compositional perspective on model design, analysis, and manipulation. By combining tensor and neural network paradigms, our $\chi$-nets pave the way for inherently interpretable AI without sacrificing performance.
$\chi$-nets are compositional by design, both in how they are built and in the representations they learn. Their architecture enables mathematical guarantees and weight-based subcircuit analysis, grounding interpretability in formal (de)compositions rather than post-hoc activation-based approximations.
We’re currently scaling CompInterp methods to CNNs and transformers by leveraging their specialised low-rank structure.
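The idea of grounding interpretability in weight decompositions rather than activation-based approximations can be illustrated with a minimal, hypothetical sketch (this is an illustration of generic low-rank weight analysis, not the actual $\chi$-net method): taking the SVD of a layer's weight matrix splits it into rank-one components that can be inspected, kept, or ablated independently.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "trained layer": an approximately rank-2 weight matrix
# plus small noise, standing in for a learned linear map.
U_true = rng.normal(size=(8, 2))
V_true = rng.normal(size=(2, 8))
W = U_true @ V_true + 0.01 * rng.normal(size=(8, 8))

# SVD decomposes W into rank-one components: W = sum_i s_i * u_i v_i^T.
# Each component is a candidate "subcircuit" we can analyse on its own.
U, s, Vt = np.linalg.svd(W)

# Keep only the top-2 components and discard the rest.
k = 2
W_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The truncated map explains almost all of W: the relative
# reconstruction error is tiny because W is nearly rank-2.
rel_err = np.linalg.norm(W - W_k) / np.linalg.norm(W)
print(f"relative error of rank-{k} approximation: {rel_err:.4f}")
```

The same low-rank viewpoint is what makes specialised structure in CNNs and transformers attractive for weight-based analysis: the decomposition operates on the weights alone, with no forward passes or probing data required.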
Research Interests
If you want my full attention, just mention any of these…
- Compositionality and category theory in AI
- Mechanistic (weight-based) interpretability
- Neuro-symbolic and hybrid architectures
- Models of cognition and computational creativity (e.g., active inference, conceptual spaces)
- Quantum-ish mathematics (e.g., information theory/geometry, Hilbert spaces, tensor networks)
Hobbies
When I’m not thinking about model structure, I’m probably skating through the city, singing, playing chess or table tennis, building something in code, or falling down a philosophical/mathematical rabbit hole. I also love free and open-source software, (board) games, and conversations that stretch the brain a little.
news
Apr 05, 2025 | I have a website now!
Mar 04, 2025 | Thomas Dooms and I are presenting our $\chi$-net poster at CoLoRAI (AAAI 2025)!
latest posts
Feb 11, 2023 | Exploring Bayesian Linear Regression