Sep 3 – 4, 2025
Hörsaalgebäude, Campus Poppelsdorf, Universität Bonn
Europe/Berlin timezone

Constructive Empiricism for Explainable AI

TAI.1.2
Sep 4, 2025, 1:00 PM
1h 15m
Open Space (first floor)

Poster: Trustworthy AI Poster Session

Speaker

Sebastian Müller (University of Bonn)

Description

We explore what it means to build a scientific "theory" of a black-box model, drawing on van Fraassen's Constructive Empiricism (CE), and demonstrate how such a theory can be used for explainable AI (XAI).
A scientific theory is more than an explanation: it has value in its own right and also serves as a robust framework for answering a range of questions.
According to CE, a theory must be both empirically adequate (i.e., accurate with respect to observed data) and shaped by pragmatic virtues, such as user preferences. These criteria align closely with the needs of XAI, which requires both fidelity and comprehensibility.
We turn CE's core notion of empirical adequacy into three concrete criteria: consistency, sufficient predictive performance, and algorithmic adaptability. We develop the Constructive Box Theorizer (CoBoT) algorithm within this framework.
As a proof of concept, we present a qualitative discussion showing that CoBoT can produce empirically adequate theories, and we illustrate the utility of such a theory for XAI.
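The abstract does not spell out how the adequacy criteria are operationalized. As an illustration of the fidelity component only, a minimal sketch (hypothetical, not CoBoT itself; the black box, the candidate theory, and the 0.95 threshold are all invented for this example) might check a candidate theory's agreement with a black-box model on observed inputs:

```python
import random

# Hypothetical black box: we can only query inputs and observe outputs.
def black_box(x: float) -> int:
    return 1 if x * x > 0.25 else 0

# Candidate "theory": a simple, human-readable surrogate rule.
# (Illustrative only -- CoBoT's actual procedure is not described here.)
def theory(x: float) -> int:
    return 1 if abs(x) > 0.5 else 0

def empirical_adequacy(theory, black_box, samples, threshold=0.95):
    """Check one facet of empirical adequacy: agreement with observed data."""
    agreement = sum(theory(x) == black_box(x) for x in samples) / len(samples)
    return agreement, agreement >= threshold

random.seed(0)
observed = [random.uniform(-1, 1) for _ in range(1000)]
fidelity, adequate = empirical_adequacy(theory, black_box, observed)
```

Here a theory is "adequate" once its agreement on the observed sample clears the chosen threshold; consistency and algorithmic adaptability would need separate checks.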
