Description
We explore what it means to build a scientific "theory" of a black-box model, drawing on van Fraassen's Constructive Empiricism (CE), and demonstrate how such a theory can be used for explainable AI (XAI).
A scientific theory is more than just an explanation: it has value in its own right and also serves as a robust framework for answering a variety of questions.
According to CE, a theory must be both empirically adequate (i.e., accurate with respect to observed data) and shaped by pragmatic virtues, such as user preferences. These criteria align closely with the needs of XAI, which requires both fidelity and comprehensibility.
We turn CE's core notion of empirical adequacy into three concrete criteria: consistency, sufficient predictive performance, and algorithmic adaptability. We develop the Constructive Box Theorizer (CoBoT) algorithm within this framework.
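As a rough illustration of two of these criteria (the abstract does not describe CoBoT's actual procedure, so this is only a hedged sketch using a decision-tree surrogate as a stand-in "theory" of a black-box model, on synthetic data):

```python
# Illustrative sketch (NOT the CoBoT algorithm): checking two empirical-adequacy
# criteria for a simple surrogate "theory" of a black-box model --
# consistency (fidelity to the black box) and sufficient predictive performance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic data stands in for the observed phenomena.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The black box whose behaviour we want a "theory" of.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The "theory": an interpretable model fitted to the black box's observed
# behaviour (its predictions), not to the ground-truth labels.
theory = DecisionTreeClassifier(max_depth=4, random_state=0)
theory.fit(X_train, black_box.predict(X_train))

# Consistency: agreement between theory and black box on held-out data.
consistency = accuracy_score(black_box.predict(X_test), theory.predict(X_test))
# Sufficient predictive performance: the theory's accuracy on the true labels.
performance = accuracy_score(y_test, theory.predict(X_test))
print(f"consistency={consistency:.2f}, performance={performance:.2f}")
```

The third criterion, algorithmic adaptability, concerns how the theory is constructed and revised rather than a single score, so it is not captured by this snippet.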
As a proof of concept, we present a qualitative discussion showing that CoBoT can produce empirically adequate theories, and we illustrate the utility of such a theory for XAI.