Description
Deep learning (DL) models are increasingly used in astroparticle physics for tasks such as gamma–hadron separation, neutrino event reconstruction, and cosmic-ray classification. While these models achieve remarkable predictive accuracy, their opacity poses a challenge to the epistemic standards of discovery. Heatmap-based explainable AI (XAI) techniques promise insight into model reasoning, yet visualization alone cannot justify scientific claims. This paper identifies this explanation–justification gap and proposes epistemic preconditions for closing it. By situating these conditions within contemporary practices of detector-based inference, the paper clarifies when heatmaps can contribute to justified knowledge in astroparticle physics.
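To make concrete what kind of artifact is under discussion, the following is a minimal sketch of a gradient-based heatmap, one common family of heatmap XAI methods. It assumes a toy PyTorch classifier; the architecture, input shape, and class labels are illustrative stand-ins, not the models or data analyzed in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a gamma/hadron classifier over detector images.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 64 * 64, 2),  # two classes: gamma vs. hadron (assumed)
)
model.eval()

# Dummy event image; requires_grad lets us differentiate w.r.t. pixels.
image = torch.randn(1, 1, 64, 64, requires_grad=True)

score = model(image)[0, 0]  # logit for the (assumed) "gamma" class
score.backward()            # gradient of the class score w.r.t. the input

# The absolute input gradient is the heatmap: pixels whose perturbation
# most affects the class score are highlighted. Note that nothing in this
# map, by itself, justifies a physical claim about the event.
heatmap = image.grad.abs().squeeze()
print(heatmap.shape)  # torch.Size([64, 64])
```

The point of the sketch is exactly the gap the abstract names: the heatmap is cheap to produce and visually suggestive, but it is a statement about the model's sensitivities, not yet evidence about the physics.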