Sep 3 – 4, 2025
Hörsaalgebäude, Campus Poppelsdorf, Universität Bonn
Europe/Berlin timezone

Rule vs. SHAP: Complementary Tools for Understanding and Verifying ML Models

Not scheduled
1h 30m
Open Space (first floor)

Poster · Human-centered AI Systems Poster Session

Speaker

Bahavathy Kathirgamanathan (Fraunhofer IAIS)

Description

Traditional interpretability techniques such as rule-based models and feature attribution methods each offer complementary strengths but are often applied in isolation. Rule-based approaches are intuitive and logically structured, making them easy to understand, but they often struggle to scale effectively. Feature attribution techniques such as SHAP, on the other hand, are well suited to complex models and large datasets but can fall short in interpretability and alignment with human reasoning. In this paper, we introduce a hybrid, human-centric interpretability framework that integrates rule-based modelling with SHAP-based feature attributions in a visual analytics environment, and we show the benefits of such techniques for interpretability and interactivity. We validate the framework on a case study of fishing vessel trajectories and demonstrate how this integrated approach reveals patterns and discrepancies that would not be visible using either approach alone.
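As background (not part of the paper itself): SHAP attributions are Shapley values of a coalitional game in which each feature either takes its value from the explained instance or from a baseline. The sketch below, a stdlib-only illustration with a hypothetical toy model `f`, computes exact Shapley values by enumerating all coalitions; real SHAP implementations approximate or exploit model structure instead.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at instance x against a baseline.

    A coalition S is evaluated by taking feature values from x for
    features in S and from the baseline otherwise.
    """
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Toy model (an assumption for illustration): linear term plus interaction.
f = lambda z: 2 * z[0] + z[1] * z[2]
x = [1.0, 1.0, 1.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(f, x, baseline)
```

By the additivity property, the attributions sum to `f(x) - f(baseline)`; here the interaction `z[1]*z[2]` is split evenly between the two participating features, which is exactly the kind of pattern a rule-based view can surface differently than an attribution view.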

Authors

Bahavathy Kathirgamanathan (Fraunhofer IAIS), Gennady Andrienko (Fraunhofer Institute IAIS), Natalia Andrienko (Fraunhofer Institute IAIS)

Presentation materials

There are no materials yet.