Speaker
Description
Traditional interpretability techniques such as rule-based models and feature attribution methods offer complementary strengths, yet they are often applied in isolation. Rule-based approaches are intuitive and logically structured, making them easy to understand, but they often struggle to scale effectively. Feature attribution techniques such as SHAP, on the other hand, are well suited to complex models and large datasets but can fall short in interpretability and alignment with human reasoning. In this paper, we introduce a hybrid, human-centric interpretability framework that integrates rule-based modelling with SHAP-based feature attributions within a visual analytics environment, and we show how this combination improves interpretability and supports interactive exploration. We validate the framework on a case study of fishing vessel trajectories and demonstrate how the integrated approach reveals patterns and discrepancies that neither technique would surface on its own.
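To make the pairing concrete, the sketch below shows one plausible way to place a rule-based view and a SHAP attribution view side by side on the same tabular data. It is not the authors' implementation: the feature names (mean_speed, turn_rate, dist_to_shore), the synthetic labels, and the choice of a shallow decision tree as the rule model are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): pairing human-readable
# rules with SHAP attributions for the same model on tabular features.
# Feature names and data are hypothetical stand-ins for trajectory descriptors.
import numpy as np
import shap
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["mean_speed", "turn_rate", "dist_to_shore"]  # hypothetical
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "fishing" label

# Rule-based view: a shallow decision tree yields human-readable rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Attribution view: SHAP values for the same model and data.
explainer = shap.TreeExplainer(tree)
shap_values = explainer.shap_values(X)
if isinstance(shap_values, list):        # older SHAP versions return one array per class
    shap_values = shap_values[1]
if shap_values.ndim == 3:                # newer versions: (n_samples, n_features, n_classes)
    shap_values = shap_values[:, :, 1]

# Comparing the two views: mean |SHAP| per feature vs. the features the
# rules split on, to surface agreements and discrepancies.
mean_abs_shap = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, mean_abs_shap), key=lambda t: -t[1]):
    print(f"{name:>15s}  mean|SHAP| = {score:.3f}")
```

In a visual analytics setting, the printed rules and the per-feature attribution summary would correspond to two linked views; disagreements between them (a feature the rules ignore but SHAP ranks highly, or vice versa) are the kind of discrepancy the framework is designed to expose.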