Description
Providing clear explanations is crucial in interdisciplinary
research fields such as bioinformatics, where non-experts in machine
learning must be able to understand model decisions before they can
trust the system. This work
introduces an explainable AI approach for compound potency prediction that
combines decision tree models with rule exploration and topic modelling.
The method is shown to yield meaningful insights into ensemble potency
prediction models while preserving overall predictive performance. In
addition, the study presents a feature conversion analysis of selected
feature combinations, which helps identify the contributions of
individual features within the feature space uncovered through topic
modelling and interactive rule exploration. The proposed workflow supports
interpretable, human-centred analysis of model logic and offers a
practical roadmap toward more transparent and trustworthy compound potency
prediction.
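
To make the described pipeline concrete, the sketch below illustrates the
rule exploration plus topic modelling idea under stated assumptions: the
decision rules of each tree in a fitted scikit-learn random forest are
exported as text "documents", and an LDA topic model over those documents
surfaces recurring feature patterns. The synthetic data, tokenisation, and
all parameter choices are illustrative assumptions, not the authors'
implementation.

```python
# Minimal sketch: extract per-tree decision rules as text and fit a topic
# model over them. All data and parameters here are illustrative.
from sklearn.datasets import make_regression
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import export_text

# Stand-in "potency" data: 8 numeric descriptors, one activity value.
X, y = make_regression(n_samples=200, n_features=8, random_state=0)
feature_names = [f"f{i}" for i in range(8)]

# Fit a small ensemble of shallow decision trees.
forest = RandomForestRegressor(n_estimators=20, max_depth=3, random_state=0)
forest.fit(X, y)

# Export each tree's decision rules as a plain-text "document".
rule_docs = [export_text(tree, feature_names=feature_names)
             for tree in forest.estimators_]

# Count feature-name tokens per rule document and fit a topic model, so
# each topic groups trees whose rules rely on similar features.
vectoriser = CountVectorizer(token_pattern=r"f\d+")
counts = vectoriser.fit_transform(rule_docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

# Report the dominant features per topic as a crude interpretability cue.
vocab = vectoriser.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [vocab[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {k}: dominant features {top}")
```

The feature conversion analysis itself is not detailed in the abstract; as
a loose stand-in for quantifying individual feature contributions, one
could run permutation importance on the same fitted ensemble (again an
assumption, not the authors' method):

```python
from sklearn.inspection import permutation_importance

# Mean drop in score when each feature is shuffled: a rough proxy for
# that feature's individual contribution to the ensemble's predictions.
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```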