Trustworthy Machine Learning in Healthcare
by
In-person event only. Registration is required before November 7th at: https://cs.tu-dortmund.de/veranstaltungsanmeldung/
Artificial intelligence can support clinical decision-making and enable new forms of medical research. Yet despite remarkable technical progress, the adoption of AI in healthcare remains slow. The core challenge is trust: models must be reliable, respect privacy, and provide explanations that make sense to human experts. This talk will discuss research on trustworthy machine learning that connects theoretical understanding with practical relevance. I will outline how insights into the geometry of loss surfaces help establish performance guarantees and improve robustness, how federated learning can enable collaboration across institutions without compromising patient privacy, and how explainable and causally informed models can make AI systems scientifically meaningful rather than opaque. The talk will relate these trustworthiness goals to the practical realities of healthcare, drawing on the ongoing collaboration between the Lamarr Institute and the Institute for AI in Medicine (IKIM), and reflect on what it takes to close the gap between research and clinical practice.