Lamarr Lecture Series WS 25/26

Life Sciences and Health

DO: JvF25/3-303 | BN: b-it/1.047

Due to the current weather conditions, the lectures will also be offered online via Zoom: https://tu-dortmund.zoom.us/j/94939494265?pwd=njnJNI9SEExbaZoyDmwBFaV9RFrcXn.1

TU Dortmund University, Lamarr-Institut, Joseph-von-Fraunhofer-Str. 25, 44227 Dortmund, Room 3-303
University of Bonn, Institute for Informatics, Friedrich-Hirzebruch-Allee 8, Room 3.110
Description

AI in the Life Sciences: An Overview by Andrea Mastropietro

This talk will provide an introduction to the application of artificial intelligence (AI), machine learning, and deep learning in life sciences research. It will present the core research activities carried out within the Lamarr Thematic Area of Life Sciences and Health (LSH), illustrated through selected examples from the scientific literature. The seminar will outline the most common tasks in chemoinformatics and discuss how AI-based methods can be employed to address them. The presentation will cover a range of concepts, from predictive to generative AI, and will also introduce explainable AI, highlighting why explainability is a desirable and necessary property of AI algorithms for chemoinformatics. In addition to demonstrating the effectiveness of machine learning and deep learning approaches, the talk will discuss their limitations in learning chemically meaningful information. The talk will conclude with remarks on the international collaborations that contribute to strengthening research within the Lamarr LSH thematic area.
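
To make the kinds of chemoinformatics tasks mentioned above more concrete, the following minimal Python sketch trains a compound activity classifier on Morgan fingerprints. The molecules, labels, and model choice are invented placeholders for illustration, not material from the talk.

    # Hypothetical sketch: predicting compound activity from Morgan
    # fingerprints, one of the standard chemoinformatics tasks. The SMILES
    # strings and labels below are toy placeholders.
    import numpy as np
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestClassifier

    def featurize(smi, radius=2, n_bits=2048):
        """Encode a molecule as a Morgan (ECFP-like) fingerprint bit vector."""
        mol = Chem.MolFromSmiles(smi)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
        arr = np.zeros((n_bits,), dtype=np.int8)
        DataStructs.ConvertToNumpyArray(fp, arr)
        return arr

    smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN"]  # placeholder molecules
    labels = [0, 1, 0, 1]                           # placeholder activity labels

    X = np.stack([featurize(s) for s in smiles])
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    print(model.predict_proba(X)[:, 1])  # predicted probability of activity

In practice the same pipeline scales to realistic compound collections, and the deep learning approaches covered in the talk replace the fingerprint/forest pair with learned molecular representations.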

Intuitive Explainable Artificial Intelligence for Molecular Design by Alec Lamens

The rise of artificial intelligence (AI) has taken machine learning (ML) in molecular design to a new level. As ML increasingly relies on complex deep learning frameworks, the inability to understand the predictions of black-box models has become a topical issue. Consequently, the field of explainable AI (XAI) has attracted strong interest as a way to bridge the gap between black-box models and the acceptance of their predictions, especially at interfaces with experimental disciplines. To meet this need, XAI methods must go beyond extracting learned patterns from ML models and present explanations of predictions in a human-centered, transparent, and interpretable manner. This presentation gives an overview of established XAI concepts and discusses the benefits of incorporating domain-specific knowledge into these approaches. It then explores how XAI can be leveraged in molecular design to render opaque predictive models transparent and to garner the trust required for their practical adoption.
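
As a small, hedged illustration of the kind of transparency discussed above, the sketch below scores individual fingerprint bits by occlusion, reusing the hypothetical model and featurize function from the previous sketch. It is a generic attribution idea, not the method presented in the talk.

    # Occlusion-style attribution over fingerprint bits: zero out each set
    # bit and measure the drop in predicted activity probability. Assumes
    # the hypothetical `model` and `featurize` from the sketch above.
    import numpy as np

    def bit_attributions(model, x):
        """Score each set bit by how much occluding it changes the
        predicted probability of the positive class."""
        base = model.predict_proba(x.reshape(1, -1))[0, 1]
        scores = {}
        for bit in np.flatnonzero(x):
            x_off = x.copy()
            x_off[bit] = 0  # occlude one substructure feature
            scores[int(bit)] = base - model.predict_proba(x_off.reshape(1, -1))[0, 1]
        return scores

    x = featurize("CCN")  # placeholder query molecule
    top = sorted(bit_attributions(model, x).items(), key=lambda kv: -kv[1])[:5]
    print(top)  # fingerprint bits that most supported the prediction

Because each Morgan bit corresponds to an atom environment, such scores can be mapped back onto substructures (e.g., via the bitInfo argument of RDKit's fingerprint routines), which is what makes this style of explanation chemically interpretable.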

Organised by

Vanessa Faber & Brendan Balcerak Jackson