Description
Within the Lamarr Institute, the topic of trustworthy AI is explored across diverse application contexts and scientific disciplines. Lamarr researchers focus on areas such as developing effective certification and verification procedures for AI systems, ensuring explainability and robustness, and advancing trustworthy AI in domains like physics, life sciences, engineering, and other scientific fields. This work is complemented by broader legal, philosophical, and ethical considerations related to the trustworthiness of AI.

Rather than attempting to cover all ongoing research within this multifaceted and highly interdisciplinary field, we will highlight two key contributions from Lamarr's research on trustworthy AI, both of which focus on the societal relevance of ensuring and implementing trustworthy AI. One spotlight talk will be given by Tim Katzke from Emmanuel Müller's research group, who will present work on "Trustworthy Machine Learning by Design." The other spotlight talk will be given by Rebekka Görge from Maximilian Poretschkin's group, who will discuss the trustworthiness of LLMs, focusing particularly on bias and copyright.
Speakers

| Area Presenter | Jakob Rehof |
| --- | --- |
| Spotlight Presenters | Tim Katzke, Rebekka Görge |