AI Colloquium

Safe Learning Systems - Artificial Intelligence and Formal Methods

by Prof. Nils Jansen (Ruhr University Bochum)

Europe/Berlin
JvF25/3-303 - Conference Room (Lamarr/RC Trust Dortmund)

Description

Barbara Hammer

Abstract:

Artificial Intelligence (AI) has emerged as a disruptive force in our society. The increasing applications in healthcare, transportation, military, and other fields underscore the critical need for a comprehensive understanding of the robustness of an AI’s decision-making process. Neurosymbolic AI seeks to develop robust and safe AI systems by combining neural and symbolic AI techniques. We highlight the role of formal methods in such techniques, serving as a rigorous and structured backbone for symbolic AI methods.

We focus on a specific branch of formal methods, namely formal verification, with a particular emphasis on model checking. The most famous application of model checking in AI is in reinforcement learning (RL). RL carries the promise that autonomous systems can learn to operate in unfamiliar environments with minimal human intervention. Why, then, have most autonomous systems not yet adopted RL? The answer is simple: significant challenges remain unsolved. One of the most important is obvious: autonomous systems operate in unfamiliar and unknown environments. This lack of knowledge is referred to as uncertainty. Uncertainty, however, poses a problem when one seeks to employ rigorous state-based techniques such as model checking.
 
In this talk, we explore how various aspects of uncertainty can enter a formal system model to achieve trustworthiness, reliability, and safety in RL. The presented results range from robust Markov decision processes via stochastic games to multi-environment models. Moreover, we explore the direct connection between deep (neural) reinforcement learning and the (symbolic) model-based analysis and verification of safety-critical systems.
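To give a flavor of the robust models mentioned above, the following is a small illustrative sketch (not taken from the talk): a single worst-case Bellman backup over an interval MDP, where transition probabilities are only known to lie within intervals and an adversary resolves the uncertainty pessimistically. The state names, intervals, and values are hypothetical.

```python
def robust_value(succ_lo, succ_hi, values):
    """Worst-case expected value over all distributions p with
    succ_lo[s] <= p[s] <= succ_hi[s] and sum(p) == 1
    (one pessimistic backup step in an interval MDP)."""
    p = dict(succ_lo)                       # start from the lower bounds
    budget = 1.0 - sum(p.values())          # probability mass still to assign
    # the adversary gives the remaining mass to low-valued successors first
    for s in sorted(p, key=lambda s: values[s]):
        extra = min(succ_hi[s] - succ_lo[s], budget)
        p[s] += extra
        budget -= extra
    return sum(p[s] * values[s] for s in p)

# Hypothetical example: from some state, the system reaches the goal s1
# (value 1) or a trap s2 (value 0), with uncertain transition probabilities.
lo = {"s1": 0.6, "s2": 0.2}
hi = {"s1": 0.8, "s2": 0.4}
v = robust_value(lo, hi, {"s1": 1.0, "s2": 0.0})  # worst case: ~0.6
```

The adversary shifts all slack probability mass onto the trap state, so the guaranteed (robust) reachability value is the lower bound 0.6, which is exactly the kind of formal guarantee model checking can certify despite the uncertainty.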
 
Bio: Nils Jansen is a professor at the Ruhr University Bochum, Germany, where he heads the Chair of Artificial Intelligence and Formal Methods. He is also an ELLIS fellow and a full professor of Safe and Dependable AI at Radboud University, Nijmegen, The Netherlands. The mission of his research is to increase the trustworthiness of Artificial Intelligence (AI). He was a research associate at the University of Texas at Austin and received his Ph.D. with distinction from RWTH Aachen University, Germany. He holds several grants in academic and industrial settings, including an ERC Starting Grant titled Data-Driven Verification and Learning Under Uncertainty (DEUCE).