Heitor Gomes (Victoria University of Wellington, New Zealand)18/02/2025, 14:30
In this talk, I will discuss the challenges and opportunities of applying machine learning to streaming data. To illustrate key concepts, I will introduce CapyMOA, a new open-source library designed for efficient real-time learning.
-
Shubham Gupta18/02/2025, 15:10
-
Amal Saadallah (Lamarr Institute-TU Dortmund)18/02/2025, 15:25
Thanks to their inherent interpretability, tree models are widely utilized in various learning tasks, including time series forecasting. However, single tree models often suffer from overfitting, limiting their applicability to real-world scenarios. To address this issue, ensembles of tree models are commonly employed. Yet, ensemble construction must account for the dynamic nature of time...
-
Dr Jens Buß (Lamarr Institute, TU Dortmund University)18/02/2025, 16:00
-
18/02/2025, 16:15
-
Dr Sabine Hunze (TU Dortmund, Grant Office)18/02/2025, 16:18
Presentation by Sabine Hunze (Research Support Service, TU Dortmund) discussing key aspects of the funding landscape and available funding opportunities.
-
Sabine Hunze18/02/2025, 16:58
Following the presentation, there will be time for questions and a discussion where participants can share their experiences with previous application processes, including challenges faced and reasons for rejected applications. This session aims to facilitate knowledge sharing among applicants.
-
Dr Ramsés Sanchéz (Lamarr Institute, University of Bonn)19/02/2025, 10:00
Researchers at Hybrid-ML tackle a wide range of applied and theoretical problems, characterized by different data modalities, such as time series, graphs, natural language, images, and their combinations. Solving these problems requires drawing from an equally broad spectrum of background knowledge, from abstract algebra and statistical physics to cognitive psychology. Yet, despite this...
-
19/02/2025, 10:00
In this talk, we introduce Splitting Stump Forests – small ensembles of weak learners extracted from a trained random forest. The high memory consumption of random forest ensemble models renders them unfit for resource-constrained devices. We show empirically that we can significantly reduce the model size and inference time by selecting nodes that evenly split the arriving training data and...
-
Jakob Rehof19/02/2025, 10:00
-
Katharina Beckh19/02/2025, 10:15
-
Lio Schmitz19/02/2025, 10:20
Sketch-based Modeling and Animation are challenging problems due to the inherent ambiguity, style differences and lack of datasets. We explore how existing methods can be improved by integrating Diffusion Models for Video and 3D content generation. For this, Score-based Distillation Sampling, Optical Flow and alternative sketch representations are considered.
-
Sebastian Buschjäger (Lamarr Institute for ML and AI, TU Dortmund)19/02/2025, 10:30
As machine learning models become increasingly integrated into various applications, the need for resource-aware deployment strategies becomes paramount. One promising approach for optimizing resource consumption is rejection ensembles. Rejection ensembles combine a small model deployed to an edge device with a large model deployed in the cloud, with a rejector tasked to determine the most...
-
Carina Newen19/02/2025, 10:35
-
Hongyu Zhou19/02/2025, 10:40
With a growing interest in 3D Gaussian splatting, there comes a need for geometry processing applications directly on this new representation. In this work, we propose a formulation to compute the Laplace-Beltrami operator, a commonly used tool in geometry processing, directly on Gaussian splatting leveraging the Mahalanobis distance, and show its improvement in accuracy compared to point...
-
Gennady Andrienko (Fraunhofer Institute IAIS), Natalia Andrienko (Fraunhofer Institute IAIS)19/02/2025, 10:55
-
19/02/2025, 11:00
Quadratic unconstrained binary optimization (QUBO) problems are well-studied, not least because they can be approached using contemporary quantum annealing or classical hardware acceleration. However, due to limited precision and hardware noise, the effective set of feasible parameter values is severely restricted. As a result, otherwise solvable problems become harder or even intractable. In...
-
Tim Katzke19/02/2025, 11:00
Currently established graph anomaly detection methods predominantly operate in an unsupervised or self-supervised manner, assuming minimal anomaly contamination and relying solely on data-driven signals to infer notions of (a)normality. While these methods can, in theory, capture the relevant complex structural and attribute-based patterns, they typically do not allow for the meaningful...
-
H.S. Lin (HHU Düsseldorf)19/02/2025, 11:15
-
19/02/2025, 11:30
Can we give outdated gaming consoles a second life in research and teaching? With GENUSES, we upcycle every single component of old PlayStation 4 consoles so that they can serve as a cost-effective teaching kit.
There is a video, check it out: https://www.youtube.com/watch?v=9iUO86Y1t8w
Talk given by Christian Hakert -
Simon Klüttermann19/02/2025, 11:40
My research focuses on ensemble methods for unsupervised learning tasks. Recently, I discovered a surprisingly effective approach for anomaly detection, which I named a Polyra swarm. Upon further investigation, I found that Polyra exhibits a property analogous to the universal function approximation capability of neural networks. This insight has led me to explore an alternative paradigm...
-
Maram Akila (Lamarr / IAIS)19/02/2025, 11:45
-
Zeyu Ding19/02/2025, 12:00
In this talk I introduce MCBench, a benchmark suite designed to assess the quality of Monte Carlo (MC) samples. The benchmark suite enables quantitative comparisons of samples by applying different metrics, including basic statistical metrics as well as more complex measures, in particular the sliced Wasserstein distance and the maximum mean discrepancy. We apply these metrics to point clouds...
-
Sebastian Buschjäger (Lamarr Institute for ML and AI, TU Dortmund)19/02/2025, 12:00
A long, long time ago, in a faraway land, some smart people thought about how to connect hardware and machine learning to make ML and hardware more resource-aware. At this time, the term "resource-aware ML" came along. While our roots go back some 10-15 years now, the term "resource-aware ML" only partially reflects the current trend in ML research. In fact, most new projects and ideas...
-
A. Baudzus19/02/2025, 12:05
-
Florian Mai19/02/2025, 12:20
Large language models are strong heuristic reasoners, but their planning abilities remain poor. We introduce a method for language models to learn to plan from unlabeled data by using a planner model to predict many steps ahead and conditioning the language model on the predicted plans. A crucial parameter in this framework is the level of abstraction of the generated plans: While some tasks...
-
C. van Niekerk19/02/2025, 12:25
-
Bahavathy Kathirgamanathan (Fraunhofer IAIS)19/02/2025, 12:40
-
Armin Berger, Helen Schneider19/02/2025, 12:40
-
19/02/2025, 12:55
-
Jim Bergmann19/02/2025, 14:00
-
19/02/2025, 14:00
-
Prof. Wolfgang Rhode (TU Dortmund)19/02/2025, 14:00
-
Amal Saadallah (Lamarr Institute-TU Dortmund)19/02/2025, 14:10
The study of sunspot numbers is crucial for understanding solar activity and its impact on Earth's climate and space weather. This research analyzes the temporal patterns in historical sunspot data and develops predictive models for long-term forecasting. Using statistical and deep learning techniques, we identify key trends, periodicities, and anomalies in sunspot cycles. The proposed models...
-
19/02/2025, 14:30
Idea Pitches
Use cases of LLMs in scientific discovery - types of applications that LLMs and LLM agents can solve
LLMs in physics, biomedicine and beyond
Research questions and priorities -
Rekha Prasad19/02/2025, 14:30
-
Mirko Bunse (Lamarr Institute, TU Dortmund University)19/02/2025, 14:35
Anomaly and signal detection is one of the most important use cases of machine learning (ML) both in scientific and in commercial applications. Anomalous signals are measured relative to an expected behavior of data, i.e., relative to the background or to the priors. Relevant examples of anomalies and signals in physics can be: an excess of gamma-rays near the center of our Galaxy (a possible...
-
Sascha Mücke (TU Dortmund)19/02/2025, 15:00
-
19/02/2025, 15:00
Introduction to LLMs for personal and social modeling
Lamarr Dagstuhl 2026 on Socially Intelligent AI Systems
(Lucie Flek, Tomer Ullman, Maarten Sap, Jenn Hu)
LLMs and the Theory of Mind
Which research questions are you interested in that can be modeled with LLM social agents?
What are the benefits and risks of such modeling?
How should we prioritize the research questions? -
Zorah Lähner (University of Bonn)19/02/2025, 15:20
Plasma fusion has the potential to provide an efficient and safe energy source; however, several technological challenges remain before fusion reactors can be realized at large scale. One promising direction is the stellarator, in which the plasma is guided into a possibly complex equilibrium flow by magnetic fields. The optimal form for this flow is still unknown and only very sparse...
-
19/02/2025, 15:30
What are the common goals / how to collaborate while keeping our KPIs?
How can we help the industry?
What do we need in Academia?
How can we share resources towards a common goal? -
Rekha Prasad19/02/2025, 15:30
-
Jian-Jia Chen19/02/2025, 15:45
In today's scientific experiments, sensor setups and the algorithms that process sensory data are usually statically configured and applicable only to specific scenarios. The SenSE project, a joint effort of 4 PIs from CS and 4 PIs from Physics, addresses the research question: how can we use machine learning models to make sensors flexible and resilient with respect to changing...
-
19/02/2025, 16:00
Looking into the future
-
Johannes Albrecht (TU Dortmund & LAMARR)19/02/2025, 16:10
High-energy particle physics experiments are on the brink of facing significant challenges in reconstructing complex events due to increasing intensities and energies. The scientific aim of the presented work is to address the growing computational complexity of event reconstruction while enhancing efficiency and improving the precision of analyses in the ATLAS, LHCb, and Belle II experiments....
-
Tim Ruhe (TU Dortmund)19/02/2025, 16:35
Within the ErUM communities, the amount, acquisition rate and diversity of data have rapidly increased over the last decade. While these challenges have been successfully addressed by individual communities with respect to data analysis, the application of FAIR principles is lagging behind, and the existing infrastructure does not allow for a swift and convenient publication of data. Smaller...
-
Prof. Gregor Kasieczka (Universität Hamburg)20/02/2025, 11:00
Lecture in the AI Colloquium by Prof. Gregor Kasieczka (Universität Hamburg)
Abstract: Machine learning and AI have quickly turned into indispensable tools for modern particle physics. They both greatly amplify the power of existing techniques - such as supercharging supervised classification - and enable qualitatively new ways...
-
Prof. Manfred Bayer (President of TU Dortmund University)20/02/2025, 14:15
-
Emmanuel Müller20/02/2025, 14:18
-
Presentation Session on Explainable AI
Speakers:
10:15, K. Beckh, The Anatomy of Evidence - An Investigation into Explainable ICD Coding
10:35, C. Newen, Directional Explainable AI: The Problem of the Rabbit and the Duck
10:55, A. & N. Andrienko, Does the model think as we expect?
11:15, H.S. Lin, Text as parameter: interactive prompt optimisation for large language models -
Jakob Rehof
-
Presentation Block
Speakers List (Preliminary)
11:45, M. Akila, From Local to Global Explanations
12:05, A. Baudzus, Fast Linear Decomposition of ReLU Networks
12:25, C. van Niekerk, Reinforcement Learning from Self-feedback
12:40, B. Kathirgamanathan, Leveraging Human-Centered ML to create more Explainable ML models