Description
Learning and growing together: The poster session offers Lamarr members a platform to showcase their research projects. Lamarr members across all research areas, working groups, and levels of hierarchy and experience are invited to join the open exchange and discuss the presented work.
Data-based approaches can enable accurate predictions of thermal deformations, which can significantly affect the quality of manufactured components. However, a sufficient amount of data with maximised information content is necessary for efficient training. In this paper, an approach for optimising sensor configurations for predicting thermal deformations is presented. From...
Chirality information (i.e., information that allows distinguishing left from right) is ubiquitous across various data modalities in computer vision, including images, videos, point clouds, and meshes. In contrast to symmetry, which has been studied extensively in the image domain, chirality information in shape analysis (point clouds and meshes) remains underexplored. Although many shape...
Despite advances in conversational systems, the evaluation of such systems remains a challenging problem. Current evaluation paradigms often rely on costly homogeneous human annotators or oversimplified automated metrics, leading to a critical gap in socially aligned conversational agents, where pluralistic values (i.e., acknowledging diverse human experiences) are essential to reflect the...
We address two critical capabilities required for autonomous robots operating in indoor environments, both centered on robust perception of unseen objects. This generalization can support various applications, but here we focus on mobile robotics.
The first focus is on robotic grasping, where 6D pose estimation is needed for successful manipulation. While 6D tracking is now reliable,...
This work explores the applicability of synthetic data for training deep learning models aimed at real-time classification of astronomical radio signals. Building on previous research where lightweight convolutional neural networks (CNNs) using DM-time representations showed promising performance in detecting transient signals, we now turn to the question of whether synthetic datasets can...
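Since DM-time representations are simply two-dimensional arrays, the kind of lightweight CNN classifier referred to above can be sketched in a few lines. The input resolution, layer sizes, and two-class output below are illustrative assumptions, not the architecture used in this work.

```python
import torch
import torch.nn as nn

class LightweightDMTimeCNN(nn.Module):
    """Small CNN for binary classification of 2D DM-time arrays.
    The input shape (batch, 1, 64, 64) is an assumed resolution."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# Example forward pass on a random batch standing in for DM-time data.
model = LightweightDMTimeCNN()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```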
Service robots operating in cluttered human environments such as homes, offices, and schools cannot rely on predefined object arrangements and must continuously update their semantic and spatial estimates while dealing with possibly frequent rearrangements. Identifying all objects in cluttered, occlusion-heavy environments, such as shelves, requires selecting informative viewpoints and...
Tractography enables the reconstruction of white matter pathways from diffusion MRI and is a key tool for studying brain connectivity in both research and clinical contexts. Within the overall tractography pipeline, the parcellation step assigns individual streamlines to specific anatomical bundles, or discards them as false positive detections. We introduce PETParc (Parallel Efficient...
Post-surgical gauze retention can lead to serious complications and necessitate additional surgery for removal. Due to data scarcity, research on gauze segmentation on real-world surgical data remains underexplored. This work presents the first investigation of gauze segmentation on real surgical data. We use widely adopted segmentation architectures, including CNN-based,...
Stochastically sampling word segmentations from a subword tokeniser, also called subword regularisation, is a known way to increase the robustness of language models to out-of-distribution inputs, such as text containing spelling errors. Recent work has observed that the usual augmentations that make popular deterministic subword tokenisers stochastic still cause only a handful of all possible...
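As a concrete illustration of subword regularisation, the SentencePiece library can sample a different segmentation of the same string on each call. The model file name below is a placeholder and the sampling parameters are illustrative; they are not taken from the presented work.

```python
import sentencepiece as spm

# Load a trained SentencePiece model (placeholder path; assumes a unigram model).
sp = spm.SentencePieceProcessor(model_file="tokeniser.model")

text = "subword regularisation increases robustness"
for _ in range(3):
    # enable_sampling=True draws a segmentation at random;
    # alpha smooths the distribution, nbest_size=-1 samples from all candidates.
    print(sp.encode(text, out_type=str, enable_sampling=True,
                    alpha=0.1, nbest_size=-1))
```

Each call typically returns a different split of the same words, which is exactly the variability that subword regularisation exposes the language model to during training.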
In this work, we address unsupervised temporal action segmentation, which segments a set of long, untrimmed videos into semantically meaningful segments that are consistent across videos. While recent approaches combine representation learning and clustering in a single step for this task, they do not cope with large variations within temporal segments of the same class. To address this...
Multi-Agent Path Finding (MAPF) focuses on determining conflict-free paths for multiple agents navigating through a shared space to reach specified goal locations. This problem becomes computationally challenging, particularly when handling large numbers of agents, as frequently encountered in practical applications like coordinating autonomous vehicles or drone swarms. Quantum Computing (QC)...
Large Language Models (LLMs) remain vulnerable to adversarial jailbreaks, yet existing attacks rely on handcrafted priors or require white-box access for gradient propagation. We show that token-level iterative optimization can succeed without gradients and introduce RAILS (RAndom Iterative Local Search), a simple yet effective method using only model logits with a query budget comparable to...
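The general idea of token-level iterative optimization without gradients can be sketched as a random local search over an adversarial suffix. The scoring function below is a placeholder standing in for the model logits, and the loop is an illustrative simplification rather than the RAILS implementation described here.

```python
import random

VOCAB_SIZE = 32000   # assumed tokenizer vocabulary size
SUFFIX_LEN = 20      # assumed adversarial suffix length
QUERY_BUDGET = 500   # assumed number of model queries

def score(suffix_tokens):
    """Placeholder for the attack objective, e.g. a logit-based margin
    returned by the model under attack for the target response."""
    return sum(suffix_tokens) % 97  # dummy value, for illustration only

def random_iterative_local_search():
    suffix = [random.randrange(VOCAB_SIZE) for _ in range(SUFFIX_LEN)]
    best = score(suffix)
    for _ in range(QUERY_BUDGET):
        # Local move: swap one randomly chosen position for a random token,
        # and keep the candidate only if the objective improves.
        pos, tok = random.randrange(SUFFIX_LEN), random.randrange(VOCAB_SIZE)
        candidate = suffix[:pos] + [tok] + suffix[pos + 1:]
        cand_score = score(candidate)
        if cand_score > best:
            suffix, best = candidate, cand_score
    return suffix, best

print(random_iterative_local_search()[1])
```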
In the healthcare domain, sensitive patient data is inherently decentralized across institutions and cannot be centralized due to strict privacy regulations. Federated learning enables collaborative model training without explicitly sharing patient data by communicating model parameters or soft labels. These approaches, however, are still vulnerable to privacy leakage and often limit model...
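For context, the parameter-sharing flavour of federated learning mentioned above is often implemented by averaging locally trained weights (FedAvg-style). The arrays below stand in for model parameters and the sample-count weighting is an illustrative assumption, not a description of the approach on this poster.

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Weighted average of per-client parameter vectors (FedAvg-style).
    client_params: one 1-D parameter array per institution.
    client_sizes: number of local training samples per institution."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_params)          # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Three hospitals with locally trained parameters and different data volumes.
params = [np.array([0.2, 1.0]), np.array([0.4, 0.8]), np.array([0.3, 1.2])]
sizes = [100, 300, 50]
print(federated_average(params, sizes))
```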
Social sciences define values as preferred behaviors or outcomes that motivate an individual's actions or judgments.
While LLMs often reflect biases from their training data, it remains unclear what values underlie their generation processes, and whether such internal value systems can be measured or modified.
In this paper, we investigate whether fine-tuning can steer a model’s internal...
Forecasting high-energy flares in blazars—active galactic nuclei with relativistic plasma jets oriented toward Earth—over extended temporal horizons presents a significant challenge due to the complex variability inherent in their light curves. In this study, we investigate the long-term predictability of flare activity using over 15 years of photon flux observations from the Fermi-LAT...
Emergent Misalignment (EMA) is a puzzling phenomenon where models finetuned on a narrowly misaligned task (e.g., including insecure backdoors in code) learn to be broadly misaligned. EMA is concerning, as models trained on superficially harmless data might become broadly misaligned. At the same time, the fact that alignment behavior across different domains is so strongly correlated during...
Hyperbolic representations are effective in modeling knowledge graph data, which is widely used to facilitate multi-hop reasoning. However, a rigorous and detailed comparison of the two spaces (hyperbolic and Euclidean) for this task is lacking. In this paper, through a simple integration of hyperbolic representations with an encoder-decoder model, we perform a controlled and comprehensive set of experiments to...
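For reference, a common choice of hyperbolic space in this line of work is the Poincaré ball; its distance between embeddings \(\mathbf{x}\) and \(\mathbf{y}\) with \(\lVert\mathbf{x}\rVert,\lVert\mathbf{y}\rVert<1\) is given below. Whether the presented model uses this particular formulation is an assumption, as the excerpt does not say.

\[
d(\mathbf{x},\mathbf{y}) = \operatorname{arcosh}\!\left( 1 + \frac{2\,\lVert \mathbf{x}-\mathbf{y}\rVert^{2}}{\bigl(1-\lVert \mathbf{x}\rVert^{2}\bigr)\bigl(1-\lVert \mathbf{y}\rVert^{2}\bigr)} \right)
\]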
The AI research ecosystem is a demanding, high-pressure environment that profoundly shapes the future of technology. Its effectiveness and sustainability depend not only on technical innovation but also on the people who sustain its progress. Investigating the psychosocial factors that link individual traits to work experiences and mental health is therefore essential for enabling sustainable,...
In this article, we propose a novel quantum regression model by extending the Real-Part Quantum SVM. We apply our model to the problem of stability limit prediction in milling processes, a key component in high-precision manufacturing. To train our model, we use a custom data set acquired through an extensive series of milling experiments using different spindle speeds, enhanced with a custom...
This poster explores two complementary perspectives on optimizing limited resources through computational techniques. First, we present methods for reducing the dynamic range of Quadratic Unconstrained Binary Optimization (QUBO) problems to enhance the performance of current quantum annealers—making better use of quantum hardware as a resource. Second, we address a classical optimization task:...
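To make the first point concrete: the dynamic range of a QUBO instance is commonly taken as the ratio between its largest and smallest non-zero coefficient magnitudes, and a large ratio strains the analog precision of current annealers. The sketch below computes this quantity for a toy coefficient matrix; it illustrates the notion only, not the reduction method presented on the poster.

```python
import numpy as np

def qubo_dynamic_range(Q: np.ndarray) -> float:
    """Ratio of the largest to the smallest non-zero |coefficient| of a QUBO
    matrix Q, where the objective is x^T Q x over binary vectors x."""
    coeffs = np.abs(Q[Q != 0])
    return coeffs.max() / coeffs.min()

Q = np.array([[ 2.0, -8.0,  0.0],
              [ 0.0,  1.0,  0.5],
              [ 0.0,  0.0, -4.0]])
print(qubo_dynamic_range(Q))  # 16.0 (= 8.0 / 0.5)
```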
Traditional interpretability techniques such as rule-based models and feature attribution methods each offer complementary strengths; however, they are often applied in isolation. Rule-based approaches are intuitive and logically structured, making them easy to understand, but they often struggle to scale effectively. On the other hand, feature attribution techniques like SHAP are well-suited to...
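As a minimal illustration of the feature-attribution side, the snippet below computes SHAP values for a small tree model; the toy dataset and model choice are assumptions for the example, not those of the presented work.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy regression data: the target depends mostly on the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3 * X[:, 0] + 0.1 * rng.normal(size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values.shape)  # (5, 3): one attribution per sample and feature
```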
While many have analyzed the resource efficiency of trained models, an important question remains: how can one be sustainable and resource-aware during AI development, or, in other words, when looking for a suitable model to train on a specific learning task? AutoML can help find well-performing models on given data; however, these frameworks focus overly on predictive quality and...
Pallets are one of the most important load carriers for international supply chains. Yet, continuously tracking activities such as driving, lifting, or standing along their life cycle is hardly possible. A preliminary project showed that it is possible to develop a prediction model for pallet activities using data from inertial measurement units mounted on a pallet. A...
Automatic medical coding has the potential to ease documentation and billing processes. For this task, transparency, which can be achieved using explainability methods, plays an important role for medical coders and regulatory bodies. However, the evaluation of these approaches has been mostly limited to short-text and binary settings due to a scarcity of annotated data. Recent efforts by Cheng...
The advancement of artificial intelligence (AI) in intralogistics critically depends on the availability of realistic and diverse datasets. However, existing datasets in this domain often focus on narrow tasks such as object detection or activity recognition, lacking comprehensive three-dimensional (3D) representations of entire intralogistics systems. This paper addresses this gap by...
Understanding causal relationships in oncology is essential for improving treatment strategies and generating testable medical hypotheses. We present CaDSIm (Causal Discovery with Simultaneous Imputation), a new method for learning causal structures and associated Structural Equation Models from real-world pan-cancer data, which is typically high-dimensional, noisy, and incomplete.
Our...
Dynamical systems governed by ordinary differential equations (ODEs) serve as models for a vast number of natural and social phenomena. In this work, we offer a fresh perspective on the classical problem of imputing missing time series data, whose underlying dynamics are assumed to be determined by ODEs. Specifically, we revisit ideas from amortized inference and neural operators, and propose...
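As a simple baseline for the problem setting (not the amortized approach proposed here), when the governing ODE is known, missing values can be imputed by integrating the system to the missing time points. The damped oscillator, observation times, and initial state below are assumed for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def damped_oscillator(t, state, zeta=0.1, omega=2.0):
    """x'' + 2*zeta*omega*x' + omega**2 * x = 0 as a first-order system."""
    x, v = state
    return [v, -2 * zeta * omega * v - omega**2 * x]

# Suppose the state was observed at t=0 but values at later time points are
# missing; integrating the known ODE fills them in.
missing_times = np.array([0.5, 1.0, 1.5, 2.5])
initial_state = [1.0, 0.0]  # position and velocity at t=0

sol = solve_ivp(damped_oscillator, t_span=(0.0, missing_times[-1]),
                y0=initial_state, t_eval=missing_times, rtol=1e-8)
print(dict(zip(missing_times, sol.y[0])))  # imputed positions
```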