Eight papers authored or co-authored by CORE members have been accepted at the main conferences and workshops of the 2025 International Conference on Learning Representations (ICLR) in Singapore and the 2025 International Conference on Software Engineering (ICSE) in Seoul.

ICLR Main Conference

Mitigating Information Loss in Tree-Based Reinforcement Learning via Direct Optimization
University of Mannheim
TU Clausthal
University of Rostock
The paper introduces SYMPOL, a novel method that integrates symbolic, tree-based models with policy gradient methods to enhance interpretability in reinforcement learning (RL). By enabling direct, end-to-end optimization of axis-aligned decision trees within standard on-policy RL algorithms, SYMPOL avoids the information loss that typically arises when neural network policies are distilled into decision trees post hoc.
NNsight and NDIF: Democratizing Access to Open-Weight Foundation Model Internals
Northeastern University
TU Clausthal
University of Hamburg
The paper introduces NNsight, an open-source extension to PyTorch enabling deferred remote execution, and NDIF, a scalable inference service for sharing GPU resources and pretrained models. Together, they facilitate transparent access to the internals of large neural networks, such as large language models, without requiring individual hosting of customized models.

ICSE Main Conference

GUIDE: LLM-Driven GUI Generation Decomposition for Automated Prototyping
TU Clausthal
Karlsruhe Institute of Technology
University of Mannheim
The paper introduces GUIDE, a novel approach that leverages large language models (LLMs) to automate the generation of graphical user interface (GUI) prototypes by decomposing high-level textual descriptions into fine-grained GUI requirements. Integrated with the prototyping tool Figma, GUIDE employs a retrieval-augmented generation (RAG) technique to efficiently translate these requirements into editable Material Design components, enhancing controllability and streamlining the prototyping process.

ICLR Workshops

Decision Trees That Remember: Gradient-Based Learning of Recurrent Decision Trees with Memory
University of Mannheim
Boehringer Ingelheim
TU Clausthal
University of Rostock
The paper introduces ReMeDe Trees, a novel decision tree architecture designed to handle sequential data by integrating an internal memory mechanism akin to that of Recurrent Neural Networks (RNNs). Unlike traditional decision trees that rely on feature engineering to capture temporal dependencies, ReMeDe Trees learn hard, axis-aligned decision rules for both output generation and state updates, optimized efficiently via gradient descent.
Disentangling Exploration of Large Language Models by Optimal Exploitation
TU Clausthal
University of Mannheim
The paper investigates the exploration capabilities of large language models (LLMs) by isolating exploration as the sole objective and introducing a framework that decomposes missing rewards into exploration and exploitation components based on the optimal achievable return.
Shedding Light on Task Decomposition in Program Synthesis: The Driving Force of the Synthesizer Model
TU Clausthal
This paper compares ExeDec, a program synthesis framework that uses explicit task decomposition, with REGISM, a variant that relies solely on iterative execution-guided synthesis. While ExeDec performs strongly on length generalization and concept composition thanks to its decomposition strategy, REGISM often matches or exceeds it, suggesting that repeated execution-guided synthesis alone can be equally or more effective in many scenarios.
Beyond Pixels: Enhancing LIME with Hierarchical Features and Segmentation Foundation Models
TU Clausthal
University of Mannheim
The paper introduces DSEG-LIME, an improved framework for Local Interpretable Model-agnostic Explanations (LIME) in image analysis. By integrating data-driven segmentation through foundation models and enabling user-controlled hierarchical segmentation, DSEG-LIME enhances interpretability by aligning explanations more closely with human-recognized concepts, outperforming traditional methods on several explainable AI metrics.
Unreflected Use of Tabular Data Repositories Can Undermine Research Quality
University of Mannheim
TU Clausthal
University of Rostock
University of Freiburg
The paper highlights that indiscriminate utilization of datasets from repositories like OpenML may compromise research integrity. The authors present cases illustrating issues such as suboptimal model selection, neglect of robust baselines, and improper preprocessing, and propose enhancements to data repository practices to bolster empirical research standards.