We offer seminars that dive into current research questions and recent developments in our field. These are a great opportunity to engage with cutting-edge topics, practice critical thinking, and get a feel for ongoing work in the lab and beyond. Below is a list of seminar topics we have recently offered or are planning for upcoming terms.

Process

  1. Application: Send your Transcript of Records (CV optional) and topic preferences to patrick.knab@tu-clausthal.de.
  2. Selection: Because slots are limited, you will be informed whether you have been accepted.
  3. Kick-Off Meeting: Introductory session & milestone schedule announcement.
  4. Milestone 1: Submit your seminar paper (~2.5 months in).
  5. Milestone 2: Write two peer reviews.
  6. Milestone 3: Present your paper to the group.
  7. Milestone 4: Submit the camera-ready paper & change log.

Topics

Summer Semester 2025

This semester, we offer the following topics. Unless stated otherwise, topics are open to both Bachelor's and Master's students.
  • Exploring Prompt Loss Weighting in Language Model Fine-Tuning. Supervised fine-tuning adapts a pre-trained language model to specific tasks by training it on labeled datasets. Traditionally, input prompts and their corresponding outputs (completions) receive equal emphasis during training. Recent research, however, has begun to explore assigning different weights to prompts and completions, a method known as Prompt Loss Weighting (PLW). Studies have shown that varying PLW can significantly influence model performance, particularly in tasks with short responses. This seminar investigates the impact of PLW across a broader range of datasets and domains. You will conduct experiments using an implementation provided by us, systematically adjusting PLW during different stages of training and observing its effect on model outcomes. The objective is to develop a scheduling system that dynamically selects optimal PLW values for specific training scenarios. The seminar begins with an overview of existing research on PLW, highlighting key findings and methodologies; the main focus then shifts to practical experimentation, where you implement and evaluate various PLW strategies in different contexts. The results should yield deeper insight into the effective application of PLW in supervised fine-tuning and contribute to better training practices for language models.
  • Reinforcement Learning. This seminar teaches the basics of reinforcement learning in a practical way. You will be provided with a Jupyter Notebook containing the full running code, ready for experiments. You will then adjust a specific parameter of the algorithm, compare the outcomes, and describe the results in a report. In contrast to a theoretical seminar, the literature review in this work is limited.
  • Fine-Tuning Segment Anything. The Segment Anything Model (SAM) is a powerful foundation model for segmentation, capable of processing diverse images and videos across domains. However, its performance can vary depending on the application. To address this, recent research has introduced adapter modules that fine-tune SAM for specific tasks or datasets. In this seminar, you will analyze and compare different adapter strategies for SAM, evaluating their effectiveness in improving segmentation performance across selected use cases.
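The weighting idea behind the Prompt Loss Weighting topic can be sketched in a few lines. This is a minimal illustration, not the implementation provided for the seminar; the function name, the per-token loss values, and the default weight are all made up for the example.

```python
def plw_loss(token_losses, is_prompt, prompt_weight=0.1):
    """Combine per-token cross-entropy losses, scaling prompt tokens.

    token_losses: per-token loss values for one training example
    is_prompt:    True where the token belongs to the prompt,
                  False where it belongs to the completion
    """
    weights = [prompt_weight if p else 1.0 for p in is_prompt]
    total = sum(w * l for w, l in zip(weights, token_losses))
    return total / sum(weights)  # weighted mean over all tokens

# Made-up per-token losses; the first two tokens form the prompt.
losses = [2.0, 1.5, 0.5, 0.25]
mask = [True, True, False, False]
print(plw_loss(losses, mask, prompt_weight=0.0))  # -> 0.375
```

Setting `prompt_weight=1.0` recovers the standard equally weighted loss, while `prompt_weight=0.0` trains on completion tokens only; a PLW schedule would vary this value over the course of training.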
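To give a feel for the kind of parameter study the Reinforcement Learning topic asks for, here is a minimal, self-contained sketch (not the provided notebook): an epsilon-greedy agent on a two-armed bandit, where the exploration rate epsilon is the parameter you would vary and compare.

```python
import random

def run_bandit(epsilon, steps=5000, seed=0):
    """Average reward of an epsilon-greedy agent on a two-armed bandit."""
    rng = random.Random(seed)
    true_means = [0.3, 0.7]      # arm 1 pays off more often (made-up values)
    estimates = [0.0, 0.0]       # running value estimate per arm
    counts = [0, 0]
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                  # explore
        else:
            arm = estimates.index(max(estimates))   # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # incremental mean update of the chosen arm's estimate
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

for eps in (0.01, 0.1, 0.5):
    print(eps, round(run_bandit(eps), 3))
```

Running the loop shows how too much exploration wastes pulls on the weaker arm while a small epsilon still finds the better one; comparing such curves and writing up the result mirrors the seminar's workflow.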
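The adapter idea from the SAM fine-tuning topic can be illustrated with a toy bottleneck module: a small down-project, nonlinearity, up-project block added to a frozen layer with a residual connection. This is a schematic sketch, not an actual SAM adapter; the dimensions, initialization, and placement are illustrative assumptions.

```python
import math
import random

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in m]

class Adapter:
    """Toy bottleneck adapter: x + up(relu(down(x)))."""

    def __init__(self, dim, bottleneck, seed=0):
        rng = random.Random(seed)
        scale = 1.0 / math.sqrt(dim)
        self.down = [[rng.uniform(-scale, scale) for _ in range(dim)]
                     for _ in range(bottleneck)]
        # Zero-initialized up-projection: the adapter starts as an
        # identity map, so fine-tuning begins from the frozen model.
        self.up = [[0.0] * bottleneck for _ in range(dim)]

    def __call__(self, x):
        h = [max(0.0, v) for v in matvec(self.down, x)]   # ReLU bottleneck
        return [xi + ui for xi, ui in zip(x, matvec(self.up, h))]  # residual

adapter = Adapter(dim=8, bottleneck=2)
x = [0.5] * 8
print(adapter(x) == x)  # -> True (zero-initialized up-projection)
```

Real adapters sit inside SAM's transformer blocks and only their small weight matrices are trained; comparing where and how such blocks are inserted is exactly the kind of analysis this seminar covers.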