Bachelor/Master Theses and Master Project Topics

This page lists the open BSc and MSc thesis topics, as well as the master project opportunities currently available in the DDIS research group.

If you are interested in any of the listed projects, please do not hesitate to contact the person mentioned in the open topic description.

If there are currently no open topics but you are generally interested in our research (see https://www.ifi.uzh.ch/en/ddis/research.html), or if you would like to propose a thesis based on your own idea, you can email us at ddis-theses@ifi.uzh.ch.

Master Thesis: Who to Ask and What to Correct: Joint Deferral Policies for Multi-Expert Interventions in Concept Bottleneck Models

Description:
Concept Bottleneck Models (CBMs) are interpretable deep learning models that predict human-understandable concepts (e.g. "has black wings") from inputs (e.g., images) and then use those concepts to make final task predictions. A key advantage of CBMs is intervenability: human experts can correct incorrect concept predictions at test time to improve the final output. However, current approaches assume a single oracle expert and uniform intervention costs.
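The two-stage structure and the intervention mechanism can be illustrated with a toy sketch (all weights, concept names, and values below are invented for illustration; a real CBM would use trained neural networks):

```python
import math

# Hypothetical toy CBM: a concept predictor maps an input to concept
# probabilities, and a label predictor maps those concepts to the final
# prediction. All weights here are made up for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(w, v):
    return sum(wi * vi for wi, vi in zip(w, v))

W_CONCEPT = [[2.0, -1.0],    # concept "has black wings"
             [-1.5, 2.5]]    # concept "has long beak"
W_LABEL = [1.0, -1.0]        # final score computed from the two concepts

def predict_concepts(x):
    return [sigmoid(dot(row, x)) for row in W_CONCEPT]

def predict_label(concepts):
    return sigmoid(dot(W_LABEL, concepts))

x = [0.2, 1.0]
c_hat = predict_concepts(x)      # model's concept predictions
y_hat = predict_label(c_hat)     # final prediction from concepts

# Test-time intervention: an expert sets concept 0 to its true value 1.0;
# the final prediction updates without any retraining.
c_fixed = [1.0, c_hat[1]]
y_fixed = predict_label(c_fixed)
```

Because the final prediction is a function of the concepts alone, correcting a single concept immediately propagates to the output.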

This thesis investigates the design of a post-hoc deferral module that jointly selects which concepts to intervene on and which expert to assign those interventions to, in settings with multiple experts of varying expertise and cost. In the first stage, the thesis will develop a post-hoc deferral module that learns to route individual concept interventions to the most suitable expert. In the second stage, methods for modeling and incorporating heterogeneous costs across experts should be explored.

Start date: March/April 2026
Requirements:

  • Strong background in machine learning and deep learning (including theory), ideally with some knowledge in reinforcement learning.
  • Proficiency with PyTorch.
  • Major in AI or Data Science.
  • Recommended courses: Deep Learning, Advanced Topics in Artificial Intelligence.

Please contact Laurin van den Bergh (he/him): bergh@ifi.uzh.ch

Master Thesis: Concept-Based Explainable Out-of-Scope Detection for Large Language Models

Description:
Large Language Models (LLMs) are increasingly used in high-stakes domains like medical triage and legal assistance. However, a major safety risk arises when these models encounter out-of-scope (OOS) queries, i.e., situations they weren't trained or fine-tuned for. Current detection methods are often "black boxes" that fail to explain why a query is being rejected.

This thesis investigates the use of Concept Bottleneck Models (CBMs) to create an interpretable monitoring layer for LLMs. By mapping LLM internals to human-understandable concepts, we aim to build a post-hoc guardrail that detects OOS inputs through specific concept patterns. The thesis will involve developing a distribution model (similar to open-set recognition) on top of concept activations to provide both an OOS signal and a concept-based explanation for that decision.
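One simple instance of such a distribution model can be sketched as follows (the data, thresholds, and concept indices are all invented; the thesis would work with real concept activations extracted from an LLM): fit a per-concept Gaussian to in-scope activations, flag queries whose largest z-score exceeds a calibrated threshold, and report the most deviating concept as the explanation.

```python
import math
import random

# Toy sketch: per-concept Gaussians over in-scope concept activations.
random.seed(0)
IN_SCOPE_MEANS = [1.0, 0.0, 2.0]
in_scope = [[random.gauss(m, 0.1) for m in IN_SCOPE_MEANS]
            for _ in range(500)]

n = len(IN_SCOPE_MEANS)
mu = [sum(row[j] for row in in_scope) / len(in_scope) for j in range(n)]
sd = [math.sqrt(sum((row[j] - mu[j]) ** 2 for row in in_scope) / len(in_scope))
      for j in range(n)]

def z_scores(c):
    return [abs(c[j] - mu[j]) / sd[j] for j in range(n)]

def is_out_of_scope(c, threshold=5.0):
    # The threshold would be calibrated on held-out in-scope data.
    return max(z_scores(c)) > threshold

def explain(c):
    """Index of the concept that deviates most from the in-scope profile."""
    z = z_scores(c)
    return z.index(max(z))

query = [1.0, 0.0, 5.0]   # concept 2 lies far outside its usual range
```

A full-covariance model (e.g., Mahalanobis distance) or a density estimator could replace the diagonal Gaussian, but the explanation mechanism stays the same: the rejection is attributed to named concepts rather than an opaque score.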

Start date: March/April 2026
Requirements:

  • Strong background in Machine Learning and Deep Learning.
  • Experience with Natural Language Processing (NLP).
  • Proficiency in PyTorch.
  • Major in AI or Data Science.
  • Recommended courses: Deep Learning, Advanced Machine Learning.

Please contact Laurin van den Bergh (he/him): bergh@ifi.uzh.ch

Master Project: Improve Human-AI Collaboration for Explainable AI Models

Concept Bottleneck Models (CBMs) are interpretable deep learning models that predict human-understandable intermediate concepts from inputs (e.g., images), and then use those concept predictions to make a final task prediction (Koh et al., ICML 2020).
For example, a model to predict arthritis grade first predicts concepts such as has_sclerosis or has_narrow_joint_space, and then predicts the grade of arthritis based on those concepts.
This design makes the model's reasoning transparent and intervenable: human experts can correct wrong concept predictions at test time, which directly updates the final prediction without retraining.

A practical challenge is that CBMs can have many concepts, but an expert's time is limited, so verifying a long list of concepts for each input is not practical.
Intervention policies address this problem by prioritizing which concepts are presented for intervention, i.e., picking those that are most likely to improve the final prediction if corrected (Sheth et al. (2022); Shin et al. (2023)).
However, these policies are not perfect and might prioritize already correct CBM predictions.
The human expert must therefore be able to judge where it is appropriate to rely on the policy and the CBM's concept predictions, and where to distrust them and intervene instead.
While many policies and forms of explanation for CBM concept predictions exist, it is not clear which are best suited to enable effective human-AI collaboration.
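As a concrete illustration of what an intervention policy computes, here is a minimal uncertainty-based heuristic (an invented example, not any of the cited policies): rank concepts by predictive entropy, on the idea that the least confident concept predictions are the best candidates to show to the expert first.

```python
import math

def binary_entropy(p):
    """Entropy of a Bernoulli prediction; highest at p = 0.5."""
    p = min(max(p, 1e-9), 1 - 1e-9)
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def rank_for_intervention(concept_probs):
    """Return concept indices, most uncertain first."""
    return sorted(range(len(concept_probs)),
                  key=lambda i: binary_entropy(concept_probs[i]),
                  reverse=True)

c_hat = [0.97, 0.55, 0.10]   # hypothetical CBM concept probabilities
order = rank_for_intervention(c_hat)
```

Note the limitation mentioned above: a confident prediction (here concept 0) is ranked last even if it happens to be wrong, which is exactly why the expert's judgment about when to trust the policy matters.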

The goal of this master project is to find out which forms of explanation enable effective human-AI collaboration and lead to increased task performance.
In the first stage of the project, students build an interactive application that runs CBMs on a server and lets users intervene on predictions. Various CBMs and explanation methods should be explored and implemented.
In the second stage, students set up and conduct a user study to evaluate human-AI collaboration in various settings.

Requirements:

  • Deep learning
  • Proficiency with PyTorch and full-stack application development
  • Major in AI, Data Science, or Software Systems
  • Recommended courses: Deep Learning, Advanced Topics in Artificial Intelligence

Start date: Summer 2026
Apply by 31.05.2026 as a group of 3–4 students.
Please contact Laurin van den Bergh (he/him): bergh@ifi.uzh.ch

Master Project: Shaping News Stories with Conversational Feedback

Recommender systems are among the most ubiquitous forms of AI, and we encounter them every day. From TV shows to music and news, AI recommendations help us navigate the overwhelming amount of content available online. They filter information, highlight what is most relevant to each user, and even shape our tastes and preferences over time. Yet the current generation of recommender systems largely operates as a black box: users receive suggestions without insight into why those items were chosen or how their own feedback could influence the model. This opacity can lead to a lack of trust, reduced engagement, and missed opportunities for personalized content curation.

At the same time, advances in natural language processing and conversational AI open new possibilities for turning passive recommendation engines into interactive partners. By allowing users to converse with the system by asking clarifying questions, providing feedback, or correcting misaligned suggestions, we can make recommendations more transparent, adaptable, and aligned with individual needs. Conversational feedback not only improves the relevance of the recommendations but also empowers users to shape the content ecosystem they inhabit.

The proposed master project seeks to bridge these two trends. It will explore how real‑time conversational interactions can be integrated into the existing news recommendation platform Informfully to reinforce stories that resonate with readers, surface underrepresented perspectives, and adapt to shifting interests. Through a combination of deep learning, dialogue management, and human‑in‑the‑loop evaluation, the project aims to demonstrate that conversational feedback can significantly enhance user satisfaction and trust in recommender systems.

Requirements:

  • Knowledge in Machine Learning
  • Proficiency with PyTorch and/or full-stack application development
  • Major in AI, Data Science, or Software Systems

Recommended courses:

  • Deep Learning
  • Reinforcement Learning
  • Advanced Topics in Artificial Intelligence

Start: Fall 2026 – Apply by 31.05.2026 as a group of 3–4 students.

Contact: Noah Mamie nmamie@ifi.uzh.ch

Master Project: AI-Facilitated Group Decision-Making in Real-Time Chat

In group decision-making, voices are frequently raised but remain unheard. People bring different goals, values, and levels of confidence to a conversation, and yet the true power of collective reasoning only emerges when participants genuinely acknowledge each other's ideas. When dominant voices crowd out quieter ones, or when time pressure pushes a group toward premature consensus, the quality of decisions suffers. These dynamics are well-documented in the social and organizational psychology literature, yet they remain difficult to detect and address in practice.

At the same time, advances in large language models and conversational AI open new possibilities for real-time facilitation. Rather than relying on a human moderator to notice imbalances and intervene, an AI facilitator could monitor group conversations as they unfold, for example, detecting when a contribution has gone unacknowledged, when the group is converging too quickly, or when a minority perspective deserves more airtime. By nudging individuals or the group as a whole at the right moment, such a system could help groups make decisions that are not only faster, but fairer and more reflective of the full range of views present.
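One of these facilitation signals can be prototyped as a pure function before any chat-platform integration; the sketch below (message format, names, and the participation threshold are all made-up assumptions) flags participants whose share of the conversation falls far below an even split, so a bot could nudge the group to invite them in.

```python
from collections import Counter

def quiet_participants(messages, participants, ratio=0.5):
    """messages: list of (author, text) pairs.
    Returns participants who sent fewer than `ratio` times the even
    per-person share of messages, sorted alphabetically."""
    counts = Counter(author for author, _ in messages)
    even_share = len(messages) / len(participants)
    return sorted(p for p in participants
                  if counts.get(p, 0) < ratio * even_share)

msgs = [("ana", "I think option A"), ("ben", "Agreed"),
        ("ana", "Also consider cost"), ("ana", "And timing"),
        ("ben", "Sure")]
quiet = quiet_participants(msgs, ["ana", "ben", "cleo"])
```

Other signals hinted at above (unacknowledged contributions, premature convergence) would follow the same pattern: a detector over the message stream paired with a nudging strategy, which is what the project would integrate into the chat environment.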

This master project sets out to build and study exactly that. Working within a multiplayer, chat-based environment such as Mattermost, you will design and integrate a bot that monitors group conversations in real time and intervenes with targeted nudges, delivered privately to individuals or directly to the group. A follow-up project (e.g., a Master's thesis) could investigate how different AI intervention strategies affect decision quality, perceived fairness, and individual experience, and further explore the broader design space of AI-supported group collaboration.

Requirements:

  • Strong programming skills in Python
  • Experience with API and webhook development
  • Familiarity with bot or LLM integration, and logging of structured data
  • Interest in HCI research

Start Date: April / May 2026

Apply by 15.5.2026 as a group of 2–4 students
Please contact Patricia Kahr (she/her): patricia.kahr@uzh.ch