This page lists the open BSc and MSc thesis topics, as well as the master project opportunities currently available in the DDIS research group.
If you are interested in any of the listed projects, please do not hesitate to contact the person mentioned in the open topic description.
If there are currently no open topics but you are generally interested in our research (see https://www.ifi.uzh.ch/en/ddis/research.html), or if you would like to propose a thesis based on your own idea, please email us at email@example.com.
In large-scale deliberations, up to a few thousand people may discuss a topic over an extended period of time. For newcomers, it then becomes challenging to grasp the state of the debate without reading through all the comments. Machine learning tools can help mitigate this problem by visualizing opinion maps: groups of users who share certain values are placed close to one another, while users with differing views are placed further apart. The goal of this thesis is to explore algorithms that embed users, based on their likes and dislikes, in a tree-like argumentation map such as https://kialo.com.
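As a rough illustration of the kind of embedding involved, the sketch below projects users into a 2D opinion map from a toy like/dislike matrix using PCA. The vote matrix, user labels, and choice of PCA are all invented for illustration; an actual thesis approach would need to cope with sparsity and the tree structure of the argumentation map.

```python
import numpy as np

# Hypothetical user x comment vote matrix: +1 = like, -1 = dislike, 0 = no vote.
# Rows are users, columns are comments; all values here are invented examples.
votes = np.array([
    [ 1,  1, -1, -1],   # user A
    [ 1,  1, -1,  0],   # user B: similar views to A
    [-1, -1,  1,  1],   # user C: opposite camp
    [-1,  0,  1,  1],   # user D: similar views to C
], dtype=float)

def embed_2d(matrix):
    """Project users into 2D via PCA (SVD of the column-centered vote matrix)."""
    centered = matrix - matrix.mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, :2] * s[:2]   # coordinates on the first two principal components

coords = embed_2d(votes)

def dist(i, j):
    return np.linalg.norm(coords[i] - coords[j])

# Like-minded users end up closer together than opposing ones.
assert dist(0, 1) < dist(0, 2)
```

More sophisticated methods (e.g. non-linear or hierarchical embeddings) would replace the PCA step, but the input/output shape of the problem stays the same: a sparse vote matrix in, 2D user coordinates out.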
If interested, please get in touch with us at the email address below. We can provide a more detailed description during a meeting.
Contact: Fynn Bachmann
Various computer science tasks and domain-oriented applications nowadays rely on a selection or combination of multimodal data. However, there is no unified manner in which multimodality is expressed. Existing work discusses this gap in terminology and taxonomy, but focuses on constructing a taxonomy for multimodal classification approaches rather than for the data itself. The term multimodality is used in different contexts to express data types at a variety of abstraction levels, or even to refer to multi-source or heterogeneous textual data. Here we refer to multimodal data as the diversity of data types (such as images, text, audio) along with their various levels of abstraction (e.g. text: word; and text: document) and feature details. Since it is not always easy to see which types of multimodal data and which datasets are available for a given task, such a taxonomy could facilitate the work of researchers as well as provide a definition space for the multimodal landscape. The goal of this master project is, in a first phase, to create a taxonomy of multimodal data with unique identifiers. In a second phase, this may include linkages to datasets and to domains or use cases, among others. The taxonomy should be publicly available.
Contact: Svenja Lange