Department of Informatics Information Management Research Group

Seminar: Information Management (Bachelor / Master)

Description

This year's seminar will deal with the topic "Generative AI". We will strive to integrate technical topics, collaborating with Generative AI, software engineering with Generative AI, business, applications, as well as legal and societal topics. Students can cover a topic through a systematic literature review or engage in an empirical or technical mini-study. Note that the extra effort in this seminar is compensated by 6 ECTS.

Module (Bachelor): 03SM22BIS012

Module (Master): 03SM22MIS012

ECTS Points: 6.0

VVZ Link: BSc, MSc

Dates

  • 21.02.24 16:15 - 17:45     Topic assignment / KickOff Meeting
  • 22.02.24 - 26.02.24     Meetings with advisors 
  • 26.02.24 - 04.03.24    Literature research, topic formulation, and introduction to scientific writing (please watch the recording on MS Teams)
  • 04.03.24 13:00    Final topic formulation to advisors, with sprenkamp@ifi.uzh.ch in CC
  • 28.04.24 23:59    Submission first version
  • 29.04.24 16:15 - 17:45    Introduction to scientific reviewing  
  • 05.05.24 23:59    Submission of reviews 
  • 12.05.24-21.05.24    Discussion Presentations with Advisors (optional)
  • 21.05.24 23:59    Submission final version
  • Block seminar:
    • 23.05.24 15:00 - 18:00
    • 24.05.24 12:30 - 18:00
    • 25.05.24 9:00 - 18:00

Language

English

Output

  • Seminar paper (first readable version, final version)
  • Written feedback to your fellow students on 2-3 other seminar papers ("peer review")
  • Continuous and active participation in the block seminar at the end of the semester, including a presentation of your seminar paper.

Further information

This module takes place in person. Materials and recordings are made available online via MS Teams. The seminar is limited to 24 participants: 16 master's and 8 bachelor's students.

IMPORTANT: As of fall 2022, the allocation of students to seminars at the Department of Informatics will be done through the new module booking tool. More information on the process can be found on our website: Seminar allocation in Informatics

DEADLINE for Registration:

04.02.2024, 24:00

 

Topics for seminar papers

Here you will find the list of topics. Each topic indicates whether it is suitable for bachelor's or master's students. We provide some initial literature to help you identify a topic of interest and prepare for the initial conversation with your advisor.

Please note that the description and initial literature will follow in the upcoming days.

For each topic below, we list the area, the topic, a description, and initial literature.

Technical Topics

Transformer Models

Transformer models, pivotal in the development of LLMs and GenAI, have revolutionized how machines understand and generate human language. Their ability to process large amounts of data simultaneously using attention mechanisms allows for more nuanced and contextually aware language understanding and generation, making them essential for advanced applications in natural language processing, from chatbots to content creation. The impact of transformer models extends beyond language tasks, influencing areas like image recognition and generative art, showcasing their versatility and potential in various AI domains. This topic delves into how the transformer model architecture works.

  • Vaswani, A., et al. (2017). "Attention Is All You Need." In Advances in Neural Information Processing Systems.
  • Radford, A., et al. (2019). "Language Models are Unsupervised Multitask Learners." OpenAI Blog.
  • Devlin, J., et al. (2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding." arXiv preprint arXiv:1810.04805.
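To give a concrete feel for the attention mechanism described above, here is a minimal, illustrative single-head scaled dot-product attention in plain Python, in the spirit of Vaswani et al. (2017) — a sketch for intuition, not code from the listed papers:

```python
import math

def attention(queries, keys, values):
    """Minimal single-head scaled dot-product attention (illustration only)."""
    d_k = len(queries[0])
    outputs = []
    for q in queries:
        # Scaled dot product of this query with every key
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in keys]
        # Softmax over the keys turns scores into attention weights
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Output: weighted average of the value vectors
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# Toy example: 3 tokens with 2-dimensional embeddings
Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(Q, K, V)
print(len(out), len(out[0]))  # 3 2
```

Because the softmax weights sum to one, each output token is a convex combination of the value vectors — this is what lets every token attend to every other token in parallel.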

 

Prompt Engineering

In the realm of LLMs, prompt engineering emerges as a crucial skill for effectively harnessing the power of these models. By meticulously designing prompts, users can guide these models to produce more accurate, creative, or context-appropriate responses, maximizing their utility in diverse applications like automated customer service, content creation, and even complex problem-solving. The art of prompt engineering not only enhances the immediate output quality but also plays a significant role in training models to be more effective and efficient in understanding and responding to human language.

  • Liu, Y., et al. (2021). "Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing." arXiv preprint arXiv:2107.13586.
  • Brown, T.B., et al. (2020). "Language Models are Few-Shot Learners." In Advances in Neural Information Processing Systems.
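The contrast between zero-shot and few-shot prompting (cf. Brown et al., 2020) can be made concrete with simple template strings; the task and examples below are hypothetical illustrations:

```python
# Hedged illustration: prompt templates only; no real model is called.
task = "Classify the sentiment of the review as positive or negative."
review = "The battery died after two days."

# Zero-shot: instruction plus input only
zero_shot = f"{task}\n\nReview: {review}\nSentiment:"

# Few-shot: prepend worked examples to guide the model
examples = [
    ("Great screen and fast shipping.", "positive"),
    ("Stopped working within a week.", "negative"),
]
shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
few_shot = f"{task}\n\n{shots}\nReview: {review}\nSentiment:"

print(few_shot)
```

The few-shot variant changes nothing about the model itself; it only reshapes the input so the model can infer the expected output format and labels from the demonstrations.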

 

Generative AI Platforms

Generative AI platforms encompass a broad range of systems that leverage artificial intelligence to create new content, ranging from text and images to music and code. Key applications include automated content generation for marketing, artistic creation, personalized user experiences, and innovative problem-solving in various domains. The development and ethical implications of these platforms are also a significant area of interest, as they raise questions about originality, copyright, and the impact on creative industries.

  • Wessel, Michael, et al. "Generative AI and its transformative value for digital platforms." Journal of Management Information Systems (2023).
  • Fréry, Frédéric. "Digital generative AI platforms: when technology disrupts management models."
  • Khan, Mehtab. "Fair Use Considerations for Generative AI Platforms." Available at SSRN 4535169 (2023).

 

Comparing ChatGPT and Bard/Gemini

This topic involves a comparative analysis between ChatGPT, developed by OpenAI, and Bard/Gemini, developed by Google, in the realm of conversational AI. The focus is on contrasting their capabilities in understanding, generating, and interacting in natural language conversations. Key points of comparison include the underlying technology, training methods, the breadth and depth of knowledge each system can access, response accuracy, and the ability to maintain context in a conversation. 

  • OpenAI. (2023). ChatGPT [Large language model]. https://chat.openai.com/chat
  • Google. (2023). Bard/Gemini [Large language model]. https://bard.google.com/chat, https://gemini.google.com/app

 

Generative AI Beyond Text

Generative AI's application extends beyond text, encompassing image, audio, and video generation, fundamentally altering content creation across various domains. These technologies use advanced machine learning models like generative adversarial networks and transformer models adapted for non-text data to generate realistic and creative outputs. Applications range from creating photorealistic images and art, generating music and synthetic voices, to producing deepfakes and virtual reality environments. This expansion of generative AI challenges traditional notions of creativity and raises significant ethical and societal questions, particularly regarding authenticity, copyright, and the impact on creative industries.  

  • Aydın, Ömer, and Enis Karaarslan. "Is ChatGPT leading generative AI? What is beyond expectations?" (2023).
  • Bommasani, Rishi, et al. "On the opportunities and risks of foundation models." arXiv preprint arXiv:2108.07258 (2021).
  • Yuan, Yang. "On the power of foundation models." International Conference on Machine Learning. PMLR, 2023.

 

Why generative AI is hallucinating (and what you can do against it)

GenAI systems, such as LLMs, are prone to hallucinations. By their nature, these differ from common AI mistakes, such as wrong estimations. However, it is not clear what kinds of hallucinations are out there. Exemplary dimensions *could* be based on task (e.g., reasoning, mathematical errors) or output (e.g., visual, textual, ...).

  • Leiser, Florian, et al. "From ChatGPT to FactGPT: A Participatory Design Study to Mitigate the Effects of Large Language Model Hallucinations on Users." Proceedings of Mensch und Computer 2023. 2023. 81-90.
  • Yao, Jia-Yu, et al. "Llm lies: Hallucinations are not bugs, but features as adversarial examples." arXiv preprint arXiv:2310.01469 (2023).
  • Ji, Ziwei, et al. "Survey of hallucination in natural language generation." ACM Computing Surveys 55.12 (2023): 1-38.
  • Zhang, Yue, et al. "Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models." arXiv preprint arXiv:2309.01219 (2023).

 

Tools for implementing generative AI applications

The implementation of GenAI applications relies on a suite of tools and frameworks that facilitate the development, training, and deployment of generative models. These tools range from machine learning libraries and platforms to specialized software for handling large datasets, model training, and optimization. The choice of tools can significantly impact the efficiency, scalability, and accessibility of GenAI applications, making their selection a crucial step in the development process. The task for the student is to review and map the given landscape, with the possibility of developing adequate prototypes.

  • Nguyen-Duc, Anh, et al. "Generative Artificial Intelligence for Software Engineering--A Research Agenda." arXiv preprint arXiv:2310.18648 (2023).
  • Dhoni, Pan. "Unleashing the Potential: Overcoming Hurdles and Embracing Generative AI in IT Workplaces: Advantages, Guidelines, and Policies." Authorea Preprints (2023).

Collaborating with Generative AI

Collaborating with Generative AI as an Individual

Collaborating with Generative AI as an individual involves leveraging AI tools to augment personal creativity, productivity, and problem-solving. This collaboration can manifest in various forms, such as using AI for writing assistance, graphic design, data analysis, coding or even personal decision-making. The key aspect is the symbiotic relationship where the individual's expertise and creativity are complemented by the AI's processing power and data-driven insights. This partnership can lead to enhanced creativity, more efficient workflows, and the exploration of new ideas and solutions that might not be feasible by humans alone.

  • Fui-Hoon Nah, Fiona, et al. "Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration." Journal of Information Technology Case and Application Research 25.3 (2023): 277-304.
  • Suh, Minhyang, et al. "AI as social glue: uncovering the roles of deep generative AI during social music composition." Proceedings of the 2021 CHI conference on human factors in computing systems. 2021.
  • Pavlik, John V. "Collaborating with ChatGPT: Considering the implications of generative artificial intelligence for journalism and media education." Journalism & Mass Communication Educator 78.1 (2023): 84-93.

 

Collaborating with Generative AI in a Dyad

This topic explores the dynamics of collaboration between two individuals facilitated by Generative AI. It focuses on how AI can enhance and influence the interaction, communication, and productivity within a dyadic partnership, such as between a manager and employee, doctor and patient, or teacher and student. The discussion includes how AI can provide personalized insights, mediate communication, and support decision-making processes. The aim is to understand the role of AI in augmenting human-to-human interactions in various professional and personal contexts, ensuring that the technology is used to strengthen and not hinder the collaborative relationship.

  • Fui-Hoon Nah, Fiona, et al. "Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration." Journal of Information Technology Case and Application Research 25.3 (2023): 277-304.
  • Zhang, Rui, et al. "Investigating AI Teammate Communication Strategies and Their Impact in Human-AI Teams for Effective Teamwork." Proceedings of the ACM on Human-Computer Interaction 7.CSCW2 (2023): 1-31.
  • Seeber, Isabella, et al. "Machines as teammates: A research agenda on AI in team collaboration." Information & management 57.2 (2020): 103174.

 

Collaborating with Generative AI in a Small Group

In smaller groups, Generative AI can provide more personalized assistance, tailor recommendations to specific group needs, and enhance the creative process on a more intimate scale. This collaboration can be particularly beneficial for tasks requiring innovation, problem-solving, and detailed analysis.

The AI's role in a small group involves generating ideas, providing data-driven insights, and assisting with the creation of content or solutions that might not be immediately obvious to human collaborators. However, the integration of AI in small groups also demands careful consideration of its impact on group dynamics, communication, and decision-making processes. Ensuring that the AI complements human skills without dominating the conversation is crucial.

  • Fui-Hoon Nah, Fiona, et al. "Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration." Journal of Information Technology Case and Application Research 25.3 (2023): 277-304.
  • Memmert, Lucas, and Navid Tavanapour. "Towards Human-AI-Collaboration in Brainstorming: Empirical Insights into the Perception of Working with a Generative AI." (2023).
  • Tan, Seng Chee, Wenli Chen, and Bee Leng Chua. "Leveraging generative artificial intelligence based on large language models for collaborative learning." Learning: Research and Practice 9.2 (2023): 125-134.

 

Collaborating with Generative AI in a large group

In large group settings, Generative AI can play a pivotal role in managing complex dynamics and large datasets, facilitating decision-making, and enhancing creativity. This collaboration is particularly relevant in large-scale projects and organizational management. The AI can analyze vast amounts of data, identify patterns, and provide recommendations, while the human participants contribute strategic decision-making, contextual insights, and creative input. The challenge lies in ensuring effective communication and integration of AI capabilities within the group's workflow.

  • Fui-Hoon Nah, Fiona, et al. "Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration." Journal of Information Technology Case and Application Research 25.3 (2023): 277-304.
  • Memmert, Lucas, and Navid Tavanapour. "Towards Human-AI-Collaboration in Brainstorming: Empirical Insights into the Perception of Working with a Generative AI." (2023).

 

Collaborating with Generative AI in a community  

Collaborating with Generative AI at the community level involves using AI to address collective needs and challenges, enhancing community engagement, and facilitating collaboration. This can include applications in smart city planning, public health monitoring, educational outreach, and community resource management. The AI systems can process community-generated data to provide insights, predict trends, and support decision-making processes. The focus is on harnessing AI to serve the community's interests, ensuring inclusivity, fairness, and transparency in the process.  

  • Dautov, Rustem, et al. "Towards Community-Driven Generative AI." Position Papers of the 18th Conference on Computer Science and Intelligence Systems (2023): 43.
  • Burtch, Gordon, Dokyun Lee, and Zhichen Chen. "The consequences of generative ai for ugc and online community engagement." Available at SSRN 4521754 (2023).

 

Human-AI Teaming 

Human-AI teaming (HAIT) constitutes a human-centered approach to AI implementation at work, as its aspiration is to leverage the respective strengths of each party. Students explore the possibilities and limitations of incorporating AI/GenAI into current workflows (e.g., business, professional, academic, or even personal) and discuss their potential use and future.

  • Hemmer, Patrick, et al. "Human-AI Complementarity in Hybrid Intelligence Systems: A Structured Literature Review." PACIS (2021): 78.
  • Bouschery, Sebastian G., Vera Blazevic, and Frank T. Piller. "Augmenting human innovation teams with artificial intelligence: Exploring transformer‐based language models." Journal of Product Innovation Management 40.2 (2023): 139-153.
  • Bansal, Gagan, et al. "Does the whole exceed its parts? the effect of ai explanations on complementary team performance." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021.

Software Engineering and Generative AI

Using Generative AI for Software Development

The use of Generative AI in software development marks a significant shift in how software is conceptualized, designed, and implemented. This involves utilizing AI to generate code, automate testing, optimize algorithms, and even aid in system design. Generative AI can contribute to various stages of the software development lifecycle, from initial requirements gathering and design to coding, testing, and maintenance. The integration of AI in this field aims to enhance productivity, reduce human error, and accelerate the development process, while also posing challenges in terms of ensuring code quality, ethical considerations, and the potential impact on the role of human developers.

  • Ebert, Christof, and Panos Louridas. "Generative AI for software practitioners." IEEE Software 40.4 (2023): 30-38.
  • Pothukuchi, Ameya Shastri, Lakshmi Vasuda Kota, and Vinay Mallikarjunaradhya. "Impact of Generative AI on the Software Development Lifecycle (SDLC)." International Journal of Creative Research Thoughts 11.8 (2023).
  • Chen, Mark, et al. "Evaluating large language models trained on code." arXiv preprint arXiv:2107.03374 (2021).

 

Software Engineering for Generative AI

Software engineering for Generative AI focuses on developing and maintaining software systems that enable and support generative AI models. This encompasses designing robust architectures, developing scalable and efficient algorithms, ensuring data quality and security, and creating interfaces for human-AI interaction. The field also involves addressing challenges unique to AI systems, such as managing the unpredictability of generative models, ensuring ethical use of AI, and handling the computational demands of training large models. Software engineering in this context is not just about technical proficiency but also about understanding the broader implications of AI systems in society.  

  • Pothukuchi, Ameya Shastri, Lakshmi Vasuda Kota, and Vinay Mallikarjunaradhya. "Impact of Generative AI on the Software Development Lifecycle (SDLC)." International Journal of Creative Research Thoughts 11.8 (2023).
  • Ebert, Christof, and Panos Louridas. "Generative AI for software practitioners." IEEE Software 40.4 (2023): 30-38.
  • Sun, Jiao, et al. "Investigating explainability of generative AI for code through scenario-based design." 27th International Conference on Intelligent User Interfaces. 2022.
  • Fan, Angela, et al. "Large language models for software engineering: Survey and open problems." arXiv preprint arXiv:2310.03533 (2023).

 

Developing Apps based on Large Language Models – understanding the ecosystem

APIs and frameworks dedicated to the development of apps based on LLMs evolve rapidly; some problems that occurred in the early stages of broad LLM adoption (e.g., until April 2023) have since become less pressing. This topic calls for a retrospective analysis of what has changed in recent years and a projection of potential future developments.

  • Topsakal, Oguzhan, and Tahir Cetin Akinci. "Creating large language model applications utilizing langchain: A primer on developing llm apps fast." Proceedings of the International Conference on Applied Engineering and Natural Sciences, Konya, Turkey. 2023.
  • Rillig, Matthias C., et al. "Risks and benefits of large language models for the environment." Environmental Science & Technology 57.9 (2023): 3464-3466.
  • Kargaran, Amir Hossein, et al. "MenuCraft: Interactive Menu System Design with Large Language Models." arXiv preprint arXiv:2303.04496 (2023).
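As an illustration of the pattern such frameworks package, here is a minimal, self-contained sketch of a prompt-template-plus-model "chain" (the core abstraction popularized by LangChain); the function names are our own and the model call is a stub, not a real provider API:

```python
# A minimal sketch of the "chain" pattern LLM app frameworks provide:
# a prompt template whose output is piped into a model call.

def prompt_template(template):
    """Return a function that fills the template from keyword arguments."""
    def fill(**kwargs):
        return template.format(**kwargs)
    return fill

def fake_llm(prompt):
    # Stand-in for a real LLM call; any callable str -> str would fit here.
    return f"[model answer to: {prompt!r}]"

def chain(*steps):
    """Compose steps: the first takes keyword args, the rest take the previous result."""
    def run(**kwargs):
        result = steps[0](**kwargs)
        for step in steps[1:]:
            result = step(result)
        return result
    return run

summarize = chain(
    prompt_template("Summarize in one sentence: {text}"),
    fake_llm,
)
print(summarize(text="LLM frameworks evolve rapidly."))
```

Real frameworks add retries, streaming, tool calls, and provider adapters around this core, which is one reason the ecosystem changes so quickly.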

Business

Challenges and conceptualization of Reliance on Generative AI/ Large Language Models 

Only if people rely on GenAI can it be used to its full potential. However, GenAI introduces peculiar challenges for reliance and use. While reliance is often seen as implementing the AI's decision, reliance on GenAI rather entails an "acceptance" of AI output, such as a text formulation. It is unclear how these concepts relate and what reliance on GenAI entails. In this thesis, the student should query existing literature on reliance with a focus on GenAI and come up with a novel conceptualization.

  • Chen, Mark, et al. "Evaluating large language models trained on code." arXiv preprint arXiv:2107.03374 (2021).
  • Lee, John D., and Katrina A. See. "Trust in automation: Designing for appropriate reliance." Human factors 46.1 (2004): 50-80.
  • Schemmer, Max, et al. "Appropriate reliance on AI advice: Conceptualization and the effect of explanations." Proceedings of the 28th International Conference on Intelligent User Interfaces. 2023.
  • Amaro, Ilaria, et al. "AI Unreliable Answers: A Case Study on ChatGPT." International Conference on Human-Computer Interaction. Cham: Springer Nature Switzerland, 2023.

 

Generative AI Business Models: The perspective of the Consumer of Generative AI

From the consumer's perspective, Generative AI business models encompass the various ways in which consumers access and pay for these services. This perspective focuses on the value proposition, cost, accessibility, and usability of these AI services. Consumers of Generative AI may engage with models through direct subscription services, pay-per-use models, freemium models with premium features, or even ad-supported platforms. Key considerations for consumers include the quality and reliability of the AI-generated content, the ethical implications of using such technology, data privacy concerns, and the potential impact on personal or organizational productivity and creativity. Understanding these models is crucial for consumers to make informed decisions about integrating Generative AI into their personal or professional lives.

  • Kanbach, Dominik K., et al. "The GenAI is out of the bottle: generative artificial intelligence from a business model innovation perspective." Review of Managerial Science (2023): 1-32.
  • Chui, Michael, et al. "The economic potential of generative AI." (2023).
  • Sohn, Kwonsang, et al. "Artificial intelligence in the fashion industry: consumer responses to generative adversarial network (GAN) technology." International Journal of Retail & Distribution Management 49.1 (2020): 61-80.

 

Generative AI Business Models: The perspective of the Provider of Generative AI

For providers of Generative AI, business models are centered around creating, maintaining, and monetizing AI technologies. This includes strategies for developing AI solutions, pricing models, distribution channels, and customer engagement. Providers must consider the costs associated with training and updating AI models, infrastructure requirements, and compliance with ethical and legal standards. Revenue models can vary from direct sales, subscription-based services, to offering AI capabilities through APIs. Providers also face the challenge of differentiating their offerings in a competitive market, ensuring reliability and quality of service, and addressing the evolving needs and concerns of users regarding AI technologies.

  • Kanbach, Dominik K., et al. "The GenAI is out of the bottle: generative artificial intelligence from a business model innovation perspective." Review of Managerial Science (2023): 1-32.
  • Chui, Michael, et al. "The economic potential of generative AI." (2023).

 


Generative AI Governance

Generative AI Governance encompasses the policies, ethical guidelines, and regulatory frameworks that guide the development, deployment, and use of generative AI technologies. This includes considerations around data privacy, intellectual property rights, transparency, accountability, and the societal impact of AI-generated content. Effective governance is crucial for ensuring that generative AI technologies are used responsibly and ethically, and that they contribute positively to society. It involves collaboration among various stakeholders, including AI developers, users, regulatory bodies, and the public, to balance innovation with societal norms and legal requirements.

  • Ferrari, Fabian, José van Dijck, and Antal van den Bosch. "Observe, inspect, modify: Three conditions for generative AI governance." New Media & Society (2023): 14614448231214811.
  • Baum, Kevin, et al. "From fear to action: AI governance and opportunities for all." Frontiers in Computer Science 5 (2023): 1210421.

 

Interorganizational processes

This topic examines how LLMs and GenAI are transforming interorganizational processes. It focuses on the integration of these technologies in facilitating communication, automating information exchange, and enhancing decision-making between organizations. The discussion includes how LLMs can be used for advanced data analysis, generating reports, and providing insights that support collaborative projects, while GenAI can assist in creating simulations or models for joint ventures. The aim is to explore the potential of these AI technologies in streamlining and optimizing interactions between organizations, leading to more efficient, innovative, and effective collaboration in various sectors.

  • Prasad Agrawal, Kalyan. "Towards adoption of Generative AI in organizational settings." Journal of Computer Information Systems (2023): 1-16.
  • Yamazaki, Tomomi, and Ichiro Sakata. "Exploration of Interdisciplinary Fusion and Interorganizational Collaboration With the Advancement of AI Research: A Case Study on Natural Language Processing." IEEE Transactions on Engineering Management (2023).

Applications

Conversational Government with Large Language Models 

Conversational government with Large Language Models refers to the use of advanced AI-driven conversational agents in public sector services. This can enhance citizen engagement, streamline service delivery, and provide personalized assistance. LLMs can be employed in various government functions, from answering queries to assisting in form-filling and providing information about public services. The key challenges include ensuring accuracy, privacy, and fairness in AI interactions, and integrating these systems within existing governmental frameworks.

  • Baldauf, Matthias, and Hans-Dieter Zimmermann. "Towards Conversational E-Government: An Experts’ Perspective on Requirements and Opportunities of Voice-Based Citizen Services." HCI in Business, Government and Organizations: 7th International Conference, HCIBGO 2020, Held as Part of the 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings 22. Springer International Publishing, 2020.
  • Androutsopoulou, Aggeliki, et al. "Transforming the communication between citizens and government through AI-guided chatbots." Government information quarterly 36.2 (2019): 358-367.
  • Jo, Eunkyung, et al. "Understanding the benefits and challenges of deploying conversational AI leveraging large language models for public health intervention." Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 2023.

 

 

Price Negotiations with Generative AI 

The applications of GenAI are manifold. One application could be price negotiations for goods (such as cars). Price negotiations are tedious and cognitively intensive, and people do not like to conduct them. In this thesis, the student should query existing literature on negotiation support systems (NSS) that include AI, with a focus on GenAI/LLMs.
A bachelor's student should define design aspects for an LLM-based NSS and create mockups (e.g., paper-based or click-dummies).

  • Lin, Eleanor, James Hale, and Jonathan Gratch. "Toward a Better Understanding of the Emotional Dynamics of Negotiation with Large Language Models." Proceedings of the Twenty-fourth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing. 2023.
  • Lewis, Mike, et al. "Deal or no deal? end-to-end learning for negotiation dialogues." arXiv preprint arXiv:1706.05125 (2017).
  • "When will negotiation agents be able to represent us? The challenges and opportunities for autonomous negotiators."

 

Mapping GenAI capabilities and task characteristics of advisory services

GenAI can support a wide range of tasks with zero-shot learning, i.e., letting the model predict without having been trained on a data set specific to the task. However, some tasks are potentially better supported than others. In this topic, the student will first derive different task characteristics of advisory services and then map them to the capabilities of GenAI. Therein, the student should highlight different maturity stages of GenAI capabilities and showcase current limits. The level of task abstraction and the specific focus must be discussed.
  • Lu, Fangzhou, Lei Huang, and Sixuan Li. "ChatGPT, Generative AI, and Investment Advisory." Available at SSRN (2023).
  • Shabsigh, Ghiath, and El Bachir Boukherouaa. "Generative Artificial Intelligence in Finance." FinTech Notes 2023.006 (2023).

 

Detecting state of mind with Generative AI

Understanding the emotional and cognitive states of people is crucial for many tasks, including education, medical consultations, or financial advice. For humans, it is often easy to recognize even very subtle cues and react appropriately to them (e.g., by showing empathy). When integrating GenAI into such settings, e.g., to replace or augment human resources, it seems crucial that such systems can detect people's emotional and cognitive states of mind as well. However, GenAI is often not specifically trained for such purposes. So, what are the possibilities and limits of current approaches to detecting people's state of mind with GenAI?

  • Ringeval, Fabien, et al. "AVEC 2019 workshop and challenge: state-of-mind, detecting depression with AI, and cross-cultural affect recognition." Proceedings of the 9th International on Audio/visual Emotion Challenge and Workshop. 2019.
  • Lam, Jimmy, Willem-Paul Brinkman, and Merijn Bruijnes. "Generative algorithms to improve mental health issue detection." (2021).

 

Generative AI for Creativity: Art and Design 

This topic explores the impact of Generative AI on art and design, highlighting how these technologies are reshaping creative expression. Participants will examine the use of AI in generating novel artistic concepts and designs, from visual arts to fashion. 

  • Epstein, Ziv, et al. "Art and the science of generative AI." Science 380.6650 (2023): 1110-1111.
  • Hughes, Rowan T., Liming Zhu, and Tomasz Bednarz. "Generative adversarial networks–enabled human–artificial intelligence collaborative applications for creative and design industries: A systematic review of current approaches and trends." Frontiers in artificial intelligence 4 (2021): 604234.

 

Generative AI and Cybersecurity

This topic delves into the complex relationship between Generative AI and cybersecurity, examining both the potential risks and innovative solutions brought forth by these technologies. Participants will explore how Generative AI can be used to create sophisticated cyber threats, such as deepfakes, phishing attacks, and advanced malware, posing new challenges for cybersecurity professionals. Conversely, the study will also highlight how the same AI technologies can enhance cybersecurity measures, aiding in threat detection, response automation, and system resilience. 

  • Gupta, Maanak, et al. "From chatgpt to threatgpt: Impact of generative ai in cybersecurity and privacy." IEEE Access (2023).
  • Michael, Katina, Roba Abbas, and George Roussos. "AI in Cybersecurity: The Paradox." IEEE Transactions on Technology and Society 4.2 (2023): 104-109.

 

Applications of Large Language Models in (mental) Healthcare

This topic centers on the use of Large Language Models in (mental) healthcare, drawing on recent literature and market analysis. It examines the key players in this domain, ranging from healthcare providers to AI technology firms, and discusses the challenges they face, including legal, ethical, and health-related issues. The primary objective is to develop a classification framework for these applications.

  • Ji, Shaoxiong, et al. "Mentalbert: Publicly available pretrained language models for mental healthcare." arXiv preprint arXiv:2110.15621 (2021).
  • Ma, Zilin, Yiyang Mei, and Zhaoyuan Su. "Understanding the benefits and challenges of using large language model-based conversational agents for mental well-being support." arXiv preprint arXiv:2307.15810 (2023).
 

Large Language Models for Grading Sustainable Forest Investment Projects

This seminar thesis explores the integration of Large Language Models (LLMs) into the evaluation processes for sustainable forest investment projects. Developed in collaboration with the startup Xilva, the study focuses on how these AI tools use their vast data-processing capabilities to assess the ecological sustainability, economic viability, and social responsibility of investment projects. It examines the evaluation criteria applied by LLMs, assessing their precision and reliability in predicting project outcomes. The topic offers the chance to develop a real-life LLM application, which will be evaluated by members of the Xilva team.

  • Bronzini, Marco, et al. "Glitter or Gold? Deriving Structured Insights from Sustainability Reports via Large Language Models." arXiv preprint arXiv:2310.05628 (2023).
  • LangChain documentation: https://python.langchain.com/docs/get_started/introduction (tool used for building the prototype).

Legal and Societal Topics

Legal Considerations on Generative AI 

This topic addresses the legal aspects surrounding GenAI, focusing on intellectual property rights, data privacy, liability, and regulatory compliance. It explores how existing laws apply to AI-generated content and the challenges in attributing authorship and responsibility. The discussion also covers the implications of GenAI on privacy, particularly in relation to data used for training these models, and the legal frameworks evolving to manage these issues. The goal is to understand the complex legal landscape that GenAI operates within and the ongoing efforts to adapt legal systems to the nuances of AI technology.  

  • Ioannidis, Jules, et al. "Gracenote.ai: Legal Generative AI for Regulatory Compliance." (2023).
  • Sun, Zhongxiang. "A short survey of viewing large language models in legal aspect." arXiv preprint arXiv:2303.09136 (2023).
  • Bommasani, Rishi, et al. "On the opportunities and risks of foundation models." arXiv preprint arXiv:2108.07258 (2021).

 

Data Protection in the Age of Large Language Models 

This topic delves into the challenges and strategies of data protection in the era of LLMs. It examines how the massive data requirements of LLMs intersect with privacy concerns, focusing on issues like data consent, anonymization, and the potential risks of data breaches. The discussion also explores the evolving regulatory landscape, including GDPR and other global data protection frameworks, and how they apply to LLMs. The aim is to understand how to balance the data-driven nature of LLMs with the imperative to protect individual privacy and comply with stringent data protection laws.

  • Carlini, Nicholas, et al. "Extracting training data from large language models." 30th USENIX Security Symposium (USENIX Security 21). 2021.
  • Bender, Emily M., et al. "On the dangers of stochastic parrots: Can language models be too big?🦜." Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. 2021.
  • Bommasani, Rishi, et al. "On the opportunities and risks of foundation models." arXiv preprint arXiv:2108.07258 (2021).

 

Ethical Considerations in Generative AI

This topic explores the ethical dimensions of GenAI, addressing concerns such as the potential for bias in AI-generated content, the implications of deepfakes, and the moral responsibility of AI developers and users. It also examines the impact of GenAI on creative industries, questioning the originality and ownership of AI-generated works.

  • Bender, Emily M., et al. "On the dangers of stochastic parrots: Can language models be too big?🦜." Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. 2021.
  • Bommasani, Rishi, et al. "On the opportunities and risks of foundation models." arXiv preprint arXiv:2108.07258 (2021).

 

Misinformation through Large Language Models: Challenges and Opportunities

Due to their human-like conversational abilities, LLMs can be used to spread misinformation at scale. How can society overcome this threat? Could the same models also be used to detect and disentangle disinformation?

  • Whitehouse, Chenxi, et al. "Evaluation of fake news detection with knowledge-enhanced language models." Proceedings of the International AAAI Conference on Web and Social Media. Vol. 16. 2022.
  • Pan, Yikang, et al. "On the Risk of Misinformation Pollution with Large Language Models." arXiv preprint arXiv:2305.13661 (2023).

 

Generative AI and the Future of Work 

As generative AI technologies continue to advance, permeating various industries and sectors, a fundamental shift in the way we work is underway. What effects do they have on the way we work? From automated content creation to personalized assistance, we will examine the multifaceted implications of integrating generative AI into our professional lives.

  • Chui, Michael, et al. "The economic potential of generative AI." (2023).
  • Osborne, M. "Generative AI and the future of work: a reappraisal." Brown Journal of World Affairs (2023).
     

 
