Speaker: Prof. em. Ben Shneiderman, Ph.D.
Host: Prof. Dr. Jürgen Bernard
Human-Centered AI (HCAI) represents a second Copernican revolution. In the past, researchers and developers focused on building AI algorithms and systems, stressing the autonomy, correctness, and efficiency of machines rather than human control through user interfaces. In contrast, HCAI puts human users at the center of design thinking, emphasizing user experience design. Researchers and developers of HCAI systems stress measuring human performance and satisfaction, valuing customer and consumer needs, and ensuring meaningful human control. Achieving HCAI will increase human performance, while supporting human self-efficacy, creativity, and responsibility, and respecting human values, rights, and dignity.
The next step is to bridge the gap between widely discussed ethical principles of Human-Centered AI (HCAI) and practical steps for effective governance. I propose 15 recommendations at three levels of governance: team, organization, and industry. The recommendations are intended to increase the reliability, safety, and trustworthiness of HCAI systems: (1) reliable systems based on sound software engineering practices, (2) safety culture through business management strategies, and (3) trustworthy certification by independent oversight.
Prof. Ben Shneiderman, Ph.D., is an Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory, and a Member of the UM Institute for Advanced Computer Studies (UMIACS) at the University of Maryland. He is a Fellow of the AAAS, ACM, IEEE, and NAI, and a Member of the National Academy of Engineering, in recognition of his pioneering contributions to human-computer interaction and information visualization. His widely used contributions include clickable highlighted web links, high-precision touchscreen keyboards for mobile devices, and tagging for photos. Shneiderman’s information visualization innovations include dynamic query sliders for Spotfire, development of treemaps for viewing hierarchical data, novel network visualizations for NodeXL, and event sequence analysis for electronic health records.
Ben is the lead author of Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed., 2016). He co-authored Readings in Information Visualization: Using Vision to Think (1999) and Analyzing Social Media Networks with NodeXL (2nd edition, 2019). His book Leonardo’s Laptop (MIT Press) won the IEEE book award for Distinguished Literary Contribution. The New ABCs of Research: Achieving Breakthrough Collaborations (Oxford, 2016) describes how research can produce higher impacts. He is working on a book on Human-Centered AI: A Second Copernican Revolution.
Speaker: Sebastian Ruder, Ph.D.
Host: Prof. Dr. Rico Sennrich
Research in natural language processing (NLP) has seen striking advances in recent years, but most of this success has focused on English. In this talk, I will give an overview of approaches that transfer knowledge across languages and enable us to scale NLP models to more of the world's 7,000 languages. I will cover open challenges in this area, such as evaluation in the face of limited labelled data, generalizing to low-resource languages and different scripts, and dealing with erroneous segmentations, and discuss approaches that help mitigate them.
Sebastian Ruder, Ph.D., is a research scientist in the Language team at DeepMind, London, U.K. He completed his Ph.D. in Natural Language Processing and Deep Learning at the Insight Research Centre for Data Analytics, while working as a research scientist at Dublin-based text analytics startup AYLIEN. Previously, he studied Computational Linguistics at the University of Heidelberg, Germany and at Trinity College, Dublin. He is interested in transfer learning for NLP and making ML and NLP more accessible.
Speaker: Prof. Anita Woolley, Ph.D.
Host: Prof. Dr. Abraham Bernstein
In our prior work, we demonstrated that a group's ability to work together across a range of tasks could be characterized by a general collective intelligence factor. Since the time of the initial publication of our findings, we have been accumulating data from over 1,300 teams including over 5,000 individuals working on a battery of tasks on our online platform to better quantify the processes that drive collective intelligence. In a related line of research, we have experimented with deploying some low-level bots or "nudges" to help shape team process in the direction of higher collective intelligence. The findings of both of these lines of research will be discussed along with the implications for designing technologies to enhance collective intelligence.
Prof. Anita Woolley, Ph.D., is an Associate Professor of Organizational Behavior and Theory at the Tepper School of Business, Carnegie Mellon University, U.S.A. She has a PhD in Organizational Behavior from Harvard University, where she also earned Bachelor’s and Master’s degrees.
Her research and teaching interests include collaborative analysis and problem-solving in teams; online collaboration and collective intelligence; and managing multiple team memberships. Her research has been published in Science, Organization Science, Academy of Management Review, Journal of Organizational Behavior, Small Group Research, and Research on Managing Groups and Teams, among others. Her research has been funded by grants from the National Science Foundation, the U.S. Army Research Office, and private corporations.
Anita is a Senior Editor at Organization Science and is a member of the Academy of Management, the Interdisciplinary Network for Group Research, and the Association for Psychological Science.
Speaker: Prof. Dr. Jan Mendling
Host: Prof. Gerhard Schwabe
The design and evaluation of algorithms have been a major concern of computer science since its founding days and are still a matter of discussion. In this talk, I will present joint research with Henrik Leopold and Benoit Depaire on a single ontological, epistemological, and methodological framework describing algorithm engineering. Such a framework is important for better understanding how research on algorithms can be evaluated.
Prof. Dr. Jan Mendling is a Full Professor with the Institute for Information Business at Wirtschaftsuniversität Wien (WU Vienna), Austria. His research areas include Business Process Management, Conceptual Modelling, and Enterprise Systems. He has published more than 450 research papers and articles, among others in ACM Transactions on Software Engineering and Methodology, IEEE Transactions on Software Engineering, Information Systems, Data & Knowledge Engineering, and Decision Support Systems. He is a member of the editorial boards of three international journals. His Ph.D. thesis won the Heinz-Zemanek-Award of the Austrian Computer Society and the German Targion-Award for dissertations in the area of strategic information management. He is one of the founders of the Berlin BPM Community of Practice (http://www.bpmb.de) and Board Member of the Austrian BPM Society, organizer of several academic events on process management, and member of the IEEE Task Force on Process Mining. He was program co-chair of the International Conference on Business Process Management 2010 and co-organizer of this conference in 2019.
Speaker: André Anjos, Ph.D.
Host: Prof. Dr. Manuel Günther
One of the key aspects of modern technological research lies in the use of personal computers (PCs), either for the simulation of known phenomena or for the evaluation of data collected from natural observations. Mashups of these data, organized in tables and figures, are attached to textual descriptions, leading to scientific publications. In current practice, the data sets, code, and actionable software leading to those results are excluded when articles are recorded and preserved.
This panorama slows down potential scientific development in at least two major respects: (1) re-using ideas from different sources normally implies the re-development of the software leading to the original results, and (2) the reviewing process of candidate ideas is based on trust rather than on hard, verifiable evidence that can be thoroughly analyzed. In this talk, I'll discuss Reproducible Research (RR) for scientists and engineers working with software applications in Pattern Recognition (PR) and Machine Learning (ML). I'll motivate and explain the concepts behind RR, an increasing trend in scientific publications in this niche, its implications, and tools for implementing it on an individual or group level.
André Anjos, Ph.D., received his Ph.D. degree in signal processing from the Federal University of Rio de Janeiro, Brazil, in 2006. He worked on the ATLAS Experiment at the European Centre for Particle Physics (CERN, Switzerland) from 2001 until 2010, where he contributed to the development and deployment of the Trigger and Data Acquisition systems that underpinned the discovery of the Higgs boson. During his time at CERN, André studied the application of neural networks and statistical methods for particle recognition at the trigger level and developed several software components still in use today. In 2010, André joined the Biometrics Security and Privacy Group at the Idiap Research Institute, Switzerland, where he worked on face and vein biometrics, presentation attack detection, and reproducibility in research. Since 2018, André has headed the Biosignal Processing Group at Idiap.
His current research interests include medical applications, biometrics, image and signal processing, machine learning, research reproducibility, and open science. Among André's open-source contributions, one can cite Bob and the BEAT framework for evaluation and testing of machine learning systems. He teaches graduate-level machine learning courses at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, and master's-level courses in Idiap's Master of AI. He serves as a reviewer for various scientific journals in pattern recognition, machine learning, and image processing.
Speaker: Prof. Dr. Thorsten Strufe
Host: Prof. Dr. Burkhard Stiller
Efficient integrity verification of received data requires Message Authentication Code (MAC) tags. However, while security calls for rather long tags, in many scenarios this contradicts other requirements. Examples are strict delay requirements (e.g., robot or drone control) or resource-scarce settings.
Prior techniques suggested truncation of MAC tags, thus trading off a linear performance gain for an exponential security loss. To achieve the security of full-length MACs with short(er) tags, we introduce Progressive MACs (ProMACs), a scheme that uses internal state to gradually increase security upon reception of subsequent messages. We provide a formal framework and propose a provably secure, generic construction called Whips. We evaluate the applicability of ProMACs in several realistic scenarios and demonstrate example settings where ProMACs can be used as a drop-in replacement for traditional MACs.
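The core idea of chaining an internal state, so that each short tag implicitly authenticates the entire preceding message stream, can be sketched in a few lines. The toy below is a hypothetical illustration built from HMAC-SHA256; it is not the Whips construction from the talk, and the function names, state initialization, and 4-byte tag length are all assumptions made for the example.

```python
import hmac
import hashlib

STATE_LEN = 32  # full SHA-256 digest carried forward internally

def promac_step(key, state, message, tag_len=4):
    """One toy progressive-MAC step (illustration only, not Whips).

    The full-length MAC covers the chained internal state plus the current
    message. Only `tag_len` bytes are transmitted, but the full digest is
    carried forward as the new state, so verifying a run of messages
    accumulates security well beyond that of any single short tag."""
    full = hmac.new(key, state + message, hashlib.sha256).digest()
    return full[:tag_len], full  # (short tag to send, next internal state)

def mac_stream(key, messages, tag_len=4):
    """Tag a sequence of messages, threading the internal state through."""
    state, tags = b"\x00" * STATE_LEN, []
    for m in messages:
        tag, state = promac_step(key, state, m, tag_len)
        tags.append(tag)
    return tags

def verify_stream(key, messages, tags, tag_len=4):
    """Recompute the chain; any earlier tampering desynchronizes the state,
    so later tags fail even if the tampered message's own short tag collided."""
    state = b"\x00" * STATE_LEN
    for m, t in zip(messages, tags):
        expect, state = promac_step(key, state, m, tag_len)
        if not hmac.compare_digest(expect, t):
            return False
    return True
```

In a drop-in use, a drone-control link would send each command with its 4-byte tag, keeping per-packet overhead low while the receiver's accumulated state provides full-MAC-level assurance over the stream.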
Prof. Dr. Thorsten Strufe is Professor for IT Security at the Karlsruhe Institute of Technology (KIT) and adjunct professor for Privacy and Network Security at TU Dresden, in the Excellence Centre for Tactile Internet with Human-in-the-Loop (CeTI). He received his Ph.D. from the University of Ilmenau in 2007; his dissertation was awarded the FFK prize for the best theory-oriented dissertation. His research interests lie in the areas of privacy and resilience, especially in the context of social networking services and large-scale distributed systems. Recently, he has focused on studying the privacy implications of user behavior and possibilities to provide privacy-preserving and secure networked services. Previous posts include faculty positions at TU Dresden, TU Darmstadt, and Uni Mannheim, as well as postdoc/researcher positions at EURECOM and TU Ilmenau.
Speaker: Sébastien Marcel, Ph.D.
Host: Prof. Dr. Manuel Günther
In biometrics, Presentation Attacks (PA, also referred to as spoofing) are performed by falsifying a biometric trait and then presenting this falsified information to the biometric system. One example is to fool a fingerprint system by copying the fingerprint of another person and creating an artificial or gummy finger, which can then be presented to the biometric system to falsely gain access. This issue needs to be addressed, because it has recently been shown that conventional biometric techniques are vulnerable to presentation attacks. One of the main challenges in Presentation Attack Detection (PAD, also referred to as anti-spoofing) is to find a set of features and models (mostly classifiers) that allows systems to effectively distinguish signals that were directly emitted by a human from those reproduced by an attacker. This talk will mostly present face Presentation Attacks, including Morphing Attacks and DeepFakes, and discuss PAD strategies.
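For context on how such PAD classifiers are typically scored, the standard ISO/IEC 30107-3 error rates can be computed directly from the classifier's liveness scores. The scores and threshold below are invented for illustration; a real PAD pipeline would produce them from the features and models the talk discusses.

```python
def pad_metrics(bona_fide_scores, attack_scores, threshold):
    """Compute ISO/IEC 30107-3 style PAD error rates from liveness scores.

    Scores above `threshold` are accepted as bona fide (live) presentations.
    APCER: proportion of attack presentations wrongly accepted.
    BPCER: proportion of bona fide presentations wrongly rejected."""
    apcer = sum(s > threshold for s in attack_scores) / len(attack_scores)
    bpcer = sum(s <= threshold for s in bona_fide_scores) / len(bona_fide_scores)
    return apcer, bpcer

# Hypothetical liveness scores from some upstream feature extractor.
bona_fide = [0.9, 0.8, 0.6, 0.4]
attacks = [0.1, 0.3, 0.7]
apcer, bpcer = pad_metrics(bona_fide, attacks, threshold=0.5)
```

Raising the threshold lowers APCER (fewer attacks slip through) at the cost of a higher BPCER, i.e. more genuine users rejected, which is the security/convenience trade-off that PAD strategies must navigate.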
Sébastien Marcel, Ph.D., received the Ph.D. degree in signal processing from Université de Rennes I in France (2000) at CNET, the research center of France Telecom (now Orange Labs). Today he heads the Biometrics Security and Privacy (BSP) group at the Idiap Research Institute, Switzerland, and conducts research on face recognition, speaker recognition, vein recognition, and presentation attack detection. He is a lecturer at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, and the University of Lausanne, Switzerland. Sébastien is Associate Editor of IEEE Transactions on Biometrics, Behavior, and Identity Science (TBIOM). He was the coordinator of European research projects including MOBIO, TABULA RASA, and BEAT, and was involved in international projects (DARPA, IARPA). He is also the Director of the Swiss Center for Biometrics Research and Testing, conducting FIDO certifications and cooperative research.