Speaker: Adina Williams, Ph.D.
Host: Prof. Dr. Martin Volk
One possible explanation for the impressive performance of masked language models (MLMs) is that they can learn to represent the syntactic structures prevalent in classical NLP pipelines. Were this correct, we would expect that fine-tuning such models on tasks requiring syntactic structure would lead them to be sensitive to word order at inference time. To address this question, we permute example word order at several steps in the pipeline—during fine-tuning, evaluation, and/or pre-training—and measure the results. We find that permuting word order during fine-tuning has remarkably little effect on downstream performance for several purportedly syntax-sensitive NLU tasks (including NLI). Next, we pre-train MLMs on examples with randomly shuffled word order, and find that these models still achieve high accuracy (even after unpermuted fine-tuning) on many downstream tasks—including tasks specifically designed to be challenging for models that ignore word order. Our results show that the success of MLM pre-training is largely due to distributional information, not any knowledge of word order per se, and underscore the importance of curating challenging evaluation datasets that require deeper syntactic knowledge.
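The permutation step described above can be sketched in a few lines. This is a minimal illustration, not the authors' actual code: the function name and the NLI example are hypothetical, and it simply shuffles the tokens of an example uniformly at random, destroying word order while preserving the bag of words.

```python
import random

def permute_word_order(sentence, seed=None):
    """Shuffle a sentence's tokens uniformly at random,
    destroying word order but preserving the bag of words."""
    rng = random.Random(seed)
    tokens = sentence.split()
    rng.shuffle(tokens)
    return " ".join(tokens)

# A hypothetical NLI example: the label is kept unchanged after
# permutation, so a model that relies only on distributional
# (bag-of-words) information can still learn from the shuffled data.
premise = "the cat chased the mouse"
shuffled = permute_word_order(premise, seed=0)
print(shuffled)
```

Applying such a function to every training example during fine-tuning (or pre-training) yields the permuted conditions the abstract compares against the unpermuted baseline.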
Adina Williams, Ph.D., is a Research Scientist at Facebook AI Research in New York City. Her recent projects focus on trying to understand the behavior of deep neural networks from an interdisciplinary perspective that draws from linguistics, cognitive science, and natural language processing. Her current projects focus on combinations of natural language inference, evaluating model performance and explaining model behaviors, dynamic adversarial dataset collection, and information theoretic approaches to computational morphology. Previously, she earned her PhD at New York University in the Department of Linguistics, where she investigated the brain basis of syntactic and semantic processing.
Speaker: Joe Peppard, Ph.D.
Host: Prof. Dr. Gerhard Schwabe
The “IT department” is one of the more enduring concepts of the last 60 years and has been in the vocabulary of management ever since computers first entered organizations. While the label attached to this organizational unit has regularly changed, it seems to be widely accepted that organizations need some sort of organizational unit dedicated to managing IT. Indeed, in today’s digital economy, what company wouldn’t wish to have an IT unit? But does having an IT unit and the organizing model it promotes actually hold a company back in achieving its digital transformation ambitions? In this presentation, I will share my latest research where I challenge the fundamental assumptions and practices that frame our thinking about how companies design, position, structure and govern their IT units. A key finding is that the majority of organizations are designed from a brief that seeks to manage IT rather than realize value from IT use. That is, as digital becomes part of the fabric of an enterprise, the challenge for leadership teams is not to build a more digitally savvy IT unit, but to establish the basis for coordinating and integrating the knowledge required for success with technology. This is particularly difficult as this knowhow is dispersed across the organization and increasingly into customers and ecosystem partners. It requires a fresh approach and new organizing models. Drawing on in-depth case studies, I illustrate how some leading companies are implementing new organizing models and highlight what these mean for technology leadership in organizations.
Joe Peppard is Principal Research Scientist at the MIT Sloan School of Management, U.S.A. He researches, teaches, and consults in the domains of IT leadership, digital strategy, and innovation; the execution of digital transformation programs; the creation of value from IT investments; and the role, structure, and capabilities of the IT unit in contemporary organizations.
In an environment where hype is all too common, he helps business and IT leaders navigate an appropriate route through what is an increasingly complex landscape. His research studies contemporary aspects and challenges that managers face in a world of accelerating technological change. Joe recognizes that managers want frameworks and models to help them understand their own predicaments, insights to figure out options and consequences, and clear actionable advice and guidance.
Speaker: Prof. Dr. Christoph Lofi
Host: Prof. Dr. Michael Böhlen and Prof. Dr. Alberto Bacchelli
With the growing success of AI systems and other data-driven applications, obtaining relevant and reliable data for these systems has become a central bottleneck. However, while wrangling and shaping data for such systems usually takes the majority of system development effort, the resulting Data Engineering Pipelines are rarely in the spotlight. Moreover, modern Data Engineering Pipelines increasingly need to become more powerful and must also tackle complex semantic challenges that go beyond the technical problem of connecting different systems with different formats. In this talk, we highlight some of these semantic challenges in data engineering and their relevance, and discuss some approaches and solutions to them.
Prof. Dr. Christoph Lofi has been an Assistant Professor at the Web Information Systems group of the Faculty of Engineering, Mathematics and Computer Science (EEMCS/EWI), Delft University of Technology, since 2016. His research is on Semantic Data Engineering methods and techniques. The general problem is that the source data available to a data-driven system is often not fit for purpose, because the data is scattered across different sources, is of low quality, or because important semantic information is only implicitly available. This is a central challenge in the emerging data-driven economy and a major detrimental factor in many AI- or Data Science-driven systems. Semantic Data processing pipelines are needed to overcome these issues, producing the required target data from the available source data.
Christoph received his PhD from TU Braunschweig, Germany, in 2011 in the field of Data Management and Query processing, was a Research Fellow at National Institute of Informatics Tokyo, Japan, from 2012-2014, and spent several years as a research fellow at L3S Research Centre in Hannover, Germany.
Speaker: Prof. James Won-Ki Hong, Ph.D.
Host: Prof. Burkhard Stiller
Network softwarization paradigms like SDN and NFV provide network operators with advantages in terms of scalability, cost-saving, resource efficiency, and flexibility. However, fully reaping these benefits and coping with new challenges—the heterogeneity of user demands and the complexity of managing such networks—requires a high degree of automation that ensures fast and proactive decision making. With the recent success of Artificial Intelligence (AI) technology across numerous domains, a shift from traditional rule-based policies towards AI-based approaches in the context of network management is taking place. In this talk, we introduce how AI technology is being applied in network management, as well as AI approaches for a fully integrated architecture to achieve network intelligence. I will also share my experience working as CTO of KT (Korea Telecom), the largest wired/wireless telecom service provider in Korea.
Prof. James Won-Ki Hong, Ph.D., is Director of the Innovation Center for Education, Professor in the Dept. of Computer Science and Engineering, Director of the Center for Crypto Blockchain Research, and Director of the Distributed Processing and Network Management Lab at POSTECH, Pohang, Korea. James worked as CTO and Senior Executive Vice President for KT from March 2012 to February 2014, where he was responsible for leading the R&D effort of KT and its 50 subsidiary companies, and where he initiated R&D on SDN (Software-Defined Networking). He was Chairman of the National Intelligence Communication Enterprise Association and Chairman of the Telecommunications Technology Association (TTA) Standardization Board in Korea. He was a co-founder and Executive Director of the SDN/NFV Forum in Korea from October 2014 to May 2019. He is a co-founder and chief technical advisor of Kedutech, a Korean startup developing and providing Vmeeting, a video conferencing solution. He was co-founder and CTO of Netstech, a Palo Alto, USA-based startup developing network-integrated ultra-dense blade servers from 2000 to 2002. His research interests include network intelligence, SDN and NFV, blockchain & cryptocurrency, cloud computing, and online education.
Over the past 30 years, James has been an active volunteer in various committees in IEEE, ComSoc, and KICS. He has served as Steering Committee Chair of IEEE ICBC, NOMS, IM, NetSoft, APNOMS and CNSM as well as Chair of CNOM and KNOM. James received his HBSc and MSc degrees in Computer Science from the University of Western Ontario, Canada in 1983 and 1985, respectively, and the PhD degree in Computer Science from the University of Waterloo, Canada in 1991.
Speaker: Dr. Donald Kossmann
Machine Learning and AI have been mostly used to "program the impossible." That is, these technologies are most successful at addressing computational problems, such as Computer Vision or Natural Language Processing, for which traditional programming techniques are not appropriate. This talk argues that AI technologies can also be attractive for more traditional software engineering tasks such as sorting or mundane business applications. Several recent systems, hardware, and security trends support this prediction.
Dr. Donald Kossmann is a Distinguished Engineer and General Manager of Fraud Protection at Microsoft. Before that, he was the director of the Microsoft Research Lab in Redmond. He spent most of his career in academia. After his Ph.D. at the University of Karlsruhe, Germany, he became a professor in the Systems Group of the Department of Computer Science at ETH Zurich, Switzerland, for 13 years, doing research and teaching all flavors of data management systems. He is an ACM Fellow. He was chair of ACM SIGMOD from 2013 to 2017 and served on the Board of Trustees of the VLDB Endowment from 2005 to 2011. He is a co-founder of four companies: i-TV-T AG (1998), XQRL Inc. (2002), 28msec Inc. (2006), and Teralytics AG (2010).
Speaker: Prof. Walter J. Scheirer, Ph.D.
Host: Prof. Dr. Manuel Günther
The automatic analysis of face images can generate predictions about a person’s gender, age, race, facial expression, body mass index, and various other indices and conditions. A few recent publications have claimed success in analyzing an image of a person’s face in order to predict the person’s status as Criminal / Non-Criminal. Predicting “criminality from face” may initially seem similar to other facial analytics, but this talk argues that attempts to create a criminality-from-face algorithm are necessarily doomed to fail, that apparently promising experimental results in recent publications are an illusion resulting from inadequate experimental design, and that there is potentially a large social cost to belief in the criminality-from-face illusion.
Prof. Walter J. Scheirer, Ph.D., is an Associate Professor in the Department of Computer Science and Engineering at the University of Notre Dame, U.S.A. Previously, he was a postdoctoral fellow at Harvard University, with affiliations in the School of Engineering and Applied Sciences, the Dept. of Molecular and Cellular Biology, and the Center for Brain Science, and the director of research & development at Securics, Inc., an early-stage company producing innovative computer vision-based solutions. He received his Ph.D. from the University of Colorado and his M.S. and B.A. degrees from Lehigh University. Dr. Scheirer has extensive experience in the areas of computer vision, machine learning, and image processing. His overarching research interest is the fundamental problem of recognition, including the representations and algorithms supporting solutions to it.