Speaker: Prof. Dr. Bertrand Meyer
Host: Prof. Dr. Harald C. Gall
Modern programs, particularly object-oriented programs, produce large and complex object structures at run time. We lack a good theory of these structures that would make it possible to answer important questions such as aliasing (can two reference expressions ever point to the same object?) correctly and efficiently. I will present "object diagrams", a comprehensive theoretical basis for discussing object structures, relying in part on concepts from abstract interpretation. The theory opens the way to sound and efficient alias analysis parameterized by an arbitrary level of precision. The talk will present a recent implementation of this analysis, and its application to such practical problems as frame analysis (inferring what operations can change) and deadlock analysis in concurrent programming. It will compare the results with those of other approaches to heap analysis (such as separation logic) and conventional techniques of alias analysis. Part of the results come from joint work with Sergey Velder (on the theory side) and Victor Rivera (for the implementation of alias analysis).
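The aliasing question the abstract mentions can be illustrated with a minimal sketch (the class and variable names here are purely illustrative, not from the talk): two reference expressions alias when they denote the same run-time object, so a mutation through one is visible through the other.

```python
class Node:
    """A toy linked-structure node."""
    def __init__(self, value):
        self.value = value
        self.next = None

a = Node(1)
b = a            # b and a alias: both expressions denote the same object
b.value = 42
assert a.value == 42   # the update through b is visible through a

c = Node(42)     # structurally equal to a, but a distinct object
assert a is not c      # so a and c do not alias
```

Static alias analysis must answer this question without running the program, which is why the precision/efficiency trade-off mentioned in the abstract matters.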
Prof. Dr. Bertrand Meyer is a professor of software engineering at the Politecnico di Milano and Innopolis University. He was previously at ETH Zurich and has been lecturing at the University of Zurich. He is the author of several books on software engineering topics. Bertrand studied at the École Polytechnique, Paris, and holds an MSc degree in Computer Science from Stanford University, USA. Besides an MA in Russian from the Sorbonne (Paris IV), he received a Dr. Sc. from the University of Nancy, France.
Host: Prof. Dr. Burkhard Stiller
Network Functions Virtualization (NFV) is a networking paradigm that decouples network functions from proprietary hardware by using virtualization technology. Although originally conceived to reduce OPEX and CAPEX (operational and capital expenditure), NFV is today much more about flexibility and short innovation cycles in computer networks. NFV is said to be the enabler of a more open computer network market because it allows small players to compete when delivering innovative network functions running on top of off-the-shelf hardware. As with most new technologies, NFV enables business opportunities that were not possible before its inception, but it also introduces new problems that were not present before. In this talk, we will observe NFV from two different yet complementary perspectives: NFV as a network management tool, and NFV as the target of the network management process. We will review which management activities are performed better, more easily, or faster in networks that support NFV, and we will also observe which management challenges NFV imposes on traditional management solutions and networks. The state of the art of the market and research in the area will be reviewed, and a list of today's further required research and development work will be discussed as well.
Prof. Lisandro Zambenedetti Granville, Ph.D., is an Associate Professor at the Institute of Informatics of the Federal University of Rio Grande do Sul (UFRGS), Brazil, from which he holds a Ph.D. (2001) and an M.Sc. (1998) degree in Computer Science. He is president of the Brazilian Computer Society (SBC) and co-chair of the IRTF's Network Management Research Group (NMRG). Lisandro has served as chair of IEEE ComSoc's Committee on Network Operations and Management (CNOM) and was a member of the Brazilian Internet Committee (CGI.br). He has also served as a TPC member for many important events in the area of computer networks (e.g., IM, NOMS, and CNSM) and was TPC co-chair of DSOM 2007, NOMS 2010, ICC 2018, and NetSoft 2018.
Speaker: Wendy E. MacKay, Ph.D.
Host: Prof. Dr. Chat Wacharamanotham
The classic approach to Artificial Intelligence treats the human being as a cog in the computer's wheel – the so-called “human-in-the-loop”. By contrast, the classic approach to Human-Computer Interaction seeks to create a ‘user experience’ with the computer, what we might call the “computer-in-the-loop”. We seek a third approach, a true human-computer partnership that takes advantage of machine learning, but leaves the user in control. I describe how we can create interactive systems that are discoverable, appropriable and expressive, drawing from the principles of instrumental interaction and reciprocal co-adaptation. Our goal is to create robust interactive systems that grow with the user, with a focus on augmenting human capabilities.
Wendy Mackay is a Research Director, Classe Exceptionnelle, at Inria, France, where she heads the ExSitu (Extreme Situated Interaction) research group in Human-Computer Interaction at the Université Paris-Saclay. After receiving her Ph.D. from MIT, she managed research groups at Digital Equipment and Xerox EuroPARC, which were among the first to explore interactive video and tangible computing. She has been a visiting professor at the University of Aarhus and Stanford University and recently served as Vice President for Research at the University of Paris-Sud. Wendy is a member of the ACM CHI Academy, a past chair of ACM/SIGCHI, chaired CHI'13, and is the recipient of the ACM/SIGCHI Lifetime Achievement Service Award and a Doctor Honoris Causa from Aarhus University. She received an ERC Advanced Grant for her research on co-adaptive instruments. She has published nearly 200 peer-reviewed research articles in the area of Human-Computer Interaction. Her current research interests include human-computer partnerships and co-adaptive instruments, mixed reality and interactive paper, as well as participatory design and generative research and design methods.
Speaker: Prof. Dr. Bernd Möbius
Host: Prof. Dr. Volker Dellwo
Language offers speakers a multitude of choices of how to encode their messages. A growing body of research indicates that measures derived from information theory correlate with aspects of human language and speech processing. Specifically, the ease of processing a linguistic expression is correlated with the predictability of this expression in context. In this talk I will report on our recent analyses of the impact of information density on acoustic-phonetic features of spoken language in six languages: American English, Czech, Finnish, French, German, and Polish. Information density is estimated from phone-level n-gram language models trained on large corpora. Our findings suggest that the prosodic structure, i.e. phrasing and accenting, mediates between requirements of efficient communication and the speech signal. However, this mediation is not perfect, as we found evidence for additional, direct effects of changes in information density on the phonetic structure of utterances. These effects appear to be stable across languages and different speech rates. I will also summarize work in our lab on the inclusion of information-theoretic features in statistical parametric speech synthesis.
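The notion of estimating information density from phone-level n-gram models can be sketched as follows. This is a toy illustration under assumptions not taken from the talk (a tiny hand-made corpus, a bigram model with add-one smoothing): the surprisal of a phone in context, -log2 P(phone | context), is higher for less predictable phones.

```python
import math
from collections import Counter

# Toy corpus of phone sequences (hypothetical ARPAbet-style transcriptions)
corpus = [
    ["DH", "AH", "K", "AE", "T"],   # "the cat"
    ["DH", "AH", "D", "AO", "G"],   # "the dog"
    ["AH", "K", "AE", "T"],         # "a cat"
]

# Train a bigram model over phones
bigrams, unigrams = Counter(), Counter()
for seq in corpus:
    padded = ["<s>"] + seq          # sentence-start symbol as left context
    for prev, cur in zip(padded, padded[1:]):
        bigrams[(prev, cur)] += 1
        unigrams[prev] += 1

vocab = {p for seq in corpus for p in seq} | {"<s>"}

def surprisal(prev, cur):
    """Information content -log2 P(cur | prev) in bits, add-one smoothed."""
    p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + len(vocab))
    return -math.log2(p)

# "AH" always follows "DH" in the corpus, so it is predictable (low surprisal);
# "G" never follows "DH", so it would carry more information in that context.
assert surprisal("DH", "AH") < surprisal("DH", "G")
```

Real analyses of the kind described in the abstract would use much larger corpora and higher-order n-grams, but the quantity being measured is the same.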
This is joint work by Bistra Andreeva, Erika Brandt, Zofia Malisz, Bernd Möbius, Yoonmi Oh, Frank Zimmerer, Ingmar Steiner, and Sébastien Le Maguer.
Prof. Dr. Bernd Möbius is professor of Phonetics and Phonology at Saarland University. He holds a Dr. degree (Ph.D.) and M.A. from the University of Bonn (1992, 1985) and was a senior research scientist at Bell Labs (1993-1998) before joining the Institute of Natural Language Processing at the University of Stuttgart (1999-2007). He was acting chair of Phonetics and Speech Communication at the University of Bonn (2007-2010). A central theme of his research is to integrate phonetic knowledge in speech technology. He has worked extensively on prosody and text-to-speech synthesis. Recent work has focused on experimental methods and computational simulations to study aspects of speech production, perception and acquisition. He was a member of the Board of International Speech Communication Association (ISCA, 2007-2015) and was a founding member and chairman of ISCA's special interest group on speech synthesis (SynSIG, 2002-2005). He is currently serving as Editor-in-Chief of the Speech Communication journal.
Speaker: Prof. Eszter Hargittai, Ph.D.
Host: Prof. Dr. Abraham Bernstein
While digital media have certainly lowered the barriers to sharing one's perspectives and creative content with others, research on online engagement has found considerable differences by user background and Internet skills. Drawing on several survey data sets, this talk will discuss who is most likely to participate online, from joining social media platforms to editing Wikipedia entries. The talk will also offer insights on the potential biases that can stem from relying on certain types of data sets in big data studies.
Prof. Eszter Hargittai, PhD in Sociology from Princeton University, is Chair of Internet Use and Society at the University of Zurich. Her research looks at how people may benefit from their digital media uses with a particular focus on how differences in people's Web-use skills influence what they do online. Her work has received awards from several professional associations and has been funded by the US National Science Foundation, several private foundations (e.g., the MacArthur Foundation, the Alfred P. Sloan Foundation) and industry (e.g., Google, Merck, Facebook, Nokia). She is co-editor of Research Confidential: Solutions to Problems Most Social Scientists Pretend They Never Have and, with Christian Sandvig, of Digital Research Confidential: The Secrets of Studying Behavior Online from MIT Press. She has given invited talks in 15 countries on four continents. She tweets @eszter.
Speaker: Prof. Rakesh Vohra, Ph.D.
Host: Prof. Dr. Sven Seuken
The elegance and simplicity of Gale and Shapley's deferred acceptance algorithm (DA) has made it the algorithm of choice for determining stable matchings in a variety of settings. Each setting has imposed new demands on the algorithm. Among them are how to handle complementarities and distributional constraints. The simplicity of the DA algorithm makes it difficult to accommodate these new considerations except in special cases. In this talk I outline an alternative approach based on Scarf's lemma for tackling such problems. It is based on joint work with Thanh Nguyen.
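For readers unfamiliar with it, the deferred acceptance algorithm the abstract refers to can be sketched in a few lines. This is a minimal textbook version of Gale–Shapley, assuming complete preference lists and equal numbers on both sides (the input names are illustrative); it does not cover the complementarities and distributional constraints the talk addresses.

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance: free proposers propose in preference
    order; each receiver tentatively holds its best offer and rejects the rest.
    Assumes complete lists and equally many proposers and receivers."""
    # rank[r][p] = position of proposer p in receiver r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)                    # proposers not yet held
    next_choice = {p: 0 for p in proposer_prefs}   # next receiver to try
    held = {}                                      # receiver -> held proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        current = held.get(r)
        if current is None:
            held[r] = p                    # r tentatively accepts p
        elif rank[r][p] < rank[r][current]:
            held[r] = p                    # r prefers p; old offer is released
            free.append(current)
        else:
            free.append(p)                 # r rejects p
    return {p: r for r, p in held.items()}

# Tiny example with two proposers and two receivers
prefs_m = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
prefs_w = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
assert deferred_acceptance(prefs_m, prefs_w) == {"m2": "w1", "m1": "w2"}
```

The resulting matching is stable: no proposer and receiver both prefer each other to their assigned partners, which is exactly the property that becomes hard to guarantee once complementarities enter the picture.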
Prof. Rakesh Vohra, Ph.D., is an expert in mechanism design, an area of game theory that brings together economics, engineering and computer science. His economics research in mechanism design focuses on the best ways to allocate scarce resources when the information required to make the allocation is dispersed and privately held, an increasingly common condition in present-day environments.