Department of Informatics

Details Colloquium Fall 2006

26.10.2006 - Representing Web Services

Speaker: Bijan Parsia (University of Manchester)
Language: English

Abstract

The Web not only allows us to publish and retrieve information, but also to affect the world more directly. One can buy or sell just about anything, send faxes, submit papers, file one's taxes, renew licenses, take classes, or pick up a potential life partner on the Web. Of course, some world changes, such as booking a ticket, are just the publishing (or retrieval) of information, but of a very different sort than publishing a snarky, anonymous comment on a weblog. The Web was designed for, and is primarily used by, human beings, who are able to distinguish between the sorts of actions clicking a link will produce (though there are problems even there with carelessness, deception, and poor usability). They typically do this by reading the descriptions associated with the link.

Web Services are intended to make programmatic use of Web-based actions easier and more robust. One obvious move toward this goal is the use of specifically designed and documented XML-based languages for data instead of the limited and more presentation-oriented HTML. However, this only solves the "content" problem, not the problem of describing the service so that we can programmatically make the sorts of distinctions Web users make manually all the time.

In this talk, I shall discuss the challenges of representing Web Services adequately for a variety of tasks. In particular, I will discuss the two major approaches to achieving adequate descriptions: the "programming language" approach adopted by most of industry and the knowledge representation approach championed by the Semantic Web Services community.

Bio

Bijan Parsia is a lecturer in the Information Management Group at the University of Manchester, UK. He works on Description Logics, Automated Reasoning, Planning, the Semantic Web, Web Services, Explanation, Epistemology, and Oppression Theory.

9.11.2006 - Energy-efficient communication protocols for wireless sensor networks

Speaker: Torsten Braun (Universität Bern)
Language: English

Abstract

Energy efficiency is the main concern in wireless sensor networks, which are based on wireless multi-hop communication. The talk will discuss approaches to reducing the energy consumption of sensor nodes through the proper design of communication protocols at the transport, routing, and medium access control levels. This includes a scheme to reduce TCP segment transmissions in wireless sensor networks, an approach to reduce duty cycles in wireless multi-hop networks running the AODV routing protocol, as well as experiments with medium access control protocols on real sensor nodes.
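
Since sensor nodes spend most of their energy when the radio is switched on, even a rough estimate shows why reducing duty cycles pays off. The following Python sketch uses assumed, purely illustrative power figures (not numbers from the talk or from any particular platform) to compare average power draw at different duty cycles.

# Back-of-the-envelope estimate of the effect of radio duty cycling.
# The power figures below are assumed placeholder values.

P_RX_ACTIVE_MW = 60.0   # assumed power draw with the radio on (mW)
P_SLEEP_MW = 0.06       # assumed power draw with the radio off (mW)

def average_power(duty_cycle):
    """Average power (mW) for a node whose radio is on the given fraction of the time."""
    return duty_cycle * P_RX_ACTIVE_MW + (1.0 - duty_cycle) * P_SLEEP_MW

for dc in (1.0, 0.1, 0.01):
    print(f"duty cycle {dc:7.2%}: average power {average_power(dc):7.3f} mW")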

Bio

Torsten Braun got his diploma and Ph.D. degrees from the University of Karlsruhe, Germany, in 1990 and 1993, respectively. From 1994 to 1995 he was a guest scientist at INRIA Sophia Antipolis. From 1995 to 1997 he worked as a project leader and senior consultant at the IBM European Networking Center, Heidelberg, Germany. Since 1998 he has been a full professor of computer science at the Institute of Computer Science and Applied Mathematics of the University of Bern, heading the Computer Networks and Distributed Systems research group. He has been a board member of SWITCH (Swiss Education and Research Network) since 2000. During his sabbatical in 2004, he was a visiting scientist at INRIA Sophia Antipolis and the Swedish Institute of Computer Science in Kista.

16.11.2006 - Classification in Networked Data

Speaker: Foster Provost (New York University)
Language: English

Abstract

Traditional statistical and machine learning classification methods assume that the objects to be classified or scored are independent of each other. However, objects often are interconnected in complex networks. For example, commercial transactions link consumers into huge social networks. In this talk I start by discussing various applications of classification in networked data, from viral marketing to fraud detection to counter-terrorism. I then discuss two characteristics of classification in networked data that differentiate it from traditional classification and can improve classification tremendously: (i) the ability to use specific identifiers, such as the identities of particular individuals, to improve inference, and (ii) the opportunity to perform collective inference, using inferences on linked data to mutually reinforce each other. I present results demonstrating the effectiveness of these techniques.
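
To make the idea of collective inference concrete, here is a minimal Python sketch on a toy, invented graph: the labeled nodes are held fixed, and each unlabeled node's class estimate is repeatedly recomputed from its neighbors' current estimates. It illustrates the general relational-neighbor style of update only, not the specific methods presented in the talk.

# Toy, hypothetical network: node -> list of neighbors.
graph = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}
known = {"a": 1.0, "e": 0.0}           # P(positive) for the labeled nodes

# Start the unlabeled nodes from an uninformative prior of 0.5.
prob = {node: known.get(node, 0.5) for node in graph}

for _ in range(50):                     # iterate until the estimates settle
    for node, neighbors in graph.items():
        if node in known:
            continue                    # labeled nodes stay fixed
        prob[node] = sum(prob[nb] for nb in neighbors) / len(neighbors)

print({node: round(p, 3) for node, p in prob.items()})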

Bio

Foster Provost is Associate Professor and NEC Faculty Fellow at New York University's Stern School. He is Editor-in-Chief of the journal Machine Learning, a founding board member of the International Machine Learning Society, and was program chair of the ACM SIGKDD Conference in 2001. He has received Faculty Awards from IBM and a President's Award from NYNEX Science and Technology. His recent research has focused on classification in network data and utility-based data mining. He has been involved in various applications of machine learning technology to real-world problems, including fraud detection, network diagnosis, customer contact management, targeted marketing, and counterterrorism.

30.11.2006 - Middleware Support for the "Internet of Things"

Speaker: Manfred Hauswirth (National University of Ireland, Galway)
Language: English

Abstract

The "Internet of Things" intends to enhance items of our everyday life with information and/or processing and interconnect them, so that computers can sense, integrate, present, and react on all kinds of aspects of the physical world. As this implies that enormous numbers of data sources need to be connected and related to each other, flexible and dynamic middleware support essentially providing zero-programming deployment is a key requirement to cope with the sheer scale of this task and the heterogeneity of available technologies. In this paper we briefly overview our Global Sensor Networks (GSN) middleware which supports these tasks and offers a flexible, zero-programming deployment and integration infrastructure. GSN's central concept is the virtual sensor abstraction which enables the user to declaratively specify XML-based deployment descriptors in combination with the possibility to integrate sensor network data through plain SQL queries over local and remote sensor data sources. The GSN implementation is available from globalsn.sourceforge.net.

Bio

Manfred Hauswirth is vice director of the Digital Enterprise Research Institute (DERI), Galway, Ireland, and professor at the National University of Ireland, Galway (NUIG). He holds an M.S. (1994) and a Ph.D. (1999) in computer science from the Technical University of Vienna. Prior to his work at DERI/NUIG he was a research associate at the Distributed Information Systems Laboratory of the Swiss Federal Institute of Technology in Lausanne (EPFL). His research interests include large-scale semantics-enabled distributed information systems and applications, peer-to-peer systems, sensor network middleware, the Internet of Things, self-organization and self-management, Semantic Web services, and distributed systems security. He has published over 45 papers in these domains and co-authored a book on distributed software architectures (Springer) and several book chapters on P2P data management and semantics. He has served on over 100 program committees of international scientific conferences and recently was local chair of WDAS2004 (Workshop on Distributed Data and Structures) and program co-chair of SME05 (Semantics in Mobile Environments), STD3S (Security and Trust in Decentralized/Distributed Data Structures), MCISME (Managing Context Information and Semantics in Mobile Environments), and DMC2006 (Distributed and Mobile Collaboration). He is on the editorial board of the International Journal of Web Services Practices (IJWSP) and a member of IEEE and ACM.

14.12.2006 - The Role of Visual Modeling and Model Transformations in Business-driven Development

Speaker: Jana Koehler (IBM Zurich Research Laboratory)
Language: English

Abstract

The talk explores the emerging paradigm of business-driven development, which presupposes a methodology for developing IT solutions that directly satisfy business requirements and needs. At the core of business-driven development are business processes, which are usually modeled by combining graphical and textual notations. During the business-driven development process, business-process models are taken down to the IT level, where they describe the so-called choreography of services in a Service-Oriented Architecture. The derivation of a service choreography based on a business-process model is simple and straightforward for toy examples only; for realistic applications, many challenges at the methodological and technical level have to be solved. The talk explores these challenges and describes selected solutions that have been developed by the research team at the IBM Zurich Research Laboratory.

Bio

Jana Koehler is manager of the Business Integration Technologies group in the Services and Software Department of the IBM Zurich Research Lab. The group works on model-driven technologies for Business-IT integration based on Service-Oriented Architectures. Jana Koehler built up this new research area, which focuses on the intersection between services and software, after joining IBM in spring 2001. Prior to her work for IBM, she worked at the German Research Center for AI, the International Computer Science Institute at Berkeley, the University of Freiburg, and Schindler AG. Jana Koehler has won several scientific and best-paper awards and was nominated as full and associate professor in Computer Science.

11.1.2007 - Extracting Facts from Natural Language Text - or: How to fill a database?

Speaker: H. Schweppe (FU Berlin)
Language: English

Abstract

Most of the information stored in digital form is buried in natural language texts. As opposed to text understanding, a traditional subfield of artificial intelligence, fact extraction aims at finding those facts which conform to a predefined specification, given for example by a database schema. The notion of fact extraction, although slightly more specific, is often used interchangeably with information extraction. Text mining, however, is a more general technique aiming at finding unanticipated information.

We will present two kinds of algorithms, both of which are based on machine learning techniques employing supervised learning. The incremental statistical method utilizes only statistical features of the input. We have developed an incremental technique based on the Winnow classifier using joint features of the input text. The second approach is based on the inductive learning of patterns found in the input text. In contrast to inflexible and expensive hand-coded rules, the pattern-based learning algorithm generates specific rules from the training examples and abstracts them into more general ones. The rule language allows sophisticated language patterns, appropriate for natural language texts, to be expressed. However, both approaches expect a more uniform language than that found in fiction or everyday literature. Finally, we will evaluate the approaches based on specific characteristics of the texts and the attributes extracted. This methodology differs from evaluation techniques which compare algorithms on a standard set of texts such as the seminar announcement corpus. (Joint work with Christian Siefkes and Peter Siniakov.)
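
For readers unfamiliar with Winnow, the following Python sketch shows the classic mistake-driven, multiplicative update rule (Littlestone's Winnow) on binary feature vectors. The tiny training set is invented for illustration; the incremental system described in the talk is considerably richer.

def train_winnow(examples, n_features, alpha=2.0, epochs=10):
    """Mistake-driven Winnow training on 0/1 feature vectors."""
    w = [1.0] * n_features
    theta = float(n_features)            # standard threshold
    for _ in range(epochs):
        for x, y in examples:            # x: list of 0/1 features, y: 0/1 label
            y_hat = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
            if y_hat == y:
                continue                 # no mistake, no update
            factor = alpha if y == 1 else 1.0 / alpha
            # Promote (or demote) only the weights of the active features.
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w, theta

# Toy data: the label depends only on the first two features.
examples = [([1, 1, 0, 0], 1), ([1, 0, 1, 0], 0), ([0, 1, 0, 1], 0), ([1, 1, 1, 1], 1)]
weights, theta = train_winnow(examples, n_features=4)
print(weights, theta)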

Bio

Dr. Schweppe studied mathematics at the universities of Göttingen, Zürich, and Hannover. After his diploma he was a research assistant at the University of Bonn and the Technical University of Berlin, where he received his PhD with a thesis on a natural language interface for SQL databases. He was a lecturer at the Technical University of Braunschweig until 1983. He then joined Siemens Research and Development, where he headed the research department on Knowledge Engineering. In 1986 he was appointed professor at the Freie Universität Berlin, where he holds the chair for Databases and Information Systems. His research interests are data management in distributed environments, e.g. mobile ad hoc networks, extraction of facts from unstructured natural language text, information retrieval, XML data management, stream data processing, and web data management.

22.1.2007 - Embodied and Social Intelligence

Speaker: Rodney Brooks (MIT, CSAIL)
Language: English
Location: ETH Zurich, Main Building, HG F1

Bio

Rodney A. Brooks is Director of the MIT Computer Science and Artificial Intelligence Laboratory, and is the Panasonic Professor of Robotics. He is also co-founder and Chief Technical Officer of iRobot Corp (NASDAQ: IRBT). He received degrees in pure mathematics from the Flinders University of South Australia and a Ph.D. in Computer Science from Stanford University in 1981. He held research positions at Carnegie Mellon University and MIT, and a faculty position at Stanford, before joining the faculty of MIT in 1984. His research is concerned both with the engineering of intelligent robots to operate in unstructured environments and with understanding human intelligence through building humanoid robots. He has published papers and books in model-based computer vision, path planning, uncertainty analysis, robot assembly, active vision, autonomous robots, micro-robots, micro-actuators, planetary exploration, representation, artificial life, humanoid robots, and compiler design. Dr. Brooks is a Member of the National Academy of Engineering, a Founding Fellow of the American Association for Artificial Intelligence (AAAI), a Fellow of the American Association for the Advancement of Science (AAAS), a Fellow of the Association for Computing Machinery (ACM), a Foreign Fellow of The Australian Academy of Technological Sciences and Engineering (ATSE), and a Corresponding Member of the Australian Academy of Science. He won the Computers and Thought Award at the 1991 IJCAI (International Joint Conference on Artificial Intelligence). He has been the Cray lecturer at the University of Minnesota, the Mellon lecturer at Dartmouth College, the Hyland lecturer at Hughes, and the Forsythe lecturer at Stanford University. He was co-founding editor of the International Journal of Computer Vision and is a member of the editorial boards of various journals, including Adaptive Behavior, Artificial Life, Applied Artificial Intelligence, Autonomous Robots, and New Generation Computing. He starred as himself in the Errol Morris movie "Fast, Cheap and Out of Control", named after one of his scientific papers; the film, a Sony Classics picture, is now available on DVD.

8.2.2007 - Simulating virtual humans: 25 years of research

Speaker: Nadia Magnenat-Thalmann (University of Geneva)
Language: English

Abstract

Since the early eighties, we have pioneered the field of Virtual Humans. What were the challenges of the last decades, where are we today, and what is next? In this talk, we will survey past, existing, and future research in Virtual Humans. We will show a few results from our European research projects and discuss the future of Virtual Humans and of AR and VR technology. Through case studies, we will show some concrete applications of this research in the medical, film, and simulation fields.

Bio

Prof. Nadia Magnenat-Thalmann has pioneered research into virtual humans over the last 25 years. She obtained several Bachelor's and Master's degrees in various disciplines (Psychology, Biology, and Chemistry) and a PhD in Quantum Physics from the University of Geneva. From 1977 to 1989, she was a Professor at the University of Montreal, where she founded the research lab MIRALab. She was elected Woman of the Year in Greater Montreal for her pioneering work on virtual humans, and her work was presented at the Museum of Modern Art in New York in 1988. She moved to the University of Geneva in 1989, where she founded the Swiss MIRALab, an international, interdisciplinary lab composed of about 30 researchers. She is author or coauthor of a very large number of research papers and books in the field of modeling virtual humans, interacting with them, and living in augmented life. She has received several scientific and artistic awards for her work, mainly on the Virtual Marilyn and the film RENDEZ-VOUS A MONTREAL; more recently, in 1997, she was elected to the Swiss Academy of Technical Sciences and was nominated as a Swiss personality who has contributed to the advance of science in the 150-year history CD-ROM produced by the Swiss Confederation Parliament. She is editor-in-chief of The Visual Computer journal published by Springer Verlag. She has also participated in political events such as the World Economic Forum in Davos, where she was invited to give several talks and seminars.

22.2.2007 - How to evaluate the interestingness of models extracted from data?

Speaker: Prof. Martine Collard (Université de Nice-Sophia Antipolis)
Language: English

Abstract

Decision support systems are expected to offer the functionality to turn large volumes of data, quite incomprehensible in themselves, into smaller quantities of high-quality information that can be understood by human beings. Knowledge discovery from data (KDD), also referred to as Data Mining (DM), is the process of searching through these data to discover previously unknown, interesting, and useful information. From a general point of view, the two main goals of KDD are description and prediction. The overall process consists of multiple operations such as data cleaning and preparation, model extraction, refinement of the results, and presentation. The evaluation of extracted models is a main issue. Current work focusing on this subject is divided into objective approaches, which use interestingness measures, and subjective approaches, which are based on the expression of expert knowledge. However, it remains difficult to select relevant models according to user interest or to compare them to a priori expert knowledge. I will present the KEOPS approach developed by the EXECO team, which sets up a method that introduces expert knowledge into the data mining process. An ontology-driven information system (ODIS) plays a central role in KEOPS, allowing efficient data selection, data preparation, and model interpretation. Furthermore, KEOPS uses an interestingness measure part-way between objective and subjective measures in order to evaluate model relevance according to expert knowledge. Finally, I will present an overview of the activities of the EXECO team on knowledge engineering and information systems.

Keywords: Data mining, interestingness measures, ontologies, knowledge expression
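
As a small illustration of objective interestingness measures, the following Python sketch computes support, confidence, and lift for an association rule A -> B over an invented toy transaction set. KEOPS itself combines such objective measures with expert knowledge, which is not modeled here.

# Toy, invented market-basket transactions.
transactions = [
    {"bread", "butter"}, {"bread", "butter", "milk"}, {"bread", "milk"},
    {"butter", "milk"}, {"bread", "butter"},
]

def support(itemset):
    """Fraction of transactions containing the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    return support(antecedent | consequent) / support(antecedent)

def lift(antecedent, consequent):
    return confidence(antecedent, consequent) / support(consequent)

A, B = {"bread"}, {"butter"}
print(f"support={support(A | B):.2f}  confidence={confidence(A, B):.2f}  lift={lift(A, B):.2f}")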

Bio

Martine Collard is currently Assistant Professor of Computer Science at the University of Nice-Sophia Antipolis (UNSA), France, where she received Master's degrees in Mathematics and in Computer Science and a PhD on "Deductive and object-oriented databases". She is head of the EXECO team, which works on Data Mining and Information Systems modelling in the I3S laboratory at UNSA. Her main research interests currently are knowledge discovery from data (e.g. rule extraction), data management, knowledge quality, links with knowledge management, ontologies, and applications in modelling, scoring, CRM, and biology.