Department of Informatics


Details Colloquium Fall 2010

30.9.2010 - Cooperative Digital Artefacts

Speaker: Prof. Alois Ferscha
Host: Rolf Pfeifer, IFI, UZH


Challenged by (i) technological progress and (ii) the vast pervasion of global networks, Cooperative Systems research today addresses a whole new cosmos of research issues that goes far beyond traditional computer-mediated "person-to-person" technologies, towards self-coordinated "networks-of-things" technologies. This presentation will outline aspects of a research agenda raised by the miniaturization and invisible integration of computing, communication and software technology into everyday objects like appliances, commodities, machinery, tools and environments. Such "Digital Artefacts", built by pervading networked embedded systems technology into literally every thing, become increasingly interconnected, diverse and heterogeneous, raising the challenge of an operative and semantically meaningful interplay among each other. One approach to address this challenge is to design and implement systems able to "manage" and "organize" themselves for cooperation. Self-management here stands for the ability of a single Digital Artefact to describe itself, to select and use adequate sensors to capture information, and to assess its context. Self-organization stands for the ability of a group of possibly heterogeneous Digital Artefacts to establish a spontaneous cooperation network based on interest, purpose or goal, and to negotiate and fulfil a group goal.


Prof. Dr. Alois Ferscha received the Mag. degree in 1984, and a PhD in business informatics in 1990, both from the University of Vienna, Austria. From 1986 through 2000 he was with the Department of Applied Computer Science at the University of Vienna at the levels of assistant and associate professor. In 2000 he joined the University of Linz as full professor, where he is now head of the Department for Pervasive Computing and the speaker of the JKU Pervasive Computing Initiative.

Prof. Ferscha has published on topics related to parallel and distributed computing. He has been the project leader of several national and international research projects, e.g.: Network Computing, Performance Analysis of Parallel Systems and their Workload, Parallel Simulation of Very Large Office Workflow Models, Distributed Simulation on High Performance Parallel Computer Architectures, Modelling and Analysis of Time Constrained and Hierarchical Systems, Broadband Integrated Satellite Network Traffic Evaluation, and Distributed Cooperative Environments.

He has been a visiting researcher at the Dipartimento di Informatica, Universita di Torino, Italy, at the Dipartimento di Informatica, Universita di Genoa, Italy, at the Computer Science Department, University of Maryland at College Park, College Park, Maryland, U.S.A., and at the Department of Computer and Information Sciences, University of Oregon, Eugene, Oregon, U.S.A. He has served on the committees of several conferences such as PERVASIVE, UbiComp, WWW, PADS, DIS-RT, SIGMETRICS, MASCOTS, TOOLS, PNPM, ICS, etc. Prof. Ferscha is a member of the OCG, GI, ACM and IEEE, and holds the Heinz Zemanek Award for distinguished contributions in computer science.

28.10.2010 - Inductive Dependency Parsing of Natural Language Text

Speaker: Prof. Joakim Nivre
Host: Martin Volk, IFI, UZH


Broad-coverage parsing of natural language text poses special challenges with respect to robustness, disambiguation and efficiency. Inductive dependency parsing is a dependency-based, data-driven approach to natural language parsing that guarantees robust and efficient disambiguation, and that has been shown to give high empirical accuracy for close to thirty different languages. In this talk, I begin by characterizing the problem of parsing natural language text and laying down evaluation criteria for text parsers. I then give an overview of the transition-based approach to dependency parsing, which is based on abstract transition systems for robust derivation of dependency structures, statistical machine learning for accurate disambiguation, and greedy deterministic search algorithms for efficient parsing. I conclude with a survey of empirical results from a wide range of languages.
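The greedy transition-based approach described above can be illustrated with a minimal sketch (this is not Nivre's MaltParser implementation; the `score` function here is a hypothetical stand-in for the trained statistical classifier that a real system would use):

```python
# Minimal arc-standard transition-based dependency parser (sketch).
# Words are numbered 1..n; index 0 is the artificial root node.

def legal(stack, buffer):
    """Transitions permitted in the current configuration."""
    ts = []
    if buffer:
        ts.append("SHIFT")
    if len(stack) >= 2 and stack[-2] != 0:   # the root may not become a dependent
        ts.append("LEFT-ARC")
    if len(stack) >= 2:
        ts.append("RIGHT-ARC")
    return ts

def parse(n_words, score):
    """Greedily parse words 1..n_words, choosing the highest-scoring legal
    transition at each step. Returns head[i] = index of the head of word i."""
    stack, buffer, head = [0], list(range(1, n_words + 1)), {}
    while buffer or len(stack) > 1:
        t = max(legal(stack, buffer), key=lambda tr: score(tr, stack, buffer))
        if t == "SHIFT":                      # move next word onto the stack
            stack.append(buffer.pop(0))
        elif t == "LEFT-ARC":                 # top of stack becomes head of second
            dep = stack.pop(-2)
            head[dep] = stack[-1]
        else:                                 # RIGHT-ARC: second becomes head of top
            dep = stack.pop()
            head[dep] = stack[-1]
    return head
```

Each transition either consumes a word from the buffer or removes one node from the stack, so a sentence of n words is parsed in a linear number of steps; this is the source of the efficiency guarantee mentioned in the abstract. For example, with a toy scorer that always prefers SHIFT, then RIGHT-ARC, `parse(3, score)` attaches each word to its left neighbour, returning `{3: 2, 2: 1, 1: 0}`.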


Joakim Nivre is Professor of Computational Linguistics at Uppsala University. He holds a Ph.D. in General Linguistics from the University of Gothenburg and a Ph.D. in Computer Science from Växjö University. Joakim's research focuses on data-driven methods for natural language processing, in particular for syntactic and semantic analysis. He is one of the main developers of the transition-based approach to data-driven dependency parsing, described in his 2006 book "Inductive Dependency Parsing" and implemented in the MaltParser system. Systems developed using MaltParser were tied for first place in the shared tasks on multilingual dependency parsing at the Conference on Computational Natural Language Learning in both 2006 and 2007. Joakim's current research interests include the analysis of mildly non-projective dependency structures, the integration of morphological and syntactic processing for richly inflected languages, and the modeling of human sentence processing.

11.11.2010 - Modelling for the Cloud

Speaker: Prof. Bernhard Rumpe
Host: Martin Glinz, IFI, UZH


Cloud Computing is one of the next big challenges that computer science has to master. While cloud computing further decouples the software that runs in the cloud from hardware restrictions, it raises new questions such as:

  1. What is the most appropriate structure of software for the cloud? Components, services, what else?
  2. Can we provide new functions, and extend and compose existing ones, in a more abstract and more efficient way?
  3. How to configure and compose cloud functionality?
  4. How to control efficiency (storage, communication intensity and computation power) while giving up control of technical issues?
  5. How about Quality of Service, Reliability and Security?

We discuss how these questions have been partially answered and how modelling techniques developed over the last decade can help to improve the answers. We demonstrate that modelling techniques help to easily define new services, to compose previously unrelated services, and to configure services flexibly. We discuss how to ensure minimal costs and overhead for storage and communication using optimizing model-based generation, and how to constructively incorporate security into new services.


Bernhard Rumpe is chair of the Department for Software Engineering at RWTH Aachen University, Germany. Before that he chaired the Software Engineering Institute at the TU Braunschweig. He received his Ph.D. and Habilitation at the TU Munich.

His main interests are software development methods and techniques that benefit from both rigorous and practical approaches. This includes the impact of new technologies such as model engineering based on UML-like notations and domain-specific languages, evolutionary, test-based methods, and software architecture, as well as the methodical and technical implications of their use in industry. He has furthermore contributed to the formal methods and UML communities. In 2009 he started combining modelling techniques and Cloud Computing.

He is author and editor of eight books and Editor-in-Chief of the Springer International Journal on Software and Systems Modeling.

He is co-founder and steering-committee member of the GI expert committee on "Modelling" in Germany, and has served as Program Committee Chair, PC member, and workshop organizer on various occasions.

18.11.2010 - Opportunistic Communication in Transportation

Speaker: Prof. Lars Wolf
Host: Burkhard Stiller, IFI, UZH


Current communication systems assume the existence of continuous end-to-end connectivity -- from the source via intermediate nodes to the final destination. In several scenarios, especially while on the move and in transportation, this end-to-end connection might not exist, at least not with the desired quality at an acceptable cost. Moreover, such a communication model is not always needed. Instead, for several applications a delay- and disruption-tolerant networking model may fit well. For instance, environmental monitoring based on sensors mounted on public transportation vehicles is such an application. The mobility of the vehicles helps to forward data: over time, several communication opportunities arise when vehicles meet each other or come into range of an infrastructure network. Therefore, it must also be decided which network should be used, and when. In this talk I will discuss our work on DTNs and opportunistic communication, including application issues and networking techniques.


Lars C. Wolf received the diploma degree in 1991 and the doctoral degree in 1995, both in computer science. From 1991 to 1996 he worked at IBM's European Networking Center in Heidelberg, Germany. In 1996 he joined the Technical University of Darmstadt as assistant professor. Dr. Wolf joined Universitaet Karlsruhe (TH), Germany, in 1999, where he was associate professor in the computer science department and alternate director of the computer center. Since spring 2002 Lars Wolf has been full professor of computer science at the Technische Universität Braunschweig, where he is head of the Institute of Operating Systems and Computer Networks. He served as head of the department (Math+CS) from 2005 to 2007. Lars serves on the editorial boards of international journals and as chair and member of the program committees of several international conferences and workshops. His current research interests include, among others, mobile and wireless communication, especially ad-hoc and sensor networks, including vehicular and delay-tolerant networks.

25.11.2010 - From Web Engineering to User-Adaptation: How to Access the Web of Data

Speaker: Prof. Geert-Jan Houben
Host: Avi Bernstein, IFI, UZH


User access to documents on the World-Wide Web has been studied in fields such as Web Engineering and Adaptive Web-based Systems. It has traditionally been characterized by a focus on making information accessible via navigation in a hyperlinked space of documents. A main challenge is the automatic creation of navigation over database contents. Model-driven approaches have proven effective for generating such web applications. When it comes to making them user-adaptive, the additional challenge is to include a user model that makes the navigation and page creation dynamically dependent on the individual user and the user's browsing. Over time researchers have added semantic technologies to meet this challenge of user-adaptation: for example, to model the user and integrate user models, to model the adaptation, and of course to model the contents. This has influenced the model-driven approaches and now makes it possible to pursue the same ambition in the new Web of Data, with its much finer granularity. In this talk we will address the evolution in the engineering of Web-based systems and user-adaptation, and discuss the current challenges for the Web of Data.


Prof. Dr. ir. Geert-Jan Houben holds a doctorate in computer science from the Eindhoven University of Technology (TU/e), the Netherlands (1990). Since then he has worked as an assistant and associate professor at the TU/e, as an IT consultant with several consultancy firms in the Netherlands, as a guest professor at the University of Antwerp, Belgium, as a guest researcher at the Centre for Mathematics and Computer Science (CWI), the Netherlands, and as a full professor in information systems at the Free University of Brussels (VUB), Belgium, also acting as co-director of the WISE research lab at VUB. Since the summer of 2008 he has been full professor in Web-based information systems at Delft University of Technology (TUD), the Netherlands, performing research on large-scale information systems, specifically information systems that involve Web and Semantic Web technology. He has been involved in the organization of many events in the fields of web engineering and adaptation, is a member of editorial boards such as ACM TWEB and JWE, and has acted as PC Chair for conferences in web engineering and user modeling.

2.12.2010 - Illustrative Visualization

Speaker: Prof. Eduard Gröller
Host: Renato Pajarola, IFI, UZH


Illustrations play a major role in the education process. Whether used to teach a surgical or radiologic procedure, to illustrate normal or aberrant anatomy, or to explain the functioning of a technical device, illustration significantly impacts learning. One of the key concepts for creating an expressive illustration is abstraction. Abstraction introduces a distortion between the visualization and the underlying model according to the communicative intent of the illustration. Inspired by observations from hand-made illustrations, similar techniques for the generation of rendered images have been developed. These techniques work on different levels: low level abstraction techniques (stylized depiction methods) deal with how objects should be presented, while high level abstraction techniques (smart visibility approaches) are concerned with what should be visible and recognizable. We review several existing approaches from both categories and describe concepts used in the design of a system for creating interactive illustrations directly from volumetric data. A fully dynamic three-dimensional illustration environment is discussed which directly operates on volume data. Single images have the aesthetic appeal of traditional illustrations, but can be interactively altered and explored. Furthermore, we discuss several illustrative concepts like style transfer functions, exploded views, semantic layers, and illustration-inspired integrated views.


Prof. Dr. Eduard Gröller is Professor at the Institute of Computer Graphics and Algorithms (ICGA), Vienna University of Technology. In 1993 he received his PhD from the same university. His research interests include computer graphics, flow visualization, volume visualization, medical visualization, and information visualization. He is heading the visualization group at ICGA, which performs basic and applied research projects in the area of scientific visualization. Dr. Gröller has given lecture series on scientific visualization at various other universities (Tübingen, Graz, Praha, Bahia Blanca, Magdeburg, Bergen). He is a scientific proponent and key researcher of the VRVis Kplus center of excellence, which performs applied research in visualization, rendering, and visual analysis. Dr. Gröller has been adjunct professor of computer science at the University of Bergen, Norway, since 2005. He has co-authored more than 170 scientific publications and acted as a reviewer for numerous conferences and journals in the field. He also has served and serves on various program and paper committees; examples include Computers & Graphics, IEEE Transactions on Visualization and Computer Graphics, EuroVis, the IEEE Visualization conference, and the Eurographics conference. He has been papers co-chair of Volume Graphics 2005, IEEE Visualization 2005 and 2006, and Eurographics 2006. Since 2008, Dr. Gröller has been chief editor of the journal Computer Graphics Forum. He became a fellow of the Eurographics association in 2009. Dr. Gröller is head of the working group on computer graphics of the Austrian Computer Society and a member of the IEEE Computer Society, ACM (Association for Computing Machinery), GI (Gesellschaft für Informatik), and OCG (Austrian Computer Society).

9.12.2010 - Some Remarks on Research Methods

Speaker: Prof. Ulrich Frank
Host: Gerd Schwabe, IFI, UZH


Research in IS is geared to the ideal of the behaviorist social sciences. In recent years, however, there has been a plethora of publications questioning whether a neo-positivistic approach is the only proper research method in Information Systems. When it comes to the realisation of future business models and the deployment of future technologies, the development of methods and design artefacts seems more suitable. However, such an approach - sometimes referred to as "design science" or "constructive" - implies severe epistemological problems, which are mainly related to the very notion of scientific research. Does a design artefact qualify as a result of scientific research? If so, what is the difference between a scientific design artefact and the artefacts produced by consultants or software firms? Within the neo-positivistic paradigm, research results are hypotheses or theories, which are evaluated by testing them against reality using accepted research methods and common concepts of truth, such as the 'correspondence theory of truth'. However, this kind of evaluation is not possible for design-oriented research, unless one is willing to wait the many years it would take to apply and test the approach in practice. Against this background, the presentation will focus on three questions: On what criteria could a decision between a behaviorist approach and a constructive approach be based? What could a research method look like that fits the specific requirements of a constructive approach? How could both approaches to IS research be combined in a synergetic way?


Not available in English.