Department of Informatics s.e.a.l

Keynotes

Rob DeLine, The Next IDE: *Informative* Development Environments

Field studies have repeatedly shown that developers spend a large fraction (up to a third) of their time seeking information in order to get their work done. When a developer gets stuck because of missing information, the typical reaction is to walk up and down the hall, interrupting colleagues to ask questions. When the necessary colleagues are away, the developer must set the task aside for later. In short, this constant need for missing information leads to frequent communication, interruptions, and multitasking, which development environments are only beginning to support. In the age of mining software archives, can we do a better job of providing answers to developers' questions? In this talk, we'll look in detail at the questions developers ask. In the spirit of the workshop, I'll premiere a prototype called Code Canvas, which rethinks the user interface of development environments to make information seeking a first-class activity.

Bio: Rob DeLine is a Principal Researcher at Microsoft Research, working at the intersection of software engineering and human-computer interaction. His research group designs development tools in a user-centered fashion: they conduct studies of development teams to understand their work practice and prototype tools to improve that practice. Rob has a background in both HCI and software engineering. His master's thesis was the first version of the Alice programming environment (University of Virginia, 1993), and his PhD was in software architecture (Carnegie Mellon University, 1999).

Slides (PDF, 2 MB)

Prem Devanbu, Where the Bugs Are (With apologies to George Hamilton and Dolores Hart)

All the girls (and many boys) want to know where the bugs are. But how do we find out where they are? When a bug is fixed, programmers are supposed to report where it is repaired. The association of the bug and the repair is critical to quality-improvement efforts, such as bug prediction and defect etiology and avoidance. But what if programmers don't tell us where all the bugs are fixed? If they fail to report bug fixes, or worse, do the reporting in a selective manner, we might get a lousy sample of bug-repair data. But what the heck does "lousy sample" mean? How do we know if a sample is "lousy"? Even if the samples are "lousy", does that matter? We try to answer some of these questions in this talk.
Warning: This talk is a little depressing. Bring Grappa. Or Chocolate. Preferably both.
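
As background for the sampling question, here is a minimal Python sketch of the heuristic commonly used in mining studies to recover bug-fix links (an illustration, not necessarily the method used in this work): scan commit messages for bug-tracker IDs and join them against the bug database. The regular expression and field names below are illustrative assumptions; fixes whose messages never mention a bug ID simply drop out of the linked sample, which is exactly how selective reporting can bias the data.

import re

# Illustrative heuristic: a commit counts as a bug fix only if its message
# mentions a tracker ID such as "Bug 1234" or "fixes #1234".
BUG_ID_PATTERN = re.compile(r'(?:bug|issue|fix(?:es|ed)?)\s*#?\s*(\d+)', re.IGNORECASE)

def link_fixes(commits, known_bug_ids):
    """Return {bug_id: [commit_hash, ...]} for commits that mention a known bug ID."""
    links = {}
    for commit in commits:                      # commit: {'hash': ..., 'message': ...}
        for match in BUG_ID_PATTERN.finditer(commit['message']):
            bug_id = int(match.group(1))
            if bug_id in known_bug_ids:         # skip numbers that are not tracker IDs
                links.setdefault(bug_id, []).append(commit['hash'])
    return links

# Fixes whose commit messages omit the bug ID are invisible to this heuristic,
# so the linked sample can systematically under-represent "silent" repairs.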

Bio: Prem Devanbu received his B.Tech from IIT Madras before you were born, and his PhD from Rutgers in 1994. After spending nearly 20 years in New Jersey, at Bell Labs and its various offshoots, he left behind traffic, humidity, grumpy people, and decent pastrami sandwiches, and joined the CS faculty at UC Davis in late 1997. His research interests are now squarely in empirical software engineering, although he also tends to get excited about a few other things, like end-user regression testing and crowd-sourcing governmental oversight. This research was the result of a thoroughly enjoyable collaboration with Adrian Bachmann, Avi Bernstein, Chris Bird, Vladimir Filkov, and Mhd. Foyzur Rahman. Devanbu was supported by the NSF and a gift from IBM.

Slides (PDF, 15 MB)

Audris Mockus, Measurement in Science and Software Engineering

Measurement is the essence of science: "To measure is to know"; without data, there are only opinions. In engineering, data cannot help if you do not understand it and use it to make decisions. As many professional and social activities move online and are supported by software tools, vast amounts of data about them become available. Practical applications that use various models and methods to solve domain-specific problems have been demonstrated in advertising, marketing, business intelligence, and the sciences. It is therefore tempting to apply these techniques to software engineering data, often without adequate adaptation to a domain with completely different needs. Furthermore, as the field of Computer Science matures, it requires more rigorous empirical approaches, and the same can be said about the rapidly maturing field of Mining Software Archives/Repositories. We therefore discuss common issues facing researchers with a Computer Science background as they move into empirical areas that require several fundamentally different concepts: variation, reproducibility, and human factors. In addition to methodological issues, we also look at the future challenges posed by the need to integrate ever more disparate sources of data, the tradeoffs between the most easily available and the more meaningful measures, and the need to address core software engineering concerns.

Bio: Audris Mockus is interested in quantifying, modeling, and improving software development. He designs data-mining methods to summarize and augment software change data, interactive visualization techniques to inspect, present, and control the development process, and statistical models and optimization techniques to understand the relationships among people, organizations, and characteristics of a software product. Audris Mockus received a B.S. and an M.S. in Applied Mathematics from the Moscow Institute of Physics and Technology in 1988. He received an M.S. in 1991 and a Ph.D. in 1994, both in Statistics, from Carnegie Mellon University. He works at Avaya Labs Research; previously, he worked in the Software Production Research Department of Bell Labs.

Slides (PDF, 157 KB)

Gail Murphy, Context as an Antidote to Information Overload

Software developers who perform evolution tasks on a software system face an avalanche of information daily. These developers must deal with multiple source code elements, bug reports, system test data, questions from team members, and so on. Information mined from a project's historical archives can provide helpful cues to developers as they perform their work, but how can this historical information be delivered effectively, given the already overwhelming amount of information facing developers? In this talk, I will describe how various representations of a developer's context can help manage information overload, improve team awareness, and provide an anchor for interpreting historical project information.
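
As an illustration only (not the model presented in the talk), here is a minimal Python sketch of one possible representation of a developer's context: a degree-of-interest score per program element that grows with interaction and decays over time, so that only currently relevant elements stay prominent. The weights and decay rate are assumptions made up for the example.

import math
import time

SELECT_WEIGHT = 1.0       # assumed weight for selecting an element
EDIT_WEIGHT = 3.0         # assumed weight for editing an element
DECAY_PER_SECOND = 0.001  # assumed exponential decay rate

class TaskContext:
    """Tracks a per-element interest score that rises with interaction and decays with time."""

    def __init__(self):
        self._interest = {}  # element id -> (score, time of last update)

    def _decayed(self, element, now):
        score, last = self._interest.get(element, (0.0, now))
        return score * math.exp(-DECAY_PER_SECOND * (now - last))

    def record(self, element, kind, now=None):
        now = time.time() if now is None else now
        boost = EDIT_WEIGHT if kind == 'edit' else SELECT_WEIGHT
        self._interest[element] = (self._decayed(element, now) + boost, now)

    def top(self, n=10, now=None):
        now = time.time() if now is None else now
        ranked = sorted(self._interest, key=lambda e: self._decayed(e, now), reverse=True)
        return ranked[:n]

Views such as a package explorer, search results, or mined historical cues could then be filtered to the top-ranked elements of the current task, which is one way a context representation can curb information overload.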

Bio: Gail Murphy is a Professor in the Department of Computer Science at the University of British Columbia. She received a B.Sc. degree from the University of Alberta and M.S. and Ph.D. degrees from the University of Washington. She works primarily on building simpler and more effective tools to help developers manage software evolution tasks. She has received the AITO Dahl-Nygaard Junior Prize, an NSERC Steacie Fellowship, a CRA-W Anita Borg Early Career Award, and a UW College of Engineering Diamond Early Career Award. In 2008, she served as the program chair for the ACM SIGSOFT FSE Conference, and she will serve as co-program chair for the ICSE 2012 conference. One of the most rewarding parts of her career has been collaborating with many very talented graduate and undergraduate students.

Slides and movie

Slides (PDF, 1 MB)

Alex Orso, Repository Mining and Program Analysis & Testing: Better Together?

Software testing and analysis and the mining of software archives are not completely orthogonal areas of research. Unfortunately, although these two areas could benefit from each other, the synergies between them are rarely exploited in current research. In this talk, we will first present a specific example of this issue: we will discuss a technique for testing evolving software that could greatly benefit from information that is available in software archives (but is currently not mined). We will then look at the problem from a more general standpoint and consider other ways in which repository mining could inform software testing and analysis, and vice versa. We will also discuss possible reasons why research across these two areas is still limited and propose ways to foster and facilitate research in this direction.
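
To make the kind of synergy alluded to above concrete, here is a minimal Python sketch (an illustration, not the technique presented in the talk) of regression test selection driven by change information: given a coverage map from tests to the files they exercise and the set of files changed in a commit, both of which could be mined from a repository, re-run only the tests that touch a changed file. All names and data are hypothetical.

from typing import Dict, List, Set

def select_tests(coverage: Dict[str, Set[str]], changed_files: Set[str]) -> List[str]:
    """Return the tests whose covered files intersect the changed files."""
    return sorted(test for test, files in coverage.items() if files & changed_files)

# Hypothetical example data:
coverage = {
    'test_parser': {'src/parser.py', 'src/lexer.py'},
    'test_report': {'src/report.py'},
}
print(select_tests(coverage, {'src/lexer.py'}))  # -> ['test_parser']

In practice, the coverage map would come from a coverage tool and the changed-file set from version-control history, which is where repository mining and testing research could meet.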

Bio: Alessandro Orso is an associate professor in the College of Computing at the Georgia Institute of Technology. He received his M.S. degree in Electrical Engineering (1995) and his Ph.D. in Computer Science (1999) from Politecnico di Milano, Italy. Since March 2000, he has been at Georgia Tech, first as research faculty and now as an associate professor. His area of research is software engineering, with emphasis on software testing and analysis. His interests include the development of techniques and tools for improving software reliability, security, and trustworthiness, and the validation of such techniques on real systems. Dr. Orso is a member of the ACM and the IEEE Computer Society.

Slides (PDF, 1 MB)
