Replication Package for: "The Impact of Test Case Summaries on Bug Fixing Performance: An Empirical Investigation"


Automated test generation tools have been widely investigated with the goal of reducing the cost of testing activities. However, generated tests have been shown not to help developers detect more bugs, even though they reach higher structural coverage than manually written tests. The main reason is that generated tests are difficult to understand and maintain. Our paper proposes an approach, coined TestDescribe, which automatically generates test case summaries of the portion of code exercised by each individual test, thereby improving understandability. We argue that this approach can complement current techniques for automated unit test generation, such as search-based techniques designed to generate a possibly minimal set of test cases. In evaluating our approach we found that (1) developers aided by test case summaries find twice as many bugs, and (2) test case summaries significantly improve the comprehensibility of test cases, which developers consider particularly useful.




This page provides the replication package with (i) the material and working data sets of our study, (ii) the complete results of the survey, and (iii) the raw data for replication purposes and to support future studies. A detailed description of the contents is included in README.txt.

Download Replication Package
Download the survey sent to the developers who participated in the study



Sebastiano Panichella
University of Zurich, Switzerland


Annibale Panichella
Delft University of Technology, Netherlands


Moritz Beller
Delft University of Technology, Netherlands


Andy Zaidman
Delft University of Technology, Netherlands


Harald C. Gall
University of Zurich, Switzerland