Machine learning (ML) models that support software engineering tasks have become a hot topic among both academics and practitioners. Once deployed, however, ML models are rarely updated or re-trained. This is a missed opportunity: a vast amount of data is continuously generated during the development process, such as bug reports, user reviews, or repository events, and a static ML model cannot take advantage of it. In other words, while a software project keeps evolving, the ML model that is supposed to support its development practices remains unchanged. As a consequence, the model can lose accuracy over time, potentially even becoming obsolete.
The Melise project investigates how to exploit the data stream created during software development to re-train ML models and thereby improve them. To achieve this, Melise will also incorporate direct feedback from developers: for instance, developers can assess the warnings issued by the ML model, rewarding it when a warning turns out to be correct.