Regression testing is regularly performed on software systems to ensure that changes have not inadvertently affected existing system behavior. The simplest, yet most expensive, strategy for regression testing, retest-all, is to execute every test case in the test suite after each newly introduced change. However, with increasingly large test suites, limited resources, and shorter software (delivery) lifecycles, this approach becomes too costly or even infeasible within adequate feedback time. Since the 1970s, these challenges have been the subject of research on regression test optimization (RTO), which aims to improve the cost-effectiveness of regression testing.
The goal of this project is to better understand current challenges in regression testing and to build RTO techniques, such as test prioritization and test selection, that address them appropriately. These challenges range from multi-language code bases and high-frequency testing in continuous integration environments to domain-specific resource constraints.
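To illustrate the kind of technique involved, the following sketch implements the classic greedy "additional coverage" test prioritization heuristic: repeatedly pick the test that covers the most not-yet-covered code elements. The test names and coverage data are hypothetical, and this is only one simple instance of test prioritization, not the project's specific approach.

```python
def prioritize_by_additional_coverage(coverage):
    """Order tests so each next test adds the most new coverage.

    coverage: dict mapping a test name to the set of code elements
    (e.g. statements) it covers.
    """
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test contributing the most not-yet-covered elements;
        # break ties deterministically by test name.
        best = max(remaining, key=lambda t: (len(remaining[t] - covered), t))
        order.append(best)
        covered |= remaining.pop(best)
    return order


# Hypothetical coverage data: test -> set of covered statements.
coverage = {
    "test_login": {1, 2, 3},
    "test_logout": {3, 4},
    "test_profile": {5},
    "test_search": {1, 2},
}
print(prioritize_by_additional_coverage(coverage))
# → ['test_login', 'test_profile', 'test_logout', 'test_search']
```

Running the prioritized prefix of a suite first tends to reveal faults earlier, which is the key cost-effectiveness argument behind prioritization.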
We focus on the following research questions:
- How are regression test suites empirically composed in terms of different test levels?
- How cost-effective are static, dynamic, and predictive RTO techniques?