Department of Computer Science and Engineering


First Advisor

Gregg Rothermel

Second Advisor

Sebastian Elbaum

Date of this Version

Spring 4-17-2018


Liang, J. (2018). Cost-effective techniques for continuous integration testing (Master's thesis). University of Nebraska-Lincoln, Lincoln, Nebraska, United States.


A THESIS Presented to the Faculty of The Graduate College at the University of Nebraska In Partial Fulfillment of Requirements For the Degree of Master of Science, Major: Computer Science, Under the Supervision of Professors Gregg Rothermel and Sebastian Elbaum. Lincoln, Nebraska: May, 2018

Copyright (c) 2018 Jingjing Liang


Continuous integration (CI) development environments allow software engineers to frequently integrate and test their code. While CI environments provide advantages, they also consume non-trivial amounts of time and resources. To address this issue, researchers have adapted techniques for test case prioritization (TCP) and regression test selection (RTS) to CI environments.

To date, TCP techniques applied in CI environments have operated on test suites and have not achieved substantial improvements. In this thesis, we use a lightweight approach based on test suite failure and execution history that "continuously" prioritizes commits waiting for execution, in response to the arrival of each new commit and the completion of each commit previously scheduled for testing. We conduct an empirical study on three datasets, and the results show that, after prioritization, our technique can effectively detect failing commits earlier.
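The prioritization idea described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the class name, the fixed history window, and the use of recent failure rate as the priority score are all illustrative assumptions. The key point it shows is that the waiting queue is re-ranked on every dispatch, so scores reflect the latest completions.

```python
class CommitScheduler:
    """Sketch of history-based continuous commit prioritization."""

    def __init__(self, window=10):
        self.window = window   # how many recent runs to consider per suite
        self.history = {}      # suite id -> list of outcomes, True = failed
        self.waiting = []      # (commit_id, suite) pairs awaiting execution

    def _score(self, suite):
        # Priority = recent failure rate; untested suites get top priority.
        runs = self.history.get(suite, [])[-self.window:]
        return 1.0 if not runs else sum(runs) / len(runs)

    def arrive(self, commit_id, suite):
        # Called when a new commit arrives.
        self.waiting.append((commit_id, suite))

    def complete(self, suite, failed):
        # Called when a previously scheduled commit finishes testing.
        self.history.setdefault(suite, []).append(failed)

    def next_commit(self):
        # Re-rank the whole queue on each dispatch, so the ordering is
        # "continuously" updated as arrivals and completions occur.
        if not self.waiting:
            return None
        best = max(self.waiting, key=lambda cs: self._score(cs[1]))
        self.waiting.remove(best)
        return best[0]
```

Under this scheme, a commit whose test suite has been failing recently is dispatched before one whose suite has been passing, which is what lets failing commits surface earlier.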

To date, the state-of-the-art RTS technique for CI environments is based on two windows measured in time. This technique, however, fails to consider the arrival rate of test suites and takes only the results of test suite execution history into account. In this thesis, we present a Count-Based RTS technique, which relies on test suite failures and execution history using two window sizes measured in numbers of test suites, and a Transition-Based RTS technique, which additionally uses test suites' "pass to malfunction" transitions to predict which suites to select. We again conduct an empirical study on three datasets, and the results show that, after selection, the Transition-Based technique detects more malfunctions and more "pass to malfunction" transitions than existing techniques.
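A minimal sketch of the count-based selection rule, with the transition-based extension as an optional check, may help fix the idea. All names and the exact rule shapes here are illustrative assumptions, not the thesis's definitions: a suite is selected if it is new, failed within the last `fail_win` arrivals, has not been executed within the last `exec_win` arrivals, or (transition-based only) recently flipped from pass to fail. Both windows count test-suite arrivals rather than wall-clock time.

```python
def make_selector(fail_win, exec_win, use_transitions=False):
    """Sketch of count-based / transition-based test-suite selection."""
    NEVER = float("-inf")
    state = {}     # suite -> (last_executed, last_failed, last_pass_to_fail),
                   # each recorded as an arrival count
    arrivals = 0

    def on_arrival(suite):
        # Decide whether the arriving suite should be executed.
        nonlocal arrivals
        arrivals += 1
        info = state.get(suite)
        if info is None:
            return True                              # never seen: select
        last_exec, last_fail, last_flip = info
        if arrivals - last_fail <= fail_win:
            return True                              # failed recently
        if arrivals - last_exec > exec_win:
            return True                              # not executed for too long
        # Transition-based extension: a recent pass-to-fail flip
        # also triggers selection.
        return use_transitions and arrivals - last_flip <= fail_win

    def on_result(suite, failed):
        # Record the outcome of an executed suite.
        last_exec, last_fail, last_flip = state.get(suite, (NEVER, NEVER, NEVER))
        flip = last_flip
        if failed and last_exec > last_fail:         # previous run passed
            flip = arrivals
        state[suite] = (arrivals, arrivals if failed else last_fail, flip)

    return on_arrival, on_result
```

The design choice the sketch illustrates is that counting arrivals instead of elapsed time makes the windows adapt to the arrival rate: a burst of incoming suites shrinks each suite's effective time budget, while a quiet period stretches it.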

Advisers: Gregg Rothermel and Sebastian Elbaum