Department of Computer Science and Engineering





J. Saddler, "EventFlowSlicer: A Goal-based Test Case Generation Strategy for Graphical User Interfaces: A Thesis," Master's thesis, University of Nebraska, Lincoln, 2016.


A THESIS Presented to the Faculty of The Graduate College at the University of Nebraska in Partial Fulfillment of Requirements for the Degree of Master of Science, Major: Computer Science, Under the Supervision of Myra Cohen. Lincoln, Nebraska: August 2016

Copyright (c) 2016 Jonathan Saddler


Automated test generation techniques for graphical user interfaces include model-based approaches that generate tests from a graph or state machine model of the interface, capture-replay methods that require the user to specify and demonstrate each test case individually, and modeling-language approaches that provide templates for abstract test cases. There has been little work, however, in automated goal-based testing, where the goal is a realistic user task, a function, or an abstract behavior. Recent work in human performance regression testing (HPRT) has shown a need to generate multiple test cases that execute the same user task in different ways; however, that work is limited in two respects: it lacks efficient test generation techniques, and only a single type of goal has been considered.

In this thesis we expand the notion of goal-based interface testing to generate tests for a variety of goals. We develop a direct test generation technique, EventFlowSlicer, that is more efficient than the one used in human performance regression testing, reducing run times by 92.5% on average for test suites between 9 and 26 steps, and by 63.1% across all test suites. EventFlowSlicer generates test cases for additional types of abstract goals beyond those used in HPRT and introduces new logical constraints to support them.

Our evaluation on 21 realistic tasks shows that EventFlowSlicer can generate test cases based on a user goal, and that the number generated for each goal is non-trivial, more than can be easily captured manually. It generates on average 38, and as many as 200, test cases that all achieve the same goal for a specified task.

Advisor: Myra B. Cohen