Computer-automated scoring systems and structured employment interviews: An examination of the impact of item type, scoring rubric complexity, and training data on the quality of scores

Piotr J Juszkiewicz, University of Nebraska-Lincoln

Abstract

Recent developments in computer-automated scoring (CAS) systems offer a promising alternative to the use of human experts in scoring responses to open-ended questions on structured employment interviews. Despite advances in the field, additional research is needed to evaluate the quality of scores generated by modern CAS systems and the factors affecting CAS system performance. This study investigated the impact of item type, scoring rubric, and training data set on the quality of scores generated by three different CAS systems on a sixty-item structured employment interview developed by a prominent testing organization. The degree of agreement between the scores given by human experts and those given by the CAS systems on each of the sixty items was used as the measure of the quality of CAS-generated scores. It was first hypothesized that the quality of CAS-generated scores would be higher on less complex item types. Although significant differences in the quality of CAS scores were found as a function of item type, the pattern of differences did not support this hypothesis. Second, it was hypothesized that the quality of CAS scores would be higher on items for which human experts used less complex scoring rubrics to evaluate the responses of job applicants. The results provided no evidence of the hypothesized relationship. Finally, it was hypothesized that the quality of CAS scores would be higher on items for which the training data set included a comparable number of responses in each of the two possible score categories. The results supported this hypothesis for two of the three systems studied. The findings provide insight into issues and factors that should be considered in future research on CAS systems.
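The abstract does not name the specific agreement statistic used to compare human and CAS scores. For two raters assigning items to one of two score categories, a common chance-corrected choice is Cohen's kappa; the Python sketch below is illustrative only, with hypothetical scores, and is not drawn from the dissertation itself.

# Illustrative sketch only: Cohen's kappa for agreement between a human
# expert and a CAS system on binary item scores. The data are hypothetical;
# the dissertation does not specify which agreement statistic was used.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical scores."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal proportions.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_expected = sum(counts_a[c] * counts_b[c] for c in categories) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical binary scores (0/1) from a human expert and a CAS system.
human = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
cas   = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(f"Cohen's kappa: {cohens_kappa(human, cas):.2f}")  # 0.58 here

A kappa near 1 indicates the CAS system reproduces expert scores well beyond chance, while a value near 0 indicates agreement no better than chance; in the study's terms, higher agreement on an item corresponds to higher-quality CAS-generated scores for that item.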

Subject Area

School administration

Recommended Citation

Juszkiewicz, Piotr J, "Computer-automated scoring systems and structured employment interviews: An examination of the impact of item type, scoring rubric complexity, and training data on the quality of scores" (2004). ETD collection for University of Nebraska-Lincoln. AAI3159548.
https://digitalcommons.unl.edu/dissertations/AAI3159548
