
Ability estimation for unidimensional computerized adaptive tests using three-dimensional data: A comparison across item selection methods and levels of dimensional influence

Lori Jean Nebelsick-Gullett, University of Nebraska - Lincoln

Abstract

This investigation was conducted in two parts to assess the effects on ability estimation of implementing different content-balancing methodologies using three-dimensional item pools. In addition, the effects on ability estimation of varying the strength of the dimensions and the degree of relationship between the dimensions were assessed. Part one was a preliminary investigation to assess the fit of a unidimensional model under two levels of inter-dimensional correlation (.57 and .82). Degree of fit was assessed in terms of parameter estimation accuracy and the presence of a general factor. Simulated data were generated for 1,000 simulees at each of the two inter-dimensional correlation levels. Three ability parameters were generated for each simulee in each condition. Ten replications were performed at each level. One hundred item difficulty and discrimination parameters were generated for each dimension. Part two compared the accuracy of ability estimation using traditional, content-balanced, and mini computerized adaptive tests (CATs). Datasets generated for part one were used in the second part of this study. Estimation accuracy was assessed both for the entire range of true ability and within each of six ability blocks. Results for part one suggested the presence of a general factor in all datasets. Results for part two showed that the correlations between estimated and true ability on each dimension were similar across item selection methods, with the correlations for the .82 data being higher and more stable. The correlations between estimated ability and average true ability were consistently higher across all conditions. In terms of estimation accuracy, the traditional and content-balanced adaptive tests had the smallest degree of error for ability across the entire scale. The mini-CATs were more accurate in the central ability blocks and less accurate in the more extreme blocks. These patterns were more pronounced in the .57 data and became less noticeable as the inter-dimensional correlation increased.
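The data-generation design described above (three correlated ability dimensions, 1,000 simulees per condition, 100 items per dimension) can be illustrated with a minimal sketch. The sketch below is not the author's code: it assumes a compensatory two-parameter logistic response model with simple structure, arbitrary distributions for the item parameters, and a standardized total score as a crude stand-in for the unidimensional CAT ability estimate; only the sample sizes and the .57 / .82 correlation levels come from the abstract.

```python
# Hedged illustration of the simulation design in the abstract.
# Assumptions (not from the source): 2PL items with simple structure,
# lognormal discriminations, normal difficulties, total score as the
# unidimensional ability estimate.
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_dataset(n_simulees=1000, n_items_per_dim=100, rho=0.57):
    """Generate correlated three-dimensional abilities and item responses."""
    # Three ability dimensions with a common inter-dimensional correlation.
    cov = np.full((3, 3), rho)
    np.fill_diagonal(cov, 1.0)
    theta = rng.multivariate_normal(mean=np.zeros(3), cov=cov, size=n_simulees)

    # Item discrimination (a) and difficulty (b) parameters per dimension;
    # each item loads on a single dimension, giving a 3 x 100 item pool.
    a = rng.lognormal(mean=0.0, sigma=0.3, size=(3, n_items_per_dim))
    b = rng.normal(loc=0.0, scale=1.0, size=(3, n_items_per_dim))

    # Two-parameter logistic response probabilities, dimension by dimension.
    responses = np.empty((n_simulees, 3 * n_items_per_dim), dtype=int)
    for d in range(3):
        logits = a[d] * (theta[:, [d]] - b[d])   # shape (n_simulees, n_items)
        p = 1.0 / (1.0 + np.exp(-logits))
        cols = slice(d * n_items_per_dim, (d + 1) * n_items_per_dim)
        responses[:, cols] = rng.binomial(1, p)
    return theta, responses

for rho in (0.57, 0.82):
    theta, responses = simulate_dataset(rho=rho)
    # Crude stand-in for a unidimensional ability estimate: standardized
    # total score. The dissertation used CAT-based estimates instead.
    score = responses.mean(axis=1)
    est = (score - score.mean()) / score.std()
    avg_true = theta.mean(axis=1)
    per_dim = [np.corrcoef(est, theta[:, d])[0, 1] for d in range(3)]
    print(f"rho={rho}: r(est, each dim) = {np.round(per_dim, 2)}, "
          f"r(est, avg true) = {np.corrcoef(est, avg_true)[0, 1]:.2f}")
```

Running this sketch reproduces the qualitative pattern noted in the abstract: the correlation between the unidimensional summary and average true ability exceeds the per-dimension correlations, and the gap narrows as the inter-dimensional correlation rises from .57 to .82.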

Subject Area

Educational evaluation

Recommended Citation

Nebelsick-Gullett, Lori Jean, "Ability estimation for unidimensional computerized adaptive tests using three-dimensional data: A comparison across item selection methods and levels of dimensional influence" (1993). ETD collection for University of Nebraska-Lincoln. AAI9415987.
https://digitalcommons.unl.edu/dissertations/AAI9415987
