Buros-Nebraska Series on Measurement and Testing

Date of this Version

1991

Document Type

Article

Citation

From: The Computer and the Decision-Making Process, edited by Terry B. Gutkin and Steven L. Wise (Hillsdale, New Jersey, Hove & London: Lawrence Erlbaum Associates, 1991).

Comments

Copyright © 1991 by Lawrence Erlbaum Associates, Inc. Digital Edition Copyright © 2012 Buros Center for Testing. This book may be downloaded, saved, and printed by an individual for their own use. No part of this book may be re-published, re-posted, or redistributed without written permission of the holder of copyright.

Abstract

As amply demonstrated by the chapters in this volume, computer applications have pervaded all aspects of psychological practice. Although thought by some to be relatively new (Nolen & Spencer, 1986), semiautomatic scoring of the Strong Vocational Interest Blank was accomplished more than 50 years ago (Campbell, 1968) and systems of computer-based test interpretation have been operational for 25 years (Fowler, 1985).

DEVELOPMENT OF ADMINISTRATION AND INTERPRETATION PROGRAMS

Early automated programs typically focused upon the scoring or interpretation of a single psychological test. Most frequently, that test was the Minnesota Multiphasic Personality Inventory (Fowler, 1985) but the Rorschach was interpreted as well (Piotrowski, 1964). In addition to automated interpretation, there were attempts to administer existing psychological tests directly by computer. The MMPI was again the test of choice (Lushene, O'Neil, & Dunn, 1974) although the Wechsler Adult Intelligence Scale (Elwood, 1972), Slosson Intelligence Test (Hedl, O'Neil, & Hansen, 1973), Peabody Picture Vocabulary Test (Klinge & Rodziewicz, 1976), and the California Psychological Inventory (Scissons, 1976) were also administered by computer.

Efforts to equate the conventional MMPI with computer-administered versions have continued unabated. White, Clements, and Fowler (1985) administered the full-length MMPI via microcomputer and standard booklet to 150 volunteer undergraduates. The two MMPI versions were generally equivalent in terms of mean scale scores, test-retest correlations, and stability of high-point codes. There was, however, a greater tendency for the computerized version to result in larger numbers of "cannot say" responses. Rozensky, Honor, Rasinski, Tovian, and Herz (1986) investigated the attitudes of psychiatric patients toward computerized vs. conventional MMPI administrations. The computer group found the testing experience to be more interesting, more positive, and less anxiety-provoking than did the paper-and-pencil group. The equivalency of other conventional personality (Katz & Dalby, 1981; Lukin, Dowd, Plake, & Kraft, 1985; Skinner & Allen, 1983; Wilson, Genco, & Yager, 1986), neuropsychological (DeMita, Johnson, & Hansen, 1981), cognitive ability (Beaumont, 1981; Eller, Kaufman, & McLean, 1986), and academic (Andolina, 1982; Wise & Wise, 1987) tests to their computerized versions is also being widely explored.

The promise of parallel automated test forms has provoked investigations of the differences between computerized and conventional item presentations and their possible impact upon test reliability and validity (Hofer & Green, 1985). Jackson (1985) reviewed the evidence regarding equivalence of conventional and computerized tests and posited four methodological differences: (1) modifications in the method of presenting stimulus material; (2) differences in the task required of the examinee; (3) differences in the format for recording responses; and (4) differences in the method of interpretation. Despite these threats to equivalence, Moreland (1985) opined that "the bulk of the evidence on computer adaptions of paper-and-pencil questionnaires points to the tentative conclusion that non-equivalence is typically small enough to be of no practical consequence, if present at all" (p. 224). A more cautious note was sounded by Hofer and Green (1985). They suggested that for most computer-presented tests, "practitioners will have to use good judgment in interpreting computer-obtained scores, based on the available but inconclusive evidence" (p. 831). This conservative opinion seems well founded if automated testing is to influence the critical classification, placement, and treatment decisions made by psychologists.

Computer-interpreted Tests

Computerized interpretation of the MMPI has remained a major line of inquiry. Honaker, Hector, and Harrell (1986) asked psychology graduate students and practicing psychologists to rate the accuracy of interpretative reports for the MMPI that were labeled as generated by either a computer or a licensed psychologist. Their results demonstrated similar accuracy ratings for computer-generated and clinician-generated reports and did not support the claim that computer-generated reports are assigned more credibility than is warranted. Butcher (1987) reviewed early MMPI systems, summarized desirable attributes of automated systems, and described the development and use of the Minnesota Clinical Interpretive Report (University of Minnesota Press, 1982) computerized MMPI interpretive system. Limited attention has been given to automated interpretations of other personality tests (Exner, 1987; Greene, Martin, Bennett, & Shaw, 1981; Harris, Niedner, Feldman, Fink, & Johnston, 1981; Lachar, 1984), neuropsychological measures (Adams & Heaton, 1985; Adams, Kvale, & Keegan, 1984), and ability and achievement instruments (Brantley, 1986; Hasselbring & Crossland, 1981; Johnson, Willis, & Danley, 1982; Oosterhof & Salisbury, 1985; Webb, Herman, & Cabello, 1986).

As noted by Moreland (1985), investigations of the accuracy of computer-based clinical interpretations of personality tests have been limited almost exclusively to the MMPI. A thorough review of the types of MMPI validity studies, computer interpretation systems, and outcomes is presented by Moreland (1987). He summarized these findings by concluding:

Things look pretty good for computer-based MMPI interpretations. Consumers give them high marks, and the results of properly controlled studies indicate that this high acceptance rate is not the result of generalized reports that are equally applicable to most clients. (p. 43)

In contrast, Matarazzo (1985) noted that currently available automated interpretation systems are erected upon rather tenuous empirical bases and involve varying degrees of clinical and actuarial data accumulation and interpretation that have considerable potential for harm if used in isolation. These disparate views can be reconciled through Butcher's (1987) assertion that the computerized report should be used "only in conjunction with clinical information obtained from other sources" (p. 167).