Buros-Nebraska Series on Measurement and Testing
Date of this Version
1991
Document Type
Article
Citation
From: The Computer and the Decision-Making Process, edited by Terry B. Gutkin and Steven L. Wise (Hillsdale, New Jersey, Hove & London: Lawrence Erlbaum Associates, 1991).
Abstract
The use of computers to interpret psychological tests is a "hot" topic, both within psychology and without. It is hot in the sense of giving rise to an increasing number of books and articles (e.g., Butcher, 1985, 1987; Eyde, 1987; Krug, 1987). It is hot in the sense of giving rise to an ever-increasing number of business enterprises (compare any recent APA Monitor with an issue from 1981). It is hot in the sense of capturing the attention of the news media (e.g., Petterson, 1983). And it is hot in the sense of giving rise to increasing controversy within psychology itself. In a Science editorial, Matarazzo (1983) expressed concern lest computer-based test interpretations (CBTIs) fall into the hands of unqualified users, his bottom line being: "Until more research establishes that the validity of application of these computer products by a health practitioner is not dependent on the practitioner's experience and training in psychometric science, such automated consultations should be restricted to ... qualified user groups." Matarazzo (1985, 1986) has continued to write in that same vein, causing others to take up the cudgels to defend CBTI (Ben-Porath & Butcher, 1986; Fowler & Butcher, 1986; Murphy, 1987). Lanyon (1984), in his chapter on personality assessment in the Annual Review of Psychology, indicated that he was concerned by the proliferation of CBTI systems: "There is a real danger that the few satisfactory services will be squeezed out by the many unsatisfactory ones, since the consumer professionals are generally unable to discriminate among them ..." and "... lack of demonstrated program validity has now become the norm" (p. 690). Finally, the Subcommittee on Tests and Assessment of the American Psychological Association (APA) Committee on Professional Standards and the APA Committee on Psychological Tests and Assessment have developed standards for the area (American Psychological Association, 1986).
I published an article describing attempts to establish the validity of CBTIs and made some suggestions regarding the shape future attempts might take (Moreland, 1985). The heat generated by the debate over CBTI seems not to have dissipated; however, some light seems to have been shed on the field since I was writing in 1984. In view of all this, a revision and expansion of my earlier efforts seems timely.
SOME HISTORY
The use of machines to process psychological test data is not a recent innovation (Fowler, 1985). A progression from hand-scoring materials through a variety of mechanical and electronic "scoring machines" to the digital computer has freed successive generations of beleaguered secretaries and graduate students from laborious hand scoring of objective tests. The first information concerning scoring machines for the Strong Vocational Interest Blank (SVIB) appeared in 1930 (Campbell, 1971). These initial machines were very cumbersome, involving the use of 1,260 Hollerith cards to score each protocol. In 1946, Elmer Hankes, a Minneapolis engineer, built the analogue computer that was the first automatic scoring and profiling machine for the SVIB (Campbell, 1971). A year later, he adapted the same technology to the scoring of the Minnesota Multiphasic Personality Inventory (MMPI) (Dahlstrom, Welsh, & Dahlstrom, 1972). In the mid-1950s, E. F. Lindquist's Measurement Research Center in Iowa City began to use optical (answer sheet) scanning devices instead of card-based scoring equipment. In 1962, National Computer Systems linked an optical scanner with a digital computer and began scoring both the SVIB and the MMPI (Campbell, 1971; Dahlstrom et al., 1972). Most automated test scoring still employs optical scanning/digital computer technology, and the number and types of tests scored by this method have grown exponentially during the last three decades. Though automated scoring is most easily accomplished for objective tests with a limited number of response alternatives, sophisticated computer programs have also been developed to score the narrative responses elicited by projective techniques (e.g., Gorham, 1967). Prior to the advent of these programs, extensive training, if not professional expertise, was required to score projective tests. Similar programs have also been developed to evaluate other types of complex verbal productions (e.g., Tucker & Rosenberg, 1980).
In addition to keeping nerves from becoming frayed, automated scoring frees psychologists to spend more time on other functions, such as psychotherapy, where computer technology is not so advanced (see, however, Colby, 1980). It also enables more individuals to undergo psychological assessment. Finally, though not completely immune from the slings and arrows of human imperfections (e.g., Fowler & Coyle, 1968; Grayson & Backer, 1972; Weigel & Phillips, 1967), computer scoring appears to be more reliable than that done solely by humans (Greene, 1980, pp. 25-26; Klett, Schaefer, & Plemel, 1985). A computer, once correctly programmed, will apply scoring rules with slavish consistency, whereas fatigue and other human frailties may render the psychologist, graduate student, or secretary inconsistent in the application of even the most objective scoring rules (Kleinmuntz, 1969).
Comments
Copyright © 1991 by Lawrence Erlbaum Associates, Inc. Digital Edition Copyright © 2012 Buros Center for Testing. This book may be downloaded, saved, and printed by an individual for their own use. No part of this book may be re-published, re-posted, or redistributed without written permission of the holder of copyright.