Buros-Nebraska Series on Measurement and Testing



From The Influence of Cognitive Psychology on Testing, edited by Royce R. Ronning, John A. Glover, Jane C. Conoley, and Joseph C. Witt (Hillsdale, NJ: Lawrence Erlbaum Associates, 1987)


Copyright © 1987 Lawrence Erlbaum Associates, Inc. Digital Edition Copyright © 2012 Buros Center for Testing.



Given the demands for higher levels of learning in our schools and the press for education in the skilled trades, the professions, and the sciences, we must develop more powerful and specific methods for assessing achievement. We need forms of assessment that educators can use to improve educational practice and to diagnose individual progress by monitoring the outcomes of learning and training. Compared to the well-developed technology for aptitude measurement and selection testing, however, the measurement of achievement and the diagnosis of learning problems are underdeveloped. This is because the correlational models that support prediction are insufficient for the task of prescribing remediation or other instructional interventions. Tests can predict failure without a theory of what causes success, but intervening to prevent failure and enhance competence requires deeper understanding.

The study of the nature of learning is therefore integral to the assessment of achievement. We must use what we know about the cognitive properties of acquired proficiency and about the structures and processes that develop as a student becomes competent in a domain. We know that learning is not simply a matter of the accretion of subject-matter concepts and procedures; rather, it consists of the organization and restructuring of this information to enable skillful procedures and processes of problem representation and solution. Somehow, tests must be sensitive to how well this structuring has proceeded in the student being tested.

The usual forms of achievement tests are not effective diagnostic aids. In order for tests to become usefully prescriptive, they must identify performance components that facilitate or interfere with current proficiency and the attainment of eventual higher levels of achievement. Curriculum analysis of the content and skill to be learned in a subject matter does not automatically provide information about how students attain competence or about the difficulties they meet in attaining it. An array of subject-matter subtests differing in difficulty is not enough for useful diagnosis. Rather, qualitative indicators of specific properties of performance that influence learning and characterize levels of competence need to be identified.

In order to ascertain the critical differences between successful and unsuccessful student performance, we need to appraise the knowledge structures and cognitive processes that reveal degrees of competence in a field of study. We need a fuller understanding of what to test and how test items relate to target knowledge. In contrast, most current testing technology is post hoc, focused on what to do after test items are constructed: analysis of item difficulty, development of discrimination indices, scaling and norming procedures, and analysis of test dimensions and factorial composition all take place after the item is written. A theory of acquisition and performance is needed before and during item design.