Curriculum-Based Measurement, edited by Jack J. Kramer (Lincoln, NE: Buros Institute of Mental Measurements, 1993).
In the most meaningful use of the term assessment, important decisions are made daily by teachers based on their assessment of information obtained from student responses to curriculum-related materials. These decisions may include assigning extra work or referring a child for learning or behavior problems. The term curriculum-based assessment (CBA) has been used to encompass a wide range of procedures, from these daily informal analyses by teachers to highly structured measurement systems used in special education. Although well-constructed guides exist for some sets of curriculum-based decisions (e.g., Shinn, 1989), there is inadequate empirical research to assist our understanding of how, or how well, most of these decisions are made.
Recently, attempts have been made to formalize the use of measures of student academic performance, especially in decisions about special education eligibility for students who seriously fail to meet classroom expectations (i.e., Tindal, 1988; Shinn, 1989). At least one type of CBA developed for special education systems, called curriculum-based measurement (CBM), has been the subject of extensive evaluation research (see Tindal, 1988, for a comprehensive review) and of interest on the part of special services personnel such as school psychologists (e.g., Shapiro, 1990) and special educators (e.g., Tucker, 1985). Yet, as interest has grown, many questions have arisen about what we know about CBA and, we think more importantly, about how we know what we know!
With this paper we have set modest goals. We suggest that curriculum-based assessment fits best within a behavioral model of measurement, and we examine that assumption. The discussion of the behavioral assessment model provides a foundation for our review of curriculum-based assessment (CBA) and the manner in which CBA has been developed and used. The approach taken herein is to some degree critical, based on our analysis that many questions remain unanswered: questions about the nature of curriculum-based measures themselves and about the manner in which the emerging CBA technology has been and will be applied. However, we wish to emphasize strongly our belief that CBA has already had a positive influence on educational practice, especially on our understanding of how to help teachers make better decisions in order to enhance academic achievement (see, for example, Fuchs, this volume), and has exerted an equally important heuristic influence on the field of educational measurement.
We think CBA potentially has much more to offer in improving measurement within the assessment of school-based problems. Our analysis suggests that CBA is best understood not as a monolithic assessment procedure, but as a source of data to be considered along with other sources in a comprehensive analysis of academic skills and learning environments. Because of this, CBA must be evaluated as part of, not separate from, the entire evaluation process. To date this has rarely been accomplished (see Lentz, 1988, for an exception). We will argue that the choice of specific procedures (e.g., CBA, standardized intelligence or achievement tests, event sampling) to be used during an assessment should flow from an understanding both of the general assessment model to be followed and of the specific assessment questions to be answered for a particular child. In this regard we are particularly interested in the use of CBA data within intervention assistance programs for at-risk students.
There appear to be many questions about the manner in which CBA procedures should and will be implemented in classroom settings. Specifically, we are concerned about the manner in which CBA will be adopted by school psychologists and the entire educational establishment. For example, we foresee a number of problems with piecemeal adoption of structured CBA procedures by a portion of special services staff (e.g., school psychologists but not special education teachers, or vice versa). We fear that in the absence of a clear assessment model or evaluation goals, CBA may be used in a manner that diverts attention from other environmental factors (e.g., instructional variables) that may contribute to academic success or failure. For example, if evaluators focus prime attention on CBA data during decision making for intervention planning, problems may arise from an overemphasis on student skill or fluency deficits at the expense of examining mismatches between students' performance and the instructional environment. Publications describing CBM use seem to continue to address special education placement issues (and subsequent IEP development or monitoring) and to deemphasize intervention assistance prior to placement (e.g., Marston & Magnusson, 1988).