Using the CSEM to Compare Scale Scores and Performance Levels

On any test, an individual student's score would be expected to vary if it were somehow possible to administer the same test over and over again. For example, a student's performance may vary because of how the student is feeling on the day of the test, or because the student is especially lucky or unlucky when guessing on items the student does not know. This random variation in individual scores is quantified through a statistic of measurement precision called the conditional standard error of measurement (CSEM). CSEMs are available in CERS and the student data files.

Given a single observed score for a student, it can be assumed that if the student were to take the test repeatedly, the resulting scores would fall within plus or minus one CSEM of the observed score about 68 percent of the time. This idea is expressed as follows:

“A student’s score is best interpreted when recognizing that the student’s knowledge and skills fall within a score range and not just a precise number. For example, 1530 (±15) indicates a score range between 1515 and 1545.”
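To make the arithmetic behind this example explicit, the following is a minimal sketch of how a score range could be computed from an observed scale score and its CSEM. The function name `score_band` is hypothetical and is not part of CERS or the student data files; the 68 percent and 95 percent coverage figures assume approximately normally distributed measurement error.

```python
def score_band(scale_score: float, csem: float, n_csem: int = 1) -> tuple[float, float]:
    """Return the (low, high) range spanning +/- n_csem CSEMs around a score.

    One CSEM corresponds to roughly 68 percent coverage; two CSEMs
    correspond to roughly 95 percent, assuming normal measurement error.
    """
    half_width = n_csem * csem
    return scale_score - half_width, scale_score + half_width

# Example from the text: a scale score of 1530 with a CSEM of 15.
low, high = score_band(1530, 15)
print(f"68% range: {low:.0f} to {high:.0f}")  # 68% range: 1515 to 1545
```

Widening the band to two CSEMs (here, 1500 to 1560) would express the same idea at a higher level of confidence.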

For the Alternate ELPAC, a CSEM is calculated from a student's overall score. For the Summative ELPAC, CSEMs were reported for a student's composite scores but not for the overall score. In the current reports, a CSEM is provided at each scale score point.