The #1 Curriculum and Assessment Partner for Educators

[Assessment Literacy Video Series] Cut Scores and PLDs Explained

Our assessment literacy video series aims to demystify, unpack, and connect assessment concepts and principles to help you make more sense out of your assessment data. Maybe you’re just learning the ropes of some of the more complicated metrics reported in educational assessments, or perhaps you’re hoping to see how an assessment concept applies to Edmentum’s suite of assessment programs. Either way, let our top-notch research team of former educators and subject-matter experts be your guide.

Have you ever wondered how the labels on summative assessment score reports, such as proficient, advanced, meeting expectations, or exceeding expectations, came to be? These are all examples of performance level descriptors. Performance levels are used for interpreting student scores and are central to high-stakes accountability, which tracks the percentage of students in each performance level. But, with such important decisions tied to these metrics, such as how schools are evaluated or whether students should be promoted to the next grade, you might wonder how those performance level classifications are made. This assessment literacy video will break down two important concepts that will help you understand these classifications: performance level descriptors and cut scores.

What are performance level descriptors (PLDs)?

Classifying students into performance levels begins with writing performance level descriptors, or PLDs. PLDs, sometimes called achievement level descriptors, describe what students in a particular performance level are expected to know and be able to do and are often phrased in relation to grade-level standards. For example, a PLD for the basic performance level might say that a student has not met grade-level standards, whereas a PLD for advanced might say that a student has exceeded grade-level standards. Sometimes, the PLDs provide details about the skills within standards that students should know. 

How do PLDs relate to cut scores?

Once performance level descriptors are defined, testing programs specify the locations on the test's scale that differentiate one performance level from the next. These are called cut scores; you can think of that term as cutting the scale into different performance levels. Cut scores may be expressed on the scale score metric or the raw score metric (that is, the number of questions answered correctly), depending on how the test is scored. The concept of cut scores appears in many other fields, even if it goes by a different name. For example, lenders set credit score ranges that determine whether a credit score is considered poor, fair, good, or excellent.
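The idea of cutting a scale into performance levels can be sketched in a few lines of Python. The cut scores and level names below are purely hypothetical illustrations, not values used by any actual testing program:

```python
# Hypothetical cut scores dividing a 100-point scale into four levels.
# Real testing programs set these boundaries through standard-setting studies.
CUT_SCORES = [40, 60, 80]  # boundaries between adjacent performance levels
LEVELS = ["below basic", "basic", "proficient", "advanced"]

def classify(score):
    """Return the performance level label for a given scale score."""
    for cut, level in zip(CUT_SCORES, LEVELS):
        if score < cut:
            return level
    return LEVELS[-1]  # at or above the highest cut score

print(classify(72))  # proficient
print(classify(80))  # advanced (a score at the cut enters the higher level)
```

Note that three cut scores are enough to define four levels, which matches the pattern described later for states with four performance levels.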

How are cut scores set?

So, how are cut scores set? There are a number of ways to decide exactly where on the scale each boundary between performance levels should fall. Sometimes, a panel of experts, such as teachers and content specialists, is convened to collaboratively make judgments about where cut scores should be placed, considering the expectations specified in the performance level descriptors and closely reviewing the items on the test. In other cases, real student data may drive cut score placement by showing what percentage of students would fall into each performance level under candidate cut scores. Often, cut scores are set using both data and expert judgment.

How are PLDs and cut scores used to drive classroom instruction?

Cut scores can tell you just how far away a student is from a higher or lower performance level. For example, if a student is classified as proficient on a midyear benchmark test but scores very close to the cut score distinguishing proficient from advanced, that student may reach the advanced category with just a little bit of growth by the end of the year. Cut scores can also be used to set goals with students because you can see just how much a student must improve to reach the next performance level.
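Goal setting with cut scores amounts to simple subtraction: the distance to the next level is the gap between the student's score and the next cut score above it. A minimal sketch, using the same hypothetical cut scores as before:

```python
def points_to_next_level(score, cut_scores):
    """Return how many points a student needs to reach the next
    performance level, or None if already in the top level."""
    for cut in sorted(cut_scores):
        if score < cut:
            return cut - score
    return None

# A student scoring 76 on a scale with hypothetical cuts at 40, 60,
# and 80 needs 4 more points to move from proficient to advanced.
print(points_to_next_level(76, [40, 60, 80]))  # 4
```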

Keep in mind that there is always some variability around a score. This variability is quantified by the standard error of measurement, which provides a level of confidence around a score. Regardless, it is typical for students to be classified into performance levels based on their single score.
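The standard error of measurement (SEM) can be turned into a rough confidence band around an observed score. This sketch assumes normally distributed measurement error and uses an illustrative SEM value, not one from any real assessment:

```python
def score_band(score, sem, z=1.96):
    """Return an approximate 95% confidence band around an observed
    score, assuming normally distributed measurement error."""
    return (score - z * sem, score + z * sem)

# With an illustrative SEM of 3 points, an observed score of 78 could
# plausibly fall anywhere from about 72 to 84 -- a range that may
# straddle a cut score, which is why single-score classifications
# near a cut should be interpreted with care.
low, high = score_band(78, 3)
print(round(low, 1), round(high, 1))  # 72.1 83.9
```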

How does this concept connect back to Edmentum programs?

Study Island’s state-specific Benchmark Assessments are built from the state assessment blueprints, and the performance levels reported with Study Island Benchmark scores reflect those used in that state. Many states have four levels—often called below basic, basic, proficient, and advanced. In those states, there are three different cut scores for each benchmark assessment for each grade and content area. Edmentum research scientists analyze data from the Study Island Benchmarks to set these cut scores using various methods.

Now, when you review your Study Island Benchmark or state assessment score reports, you’ll have a better understanding of how those performance levels were determined!

Interested in more assessment literacy topics? Check out our Edmentum Assessment Literacy video series, and continue to follow along on the blog as we dig deeper, making you assessment experts along the way! Want to learn more about Study Island? Get more information about our award-winning program on our website.

Dr. Audra Kosh began her career in education as an eighth-grade math teacher. After transitioning out of the classroom to pursue her passion for research, Audra completed a Ph.D. in Learning Sciences with a focus on educational measurement and mathematics education at the University of North Carolina at Chapel Hill. Now working as a Research Scientist at Edmentum, Audra does psychometric analyses and assessment research for Edmentum’s suite of assessments.