Assessing What Students Can Do, Not Just What They Know


The Common Core State Standards, adopted by almost all of the states, stress that students should not only gain knowledge and skills but also be able to use what they've learned to solve complex, unfamiliar problems in creative ways. That's what's needed for success in college as well as in a globally competitive job market. The Common Core assessments now being developed by two state-led consortia will also focus on those skills. To make sure their students are prepared to do well, states are calling for the development of interim and formative assessments that give teachers information about how their students are progressing.

That has led to a surge in demand for performance assessment, which can take many forms. Conducting an experiment in a science lab is a type of performance assessment. So is a dance performance or a violin recital. Performance assessments can include the familiar, such as essays, as well as the unfamiliar, such as games and digital simulations. Researchers, test publishers, educators, and policymakers variously refer to them as tasks, events, activities, demonstrations, or exhibitions.

Proponents say such assessments make it possible to measure knowledge and skills that are difficult to capture with more traditional test question formats. They may require test-takers to engage in activities that closely resemble what they will be expected to do in college and on the job. They are also inherently more appealing than typical test questions, and the process of completing them can, in and of itself, be educational. A series of studies examining the impact of Maryland's MSPAP program on students and teachers suggests that introducing performance assessment can lead teachers to develop higher performance expectations for their students and to change their instructional practices accordingly. Furthermore, skillful use of performance assessments can increase student achievement (Lane, Parke, & Stone, 2002). If the goal is to find out how well students write, then a test should require students to write, and the quality of that writing ought to be judged in a fair, rigorous, and standardized way. However, before performance assessment systems can be built, it is first necessary to define what they are and are not, understand their many forms, and identify current examples of successful approaches.

Those are the reasons Pearson's Center for NextGen Learning & Assessment developed the Framework of Approaches to Performance Assessment, an interactive resource that invites users to explore various types of assessments and how they differ. For example, in the Framework we defined a performance task as a collection of discrete test items organized around a theme or culminating activity. Performance tasks typically involve having students use a variety of materials, such as literary or technical texts, graphs, charts, photographs, or video clips, as resources for completing an analytical task. Tasks tend to be set in engaging contexts, focus on valued learning outcomes, and align to multiple standards, supporting measurement of more than one type of knowledge, skill, or process.

Depending on what the task is designed to assess, test-takers may be asked to respond orally, produce a written response, create a graph or drawing, or some combination of the three. When digital technology is involved, test-takers may respond by clicking, dragging, rotating, or otherwise manipulating the image on the screen.

Performance tasks are not scored simply as right or wrong. Instead, the quality of a response is evaluated against a rubric: a specific statement characterizing the attributes of the performance. For example, an extended written response could be evaluated on what the student wrote (knowledge of a specific history topic, for example) as well as how it was written (grammar, punctuation, clarity, style). Depending on the types of items or activities a task includes, responses may be machine-scored, human-scored, or scored by some combination of the two.
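To make that scoring model concrete, here is a minimal, purely illustrative sketch of how an analytic rubric's per-criterion ratings might be combined into a single score. It is not part of the Framework, and the criteria, weights, and point scales are all hypothetical:

```python
# Illustrative analytic rubric: each criterion has a weight and a
# maximum point value a rater can award.
RUBRIC = {
    "content": {"weight": 2, "max_points": 4},       # knowledge of the topic
    "organization": {"weight": 1, "max_points": 4},  # clarity and structure
    "conventions": {"weight": 1, "max_points": 4},   # grammar, punctuation
}

def weighted_score(ratings):
    """Combine per-criterion ratings (0..max_points) into a 0-100 scale."""
    earned = sum(RUBRIC[c]["weight"] * pts for c, pts in ratings.items())
    possible = sum(v["weight"] * v["max_points"] for v in RUBRIC.values())
    return round(100 * earned / possible, 1)

# A rater's judgments for one extended written response:
print(weighted_score({"content": 3, "organization": 4, "conventions": 2}))  # 75.0
```

In practice, a rubric says much more than a weight table, but the sketch shows why rubric design matters: the weights encode a judgment about which attributes of the performance count most.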

We think the Framework serves a number of purposes.

  • For researchers, it establishes definitions and a common language that will standardize how they refer to performance assessment.
  • Policymakers can use the Framework to increase their understanding of the diversity of performance assessment approaches, as well as the trade-offs in cost and efficiency associated with particular approaches.
  • The Framework can help teachers design and use performance assessments in their classrooms.
  • The Framework can also help students and parents become more familiar, and thus more comfortable, with the new types of assessments they'll be encountering in the future.

What do you think of the Framework? What do you think about performance assessment? What have your experiences been like, as students, parents, teachers, or policymakers?



Lane, S., Parke, C. S., & Stone, C. A. (2002). The impact of a state performance-based assessment and accountability program on mathematics instruction and student learning: Evidence from survey data and school performance. Educational Assessment, 8(4), 279-315.


Further Recommended Reading

Darling-Hammond, L., & Adamson, F. (2010). Beyond basic skills: The role of performance assessment in achieving 21st century standards of learning. Stanford, CA: Stanford University, Stanford Center for Opportunity Policy in Education.


Parke, C. S., Lane, S., & Stone, C. A. (2006). Impact of a state performance assessment program in reading and writing. Educational Research and Evaluation, 12(3), 239-269.

Stone, C. A., & Lane, S. (2003). Consequences of a state accountability program: Examining relationships between school performance gains and teacher, student, and school variables. Applied Measurement in Education, 16(1), 1-26.

Stecher, B. (2010). Performance assessment in an era of standards-based educational accountability. Stanford, CA: Stanford University, Stanford Center for Opportunity Policy in Education.

Emily Lai

About Emily Lai

Emily Lai is the Director of the Center for Product Design Research & Efficacy. Previously, Emily was a Research Scientist in Pearson's Center for NextGen Learning & Assessment. In that capacity, she oversaw design and development of curriculum-embedded performance tasks for Pearson Forward, a K-5 digital learning solution. She also led development of Pearson's Framework of Approaches to Performance Assessment. Emily is an expert in principled assessment design approaches, including Pearson's Principled Design for Efficacy (PDE). In addition, she conducts research and presents on assessment of 21st century competencies. Her most recent research includes leading Baltimore County Public School educators in applying PDE to design performance tasks for formative use and co-developing a learning progression and associated performance assessments for geometric measurement of area. Previously, Emily was a program evaluator for five years at the University of Iowa's Center for Evaluation and Assessment. Emily holds a Ph.D. in Educational Measurement & Statistics from the University of Iowa, a Master's in Library and Information Science from the University of Iowa, and a Master's in Political Science from Emory University. Follow her on Twitter: @EmilyRLai
This entry was posted in NextGen Learning & Assessment.
