About the NAEP Reading Assessment

The National Assessment of Educational Progress (NAEP) reading assessment uses literary and informational texts to measure students' reading comprehension skills. Students read grade-appropriate passages and answer questions based on what they have read. Performance results are reported for the nation overall, for states and jurisdictions, and for 27 districts participating in the Trial Urban District Assessment (TUDA). In 2017, the NAEP reading assessment transitioned from a paper-based assessment (PBA) to a digitally based assessment (DBA) at grades 4 and 8. A multistep process was used for the transition, designed to preserve the trend lines that show student performance over time. The process involved administering the assessment in both the DBA and PBA formats to randomly equivalent groups of students in 2017. Thus, the results from the 2017 reading assessment can be compared with results from previous years.

NAEP 2017 Digitally Based Reading Assessment

The NAEP 2017 digitally based reading assessment was designed to continue reporting trends in student performance dating back to 1992 while keeping pace with the new generation of classroom environments, in which digital technology has become an increasingly integral part of students' learning. The 2017 assessment content was developed from the same reading framework used to develop the 2015 paper-based assessment.

Most of the content administered in the 2017 digitally based reading assessment was also used in the 2015 paper-based assessment. The previously used passages and questions were adapted to fit a tablet screen; while the presentation of the content changed, the content itself did not. Of the 19 passages and question sets administered across the two grades assessed, one set at each grade was newly developed for 2017. The newly developed questions were also based on the NAEP reading framework, which has guided assessment development since the 2009 assessment.


The assessment was administered on tablet computers supplied by NAEP over a secure, local NAEP network. This allowed NAEP administrators to create a stable administration environment by bringing in their own equipment, unaffected by school-based equipment or school internet connectivity, thereby maintaining consistency across the assessed schools. Students were able to interact with the tablets via touchscreen, with an attached keyboard, or with a stylus provided by NAEP. The digitally based reading assessment provided students with online tools, such as look-back buttons that returned them to the passage and a highlighter to mark information in the passage.

At the beginning of the assessment session, students viewed an interactive tutorial that provided all the information needed to take the assessment on a tablet; for example, it explained how to navigate between the reading text and the questions, how to progress through questions, and how to indicate answers for multiple-choice questions. The interactive nature of the tutorial allowed students to familiarize themselves with the digital delivery system before beginning the actual assessment.

In addition to the digitally based assessment, a random subsample of students was administered the complete 2015 paper-based version of the assessment in 2017. NAEP administered the assessment in both modes, paper based and digitally based, in all the sampled schools to investigate potential differences in performance between students taking the assessment on a tablet and students taking the paper-based assessment. In schools with fewer than 21 students, however, all students were assigned to either the digitally based or the paper-based assessment. Each participating student took the assessment in only one mode.
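The mode comparison described above depends on random assignment within each sampled school. As a rough illustration of that design, the Python sketch below randomly splits each school's roster between the two modes and assigns small schools to a single mode, following the fewer-than-21-students rule noted above. The function name, the seed, and the share of students routed to the paper-based mode are illustrative assumptions; the report does not state the actual subsample rate.

    import random

    def assign_modes(school_rosters, pba_share=0.25, min_split_size=21, seed=1):
        """Randomly assign each sampled student to one assessment mode.

        school_rosters maps a school ID to its list of sampled student IDs.
        pba_share is the fraction routed to the paper-based assessment
        (an assumed value, for illustration only).
        """
        rng = random.Random(seed)
        assignments = {}
        for school, students in school_rosters.items():
            if len(students) < min_split_size:
                # Small school: all students take the same mode.
                mode = "PBA" if rng.random() < pba_share else "DBA"
                for s in students:
                    assignments[s] = mode
            else:
                # Larger school: shuffle the roster so the two mode
                # groups are randomly equivalent.
                shuffled = list(students)
                rng.shuffle(shuffled)
                n_pba = round(len(shuffled) * pba_share)
                for s in shuffled[:n_pba]:
                    assignments[s] = "PBA"
                for s in shuffled[n_pba:]:
                    assignments[s] = "DBA"
        return assignments

Because assignment is random within each school, the two mode groups are expected to be equivalent in their characteristics, which is the property that the common population linking described below relies on.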

After the administration of the assessment, the National Center for Education Statistics (NCES) conducted rigorous analyses of the data and aligned the 2017 results to previous assessment years using a two-step process.

  • First, common item linking was used to calculate the trend line from 2015 to 2017 based on the paper-based assessment results. This kind of linking was possible because the majority of the 2017 assessment questions had also been administered in 2015 and showed the same statistical properties.
  • Second, common population linking was used to align the 2017 paper-based assessment results with the 2017 digitally based assessment results. This kind of linking was possible because the samples of students for each assessment mode were randomly equivalent; that is, the random samples were drawn from the same schools, ensuring that the students' educational experiences and characteristics were equivalent. (A simplified sketch of this step follows the list.)
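To make the second step concrete, common population linking can be illustrated as a linear transformation that places the digital-mode scores on the paper-mode scale by matching the mean and standard deviation of the two randomly equivalent groups. The Python sketch below is a minimal mean-sigma linking under simplifying assumptions: it operates on plain lists of scores, whereas NAEP's operational analyses work with sampling weights and plausible values; the function name and the example score lists are hypothetical.

    import statistics

    def link_to_paper_scale(digital_scores, paper_scores):
        """Place digital-mode scores on the paper-mode scale.

        Mean-sigma linking: choose slope a and intercept b so that the
        linked digital scores have the same mean and standard deviation
        as the paper-based scores from the randomly equivalent group.
        """
        a = statistics.pstdev(paper_scores) / statistics.pstdev(digital_scores)
        b = statistics.mean(paper_scores) - a * statistics.mean(digital_scores)
        return [a * x + b for x in digital_scores], (a, b)

    # Hypothetical score lists for the two randomly equivalent groups.
    linked, (a, b) = link_to_paper_scale(
        digital_scores=[208, 221, 239, 254, 262],
        paper_scores=[214, 226, 241, 257, 266],
    )
    # `linked` now has the same mean and standard deviation as
    # `paper_scores`.

Because the two mode groups are randomly equivalent, matching the score distributions in this way attributes any systematic difference between them to the assessment mode rather than to the students, which is what allows the digital results to be reported on the established paper-based scale.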

Once the common population linking had aligned the digital results to the paper results at the national level, the analyses evaluated whether the linking allowed for fair and meaningful comparisons for national student groups as well as for states and districts. These evaluations supported making trend comparisons between the digital assessment and previous paper-based assessments for student groups, states, and districts.

These analyses—common item linking based on paper results and common population linking of paper results to digital results—enabled NCES to successfully maintain the reading trend line while transitioning to digital assessment. The 2017 reading assessment results in this report are based on the performance of students who took the assessment on tablets.

Technical documentation on the transition to the 2017 digitally based reading assessment is forthcoming.