About the NAEP Reading Assessment
The National Assessment of Educational Progress (NAEP) is a congressionally mandated project administered by the National Center for Education Statistics (NCES) within the U.S. Department of Education. It is the largest continuing and nationally representative assessment of what our nation's students know and can do in select subjects. The NAEP reading assessment uses literary and informational texts to measure students' reading comprehension skills. Students read grade-appropriate passages and answer questions based on what they have read. Results at grades 4 and 8 are reported for the nation overall, for states and jurisdictions, and for districts participating in the Trial Urban District Assessment (TUDA); results for grade 12 are reported for the nation only.
The 2022 NAEP reading assessments at grades 4 and 8 were administered as digitally based assessments. Read more about the NAEP Digitally Based Reading Assessment.
NAEP Survey Questionnaires
As part of the NAEP reading assessment, survey questionnaires are given to students, teachers, and school administrators at grades 4 and 8 and to students and school administrators only at grade 12. These questionnaires collect contextual information to provide a better understanding of educational experiences and factors that are related to students' learning both in and outside of the classroom, and to allow for meaningful student group comparisons. Learn more about NAEP survey questionnaires.
In addition to collecting contextual information on general and subject-specific topics typically covered in the NAEP student, teacher, and school questionnaires, survey questions were administered in 2022 to collect information from students in grades 4 and 8 about their remote learning experiences during the 2020–21 school year. Information was also collected from school administrators and teachers of fourth- and eighth-graders about how they responded to academic disruptions arising from the COVID-19 pandemic during the 2020–21 and 2021–22 school years.
The highlighted findings in this report demonstrate the range of information available from the NAEP reading survey questionnaires. They do not provide a complete picture of students' learning experiences inside and outside of school. The NAEP reading student, teacher, and school questionnaire data can be explored further using the NAEP Data Explorer.
Explore the 2022 NAEP reading survey questionnaires administered to fourth- and eighth-grade students (grade 4 and grade 8), teachers (grade 4 and grade 8), and school administrators (grade 4 and grade 8). Explore the 2019 NAEP reading survey questionnaires administered to twelfth-grade students and school administrators.
NAEP survey questionnaire responses provide additional information for understanding NAEP performance results. Although comparisons of students' performance are made based on student, teacher, and school characteristics and educational experiences, these results cannot be used to establish a cause-and-effect relationship between the characteristics or experiences and student achievement. NAEP is not designed to identify the causes of performance differences, so results must be interpreted with caution. Many factors may influence average student achievement, including local educational policies and practices, teacher quality, available resources, and the demographic characteristics of the student body. Such factors may change over time and vary among student groups.
Development of NAEP Survey Questionnaire Indices
While some survey questions are analyzed and reported individually (for example, the number of books in students' homes), sets of questions on the same topic are combined into an index measuring a single underlying construct or concept.
The creation of NAEP survey questionnaire indices involved the following four main steps:
- Selection of constructs of interest. The selection of constructs of interest to be measured through the survey questionnaires was guided in part by the National Assessment Governing Board framework for collection and reporting of contextual information. In addition, NCES reviewed relevant literature on key contextual factors linked to student achievement in reading to identify the types of survey questions and constructs needed to examine these factors in the NAEP assessment.
- Question development. Survey questions were drafted, reviewed, and revised. Throughout the development process, the survey questions were reviewed by external advisory groups that included survey experts, subject-area experts, teachers, educational researchers, and statisticians. As noted above, some questions were drafted and revised with the intent of analyzing and reporting them individually; others were drafted and revised with the intent of combining them into indices measuring constructs of interest.
- Evaluation of questions. New and revised survey questions underwent pretesting, whereby a small sample of participants (students, teachers, and school administrators) were interviewed to identify potential issues with their understanding of the questions and their ability to provide reliable and valid responses. Some questions were dropped or further revised based on the pretesting results. The questions were then pretested among a larger group of participants, and their responses were analyzed. The overall distribution of responses was examined to evaluate whether participants were answering the questions as expected, and relationships between survey responses and student performance were also examined. A method known as factor analysis was used to examine the empirical relationships among questions to be included in the indices measuring constructs of interest. Factor analysis can show, based on relationships among responses to the questions, how strongly the questions "group together" as a measure of the same construct (a minimal illustration appears in the first sketch after this list). Convergent and discriminant validity of the construct with respect to other constructs of interest were also examined: if the construct of interest had the expected pattern of relationships and non-relationships, the construct validity of the factor as representing the intended index was supported.
- Index scoring. Using the item response theory (IRT) partial credit scaling model, index scores were estimated from students' responses and transformed onto a scale ranging from 0 to 20. As a reporting aid, each index scale was divided into low, moderate, and high index score categories. The cut points for the index score categories were determined based on the average response to the set of survey questions in each index. In general, high average responses to individual questions correspond to high index score values, and low average responses correspond to low index score values. For example, for a set of index survey questions with five response categories (such as not at all, a little bit, somewhat, quite a bit, and very much), students with an average response of less than 3 (somewhat) would be classified as low on the index; students with an average response of at least 3 (somewhat) but less than 4 (quite a bit) would be classified as moderate; and students with an average response of 4 (quite a bit) or higher would be classified as high on the index (this rule is worked through in the second sketch after this list).
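To make the factor analysis step concrete, here is a minimal sketch in Python of fitting a one-factor model to simulated survey responses. This is an illustration only, not NAEP's actual analysis pipeline; the simulated data, the five-question design, and the use of scikit-learn's FactorAnalysis are all assumptions made for the example.

```python
# A minimal sketch (not NAEP's pipeline) of checking whether a set of survey
# questions "group together" as a measure of one underlying construct, by
# fitting a one-factor model. All data below are simulated for illustration.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 5 questions driven by a single latent construct: each question's
# value is the latent score scaled by a loading, plus question-specific noise.
# (These are continuous stand-ins for the discrete survey responses.)
n_students = 1000
latent = rng.normal(size=n_students)                 # the underlying construct
loadings_true = np.array([0.8, 0.7, 0.9, 0.6, 0.75])
noise = rng.normal(scale=0.5, size=(n_students, 5))
responses = latent[:, None] * loadings_true + noise

# Fit a one-factor model and inspect the estimated loadings.
fa = FactorAnalysis(n_components=1)
fa.fit(responses)

# Large loadings of similar sign suggest the questions measure the same
# construct; a question with a near-zero loading would not belong in the index.
print("estimated loadings:", fa.components_.ravel().round(2))
```

A question whose loading stands apart from the others would be a candidate for revision or removal, which parallels how pretesting results informed question selection in the step above.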
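And here is a small sketch of the cut-point classification described in the index scoring example. The actual NAEP index scores are estimated with the IRT partial credit model; this snippet only mirrors the average-response rule used to define the low, moderate, and high categories, with hypothetical response vectors.

```python
# Classify a student as low/moderate/high on an index from the average of
# their responses to its questions, coded 1 ("not at all") to 5 ("very much").
# This mirrors only the cut-point rule described in the text, not IRT scaling.
def classify_index(responses: list[int]) -> str:
    avg = sum(responses) / len(responses)
    if avg < 3:      # below "somewhat"
        return "low"
    elif avg < 4:    # "somewhat" up to (but not including) "quite a bit"
        return "moderate"
    else:            # "quite a bit" or higher
        return "high"

# Hypothetical response vectors for a five-question index:
print(classify_index([2, 3, 2, 3, 2]))  # average 2.4 -> low
print(classify_index([3, 4, 3, 3, 4]))  # average 3.4 -> moderate
print(classify_index([4, 5, 4, 4, 5]))  # average 4.4 -> high
```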