About the NAEP U.S. History Assessment

The National Assessment of Educational Progress (NAEP) in U.S. history is designed to measure students' knowledge of American history in the context of change and continuity in democracy, culture and society, technological and economic changes, and America's changing world role. Students answer a series of selected-response and open-ended questions based on these areas (or themes) in American history. Performance results are reported for the nation overall and disaggregated by various student characteristics.

In 2018, the NAEP U.S. history assessment transitioned from a paper-based assessment (PBA) to a digitally based assessment (DBA) at grade 8. The transition followed a multistep process that involved administering the assessment in both formats to randomly equivalent groups of students in 2018. It was designed and implemented to preserve the trend lines that track student performance over time; thus, the results from the 2018 U.S. history assessment can be compared to results from previous years.

Survey Questionnaires

As part of the 2018 eighth-grade NAEP U.S. history assessment, students, teachers, and school administrators answered survey questionnaires. These questionnaires collect contextual information to provide a better understanding of the educational experiences and factors related to students' learning both in and outside of the classroom, and to allow for meaningful comparisons among student groups. Learn more about NAEP survey questionnaires.

The highlighted findings in this report demonstrate the range of information available from the 2018 NAEP U.S. history survey questionnaires; they do not provide a complete picture of students' learning experiences inside and outside of school. The 2018 NAEP U.S. history student, teacher, and school questionnaire data can be explored further using the NAEP Data Explorer.

NAEP survey questionnaire responses provide additional information for understanding NAEP performance results. Although students' performance is compared across student, teacher, and school characteristics and educational experiences, these results cannot be used to establish a cause-and-effect relationship between those characteristics or experiences and student achievement. NAEP is not designed to identify the causes of performance differences, so results must be interpreted with caution. Many factors may influence average student achievement, including local educational policies and practices, the quality of teachers, available resources, and the demographic characteristics of the student body. Such factors may change over time and vary among student groups.

Development of NAEP Survey Questionnaire Indices

While some survey questions are analyzed and reported individually (for example, the number of books in students' homes), several questions on the same topic can sometimes be combined into an index measuring a single underlying construct or concept. The creation of the 2018 indices involved the following four main steps:

  1. Selection of constructs of interest. The choice of constructs to be measured through the survey questionnaires was guided in part by the National Assessment Governing Board framework for the collection and reporting of contextual information. In addition, NCES reviewed relevant literature on key contextual factors linked to student achievement in U.S. history to identify the types of survey questions and constructs needed to examine these factors in the NAEP assessment.
  2. Question development. Survey questions were drafted, reviewed, and revised. Throughout the development process, the survey questions were reviewed by external advisory groups that included survey experts, subject-area experts, teachers, educational researchers, and statisticians. As noted above, some questions were drafted and revised with the intent of analyzing and reporting them individually; others were drafted and revised with the intent of combining them into indices measuring constructs of interest.
  3. Evaluation of questions. New and revised survey questions underwent pretesting, in which a small sample of participants (students, teachers, and school administrators) was interviewed to identify potential issues with their understanding of the questions and their ability to provide reliable and valid responses. Some questions were dropped or further revised based on the pretesting results. The questions were then pretested among a larger group of participants, and the responses were analyzed. The overall distribution of responses was examined to evaluate whether participants were answering the questions as expected. Relationships between survey responses and student performance were also examined. A method known as factor analysis was used to examine the empirical relationships among questions to be included in the indices measuring constructs of interest. Factor analysis can show, based on relationships among responses to the questions, how strongly the questions “group together” as a measure of the same construct (an illustrative sketch of this kind of check appears after this list). Convergent and discriminant validity of the construct with respect to other constructs of interest was also examined. If the construct of interest had the expected pattern of relationships and non-relationships, the construct validity of the factor as representing the intended index was supported.
  4. Index scoring. Using the item response theory (IRT) partial credit scaling model (its general form is shown after this list), index scores were estimated from students' responses and transformed onto a scale ranging from 0 to 20. As a reporting aid, each index scale was divided into low, moderate, and high index score categories. The cut points for the index score categories were determined based on the average response to the set of survey questions in each index. In general, high average responses to individual questions correspond to high index score values, and low average responses to individual questions correspond to low index score values. As an example, for a set of index survey questions with five response categories (such as not at all, a little bit, somewhat, quite a bit, and very much), students with an average response of less than 3 (somewhat) would be classified as low on the index. Students with an average response greater than or equal to 3 (somewhat) but less than 4 (quite a bit) would be classified as moderate on the index. Finally, students with an average response greater than or equal to 4 (quite a bit) would be classified as high on the index (a short sketch of this classification rule also appears below).
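
To make the “group together” idea in step 3 concrete, the following is a minimal, hypothetical sketch of an exploratory factor analysis check on simulated survey responses. The data, loadings, and library choice are illustrative assumptions, not NAEP's actual analysis pipeline.

```python
# Hypothetical sketch of the "group together" check from step 3, using
# exploratory factor analysis on simulated five-question survey data.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 500 students: one latent construct drives all five responses,
# plus independent noise.
latent = rng.normal(size=(500, 1))
true_loadings = np.array([[0.8, 0.7, 0.9, 0.6, 0.75]])
responses = latent @ true_loadings + rng.normal(scale=0.5, size=(500, 5))

# Fit a one-factor model and inspect the estimated loadings.
fa = FactorAnalysis(n_components=1, random_state=0).fit(responses)

# Uniformly strong loadings support treating the five questions as a
# measure of a single underlying construct.
print("Estimated loadings:", fa.components_.round(2))
```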
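
For readers unfamiliar with the partial credit model named in step 4, its general form (Masters, 1982) gives the probability that student $i$ responds in category $k$ (of $0, 1, \ldots, m_j$) on question $j$, given the student's construct level $\theta_i$ and the question's step parameters $\delta_{jv}$. This is the textbook form of the model, not a formula taken from NAEP's technical documentation:

$$
P(X_{ij} = k \mid \theta_i) = \frac{\exp\left(\sum_{v=1}^{k} (\theta_i - \delta_{jv})\right)}{\sum_{h=0}^{m_j} \exp\left(\sum_{v=1}^{h} (\theta_i - \delta_{jv})\right)}, \qquad \text{with } \sum_{v=1}^{0} (\theta_i - \delta_{jv}) \equiv 0.
$$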
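
The cut-point rule in step 4 can also be expressed directly in code. Below is a minimal sketch assuming the five response categories are coded 1 through 5 (not at all = 1 up to very much = 5); the function name and coding are illustrative, not NAEP's operational implementation.

```python
# Minimal sketch of the low/moderate/high cut-point rule from step 4,
# assuming five response categories coded 1-5 (not at all = 1, a little
# bit = 2, somewhat = 3, quite a bit = 4, very much = 5).
from statistics import mean

def index_category(item_responses: list[int]) -> str:
    """Classify a student by the average of their index-item responses."""
    avg = mean(item_responses)
    if avg < 3:      # below "somewhat"
        return "low"
    if avg < 4:      # at least "somewhat" but below "quite a bit"
        return "moderate"
    return "high"    # "quite a bit" or higher

print(index_category([3, 4, 2, 3]))  # average 3.0 -> "moderate"
print(index_category([4, 5, 4, 5]))  # average 4.5 -> "high"
```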