About the NAEP Civics Assessment
The National Assessment of Educational Progress (NAEP) in civics is designed to measure the civics knowledge and skills that are critical to the responsibilities of citizenship in America. Students answer a series of selected-response and constructed-response questions that assess their knowledge and understanding of civics. Performance results are reported for students in the nation and disaggregated by various student characteristics.
The 2022 NAEP civics assessment at grade 8 was administered by the National Center for Education Statistics (NCES) as a digitally based assessment. Read more about the NAEP Digitally Based Civics Assessment.
NAEP Survey Questionnaires
As part of the 2022 eighth-grade NAEP civics assessment, students, teachers, and school administrators responded to survey questionnaires. These questionnaires collect contextual information to provide a better understanding of educational experiences and factors that are related to students’ learning both in and outside of the classroom, and to allow for meaningful student group comparisons. Explore the 2022 NAEP civics student, teacher, and school questionnaires. Learn more about NAEP survey questionnaires.
In addition to collecting contextual information on general and subject-specific topics typically covered in the NAEP student, teacher, and school questionnaires, survey questions were administered in 2022 to collect information from grade 8 students about remote learning experiences during the 2020–21 school year. Information was also collected from school administrators and teachers of eighth-graders about how they responded to academic disruptions arising from the COVID-19 pandemic during the 2020–21 and 2021–22 school years.
The highlighted findings in this report demonstrate the range of information available from the 2022 NAEP civics survey questionnaires. They do not provide a complete picture of students' learning experiences inside and outside of school. The NAEP civics student, teacher, and school questionnaire data can be explored further using the NAEP Data Explorer.
NAEP survey questionnaire responses provide additional information for understanding NAEP performance results. Although comparisons of students' performance are made based on student, teacher, and school characteristics and educational experiences, these results cannot be used to establish a cause-and-effect relationship between those characteristics or experiences and student achievement. NAEP is not designed to identify the causes of performance differences, so results must be interpreted with caution. Many factors may influence average student achievement, including local educational policies and practices, the quality of teachers, available resources, and the demographic characteristics of the student body. Such factors may change over time and vary among student groups.
Development of NAEP Survey Questionnaire Indices
While some survey questions are analyzed and reported individually (for example, number of books in students’ homes), several questions on the same topic can sometimes be combined into an index measuring a single underlying construct or concept. The creation of NAEP survey questionnaire indices involves the following four main steps:
- Selection of constructs of interest. The selection of constructs of interest to be measured through the survey questionnaires was guided in part by the National Assessment Governing Board framework for collection and reporting of contextual information. In addition, NCES reviewed relevant literature on key contextual factors linked to student achievement in civics to identify the types of survey questions and constructs needed to examine these factors in the NAEP assessment.
- Question development. Survey questions were drafted, reviewed, and revised. Throughout the development process, the survey questions were reviewed by external advisory groups that included survey experts, subject-area experts, teachers, educational researchers, and statisticians. Some questions were drafted and revised with the intent of analyzing and reporting them individually while others were drafted and revised with the intent of combining them into indices measuring constructs of interest.
- Evaluation of questions. New and revised survey questions underwent pretesting, in which small samples of participants (students, teachers, and school administrators) were interviewed to identify potential issues with their understanding of the questions and their ability to provide reliable and valid responses. Some questions were dropped or further revised based on the pretesting results. The questions were then further pretested among a larger group of participants, and responses were analyzed. The overall distribution of responses was examined to evaluate whether participants were answering the questions as expected. Relationships between survey responses and student performance were also examined. A method known as factor analysis was used to examine the empirical relationships among questions to be included in the indices measuring constructs of interest. Factor analysis can show, based on relationships among responses to the questions, how strongly the questions “group together” as a measure of the same construct (a minimal sketch of this check follows this list). Convergent and discriminant validity of the construct with respect to other constructs of interest was also examined. If the construct of interest had the expected pattern of relationships and non-relationships, the construct validity of the factor as representing the intended index was supported.
- Index scoring. Using the item response theory (IRT) partial credit scaling model, index scores were estimated from students’ responses and transformed onto a scale that ranged from 0 to 20 (a simplified sketch of the model and the classification rule appears after this list). As a reporting aid, each index scale was divided into low, moderate, and high index score categories. The cut points for the index score categories were determined based on the average response to the set of survey questions in each index. In general, high average responses to individual questions correspond to high index score values, and low average responses to individual questions correspond to low index score values. As an example, for a set of index survey questions with five response categories (such as not at all, a little bit, somewhat, quite a bit, and very much), students with an average response of less than 3 (somewhat) would be classified as low on the index. Students with an average response greater than or equal to 3 (somewhat) to less than 4 (quite a bit) would be classified as moderate on the index. Finally, students with an average response of greater than or equal to 4 (quite a bit) would be classified as high on the index.
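To make the factor-analytic check in the evaluation step more concrete, the sketch below fits a one-factor model to simulated responses for a block of survey questions and inspects the loadings. This is a minimal illustration only: the simulated data, question count, and loading threshold are all hypothetical, and NAEP's operational procedures (including how ordinal response scales are handled) are more involved.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate responses to 5 survey questions from 1,000 students.
# Each question is driven by one underlying construct plus noise,
# so a one-factor model should recover strong loadings on all five.
n_students, n_questions = 1000, 5
construct = rng.normal(size=(n_students, 1))            # latent construct
noise = rng.normal(scale=0.6, size=(n_students, n_questions))
responses = construct + noise                           # continuous proxy for item responses

fa = FactorAnalysis(n_components=1)
fa.fit(responses)

loadings = fa.components_.ravel()
print("loadings:", np.round(loadings, 2))

# Questions that "group together" as a measure of the same construct
# show uniformly strong loadings; a weak loading (the 0.4 threshold
# here is illustrative) would flag a question for revision or exclusion.
weak = [i for i, loading in enumerate(loadings) if abs(loading) < 0.4]
print("questions flagged for review:", weak)
```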
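The index-scoring step can be sketched in the same spirit. The snippet below shows (a) category probabilities for a single question under a generic partial credit model and (b) the low/moderate/high classification rule described above for a five-category question set. The step parameters, proficiency value, and example responses are illustrative assumptions, and the estimation and 0–20 rescaling of operational index scores are omitted; only the cut-point rule (below 3 is low, 3 to below 4 is moderate, 4 and above is high) comes directly from the text.

```python
import numpy as np

def pcm_category_probs(theta, deltas):
    """Partial credit model: probability of each response category for
    one question, given proficiency theta and step parameters deltas
    (one per step between adjacent categories)."""
    # Cumulative sums of (theta - delta_j); category 0 contributes 0.
    steps = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
    expd = np.exp(steps - steps.max())   # subtract max to stabilize the softmax
    return expd / expd.sum()

def classify_index(avg_response):
    """Low/moderate/high category for a five-category question set
    (1 = not at all ... 5 = very much), using the cut points in the
    text: below 3 is low, 3 to below 4 is moderate, 4 and above is high."""
    if avg_response < 3:
        return "low"
    elif avg_response < 4:
        return "moderate"
    return "high"

# Illustrative step parameters for one four-step (five-category) question.
print(np.round(pcm_category_probs(theta=0.5, deltas=[-1.0, -0.3, 0.4, 1.2]), 3))

# Classify students by their average response across the index questions.
for responses in ([2, 3, 2, 3], [3, 4, 3, 3], [5, 4, 4, 5]):
    avg = sum(responses) / len(responses)
    print(responses, "->", classify_index(avg))   # low, moderate, high
```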