Overview of The Nation's Report Card
What subjects does NAEP assess, and how are the subjects chosen?
Since its inception in 1969, the National Assessment of Educational Progress (NAEP) has conducted assessments in numerous academic subjects, including the arts, civics, economics, geography, mathematics, reading, science, U.S. history, and writing.
Since 1988, the National Assessment Governing Board has been responsible for selecting the subject areas to be assessed. Furthermore, the Governing Board oversees creation of the frameworks that underlie the assessments and the specifications that guide the development of the assessment instruments. The framework for each subject area is determined through a collaborative development process that involves teachers, curriculum specialists, subject-matter specialists, school administrators, parents, and members of the general public.
Beginning with the 2003 assessments, national assessments in reading and mathematics have been conducted every two years at grades 4 and 8. The two subjects are assessed in the same year, and initial results for grades 4 and 8 are released about six months after administration, in the fall of that year. Results from all other assessments (including the twelfth-grade assessment in reading and mathematics) are released about one year after administration, usually in the spring of the following year.
How many students participate?
The NAEP assessments are administered to representative samples of students rather than to the entire national, state, or district populations. The number of students sampled varies with the design of each assessment.
See the number of schools and students that participated in the recent NAEP assessments.
See more information about the sample sizes and target populations for the recent NAEP assessments.
Has the demographic distribution of students changed since the trend lines started?
What are the current race/ethnicity categories for NAEP assessments?
In compliance with new standards from the U.S. Office of Management and Budget for collecting and reporting data on race/ethnicity, additional information was collected beginning in 2011 so that results could be reported separately for Asian students, Native Hawaiian/Other Pacific Islander students, and students identifying with two or more races. In earlier assessment years, results for Asian and Native Hawaiian/Other Pacific Islander students were combined into a single Asian/Pacific Islander category.
As of 2011, all students participating in NAEP are identified with one of the following seven racial/ethnic categories: White, Black, Hispanic, Asian, Native Hawaiian/Other Pacific Islander, American Indian/Alaska Native, or Two or More Races.
Students identified as Hispanic are classified as Hispanic even if they are also identified with another racial/ethnic group. Students who identified with two or more of the other racial/ethnic groups (e.g., White and Black) were classified as "other" and reported as part of the "unclassified" category prior to 2011; from 2011 on, they are classified as "Two or More Races." Results for these students are presented under the "Two or More Races" category in the graphics and tables in the report.
When results for racial/ethnic groups are compared across years, results for Asian and Native Hawaiian/Other Pacific Islander students are combined into a single Asian/Pacific Islander category for all assessment years prior to 2011.
How are students with disabilities (SD) and English learners (EL) included in the NAEP assessments?
Although every effort is made to include as many students as possible, different jurisdictions have different exclusion policies, and those policies may have changed over time. Because SD and EL students typically score lower than students not categorized as SD or EL, jurisdictions that are more inclusive—that is, jurisdictions that assess greater percentages of these students—may have lower average scores than if they had a less inclusive policy.
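A small, purely illustrative calculation shows the mechanism. The scores, the SD/EL share, and the exclusion rates below are invented, not NAEP figures: the point is only that when excluded students come from a group that scores lower on average, a less inclusive jurisdiction reports a higher average than a more inclusive one even if underlying performance is identical.

# Illustrative only: hypothetical scores and exclusion rates, not NAEP data.
# Shows how excluding more SD/EL students (who score lower on average here)
# raises the reported average even when underlying performance is the same.

def reported_average(mean_non_sd_el, mean_sd_el, pct_sd_el, pct_excluded):
    """Average over assessed students when a share of SD/EL students is excluded.

    pct_sd_el:    share of all students identified as SD or EL
    pct_excluded: share of all students excluded (drawn from the SD/EL group)
    """
    assessed_sd_el = pct_sd_el - pct_excluded   # SD/EL students who take the test
    assessed_total = 1.0 - pct_excluded         # all assessed students
    return (mean_non_sd_el * (1.0 - pct_sd_el)
            + mean_sd_el * assessed_sd_el) / assessed_total

# Same underlying performance, different exclusion policies.
more_inclusive = reported_average(225.0, 200.0, pct_sd_el=0.20, pct_excluded=0.02)
less_inclusive = reported_average(225.0, 200.0, pct_sd_el=0.20, pct_excluded=0.08)

print(f"More inclusive jurisdiction: {more_inclusive:.1f}")
print(f"Less inclusive jurisdiction: {less_inclusive:.1f}")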
See the percentages of students identified, excluded, and assessed in recent NAEP assessments.
What testing accommodations does NAEP offer?
What are the National Assessment Governing Board inclusion goals?
The National Assessment Governing Board, which sets policy for NAEP, has been exploring ways to ensure that NAEP continues to appropriately include as many students as possible and to do so in a consistent manner for all jurisdictions assessed and reported. In March 2010, the Governing Board adopted a new policy, NAEP Testing and Reporting on Students with Disabilities and English Learners. This policy was the culmination of work with experts in testing and curriculum, and those who work with exceptional children and students learning to speak English. The policy aims to:
How can I look at sample questions from the assessment?
How are results reported?
Students’ performance on the main NAEP assessments is reported as scale scores and as the percentages of students at or above three NAEP achievement levels (NAEP Basic, NAEP Proficient, and NAEP Advanced).
Average scores are reported on a 0 to 500 scale for reading, mathematics at grades 4 and 8, U.S. history, and geography, and on a 0 to 300 scale for science, writing, civics, and mathematics at grade 12. Scores at five percentiles on each scale provide results for lower-performing students (at the 10th and 25th percentiles), middle-performing students (at the 50th percentile), and higher-performing students (at the 75th and 90th percentiles) in that subject.
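As a simplified sketch of what reporting "scores at five percentiles" means, the following Python snippet computes the 10th, 25th, 50th, 75th, and 90th percentiles from a set of synthetic scale scores. The scores are randomly generated for illustration only; actual NAEP estimates are produced with sampling weights and plausible values rather than a simple percentile calculation over raw scores.

# Illustrative only: synthetic scale scores, not NAEP data.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(loc=250, scale=35, size=10_000).clip(0, 500)  # 0-500 scale

for p in (10, 25, 50, 75, 90):
    print(f"{p}th percentile: {np.percentile(scores, p):.0f}")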
NAEP achievement levels are performance standards showing what students at the NAEP Basic, NAEP Proficient, and NAEP Advanced levels should know and be able to do. Based on recommendations from policymakers, educators, and members of the general public, the National Assessment Governing Board sets specific NAEP achievement levels for each subject area and grade assessed. View the NAEP achievement level descriptions for mathematics and reading.
Because NAEP scales and achievement levels are developed independently for each subject, results cannot be compared across subjects.
Students’ performance on the NAEP long-term trend assessments is reported as scale scores (on 0 to 500 scales in mathematics and reading) and as the percentages of students attaining performance levels that correspond to five points on the scale (150, 200, 250, 300, and 350). In each subject, the performance of 9-year-olds tends to concentrate within the lower three performance levels, 13-year-olds within the middle three levels, and 17-year-olds within the top three levels. Read more about the long-term trend performance levels.

NAEP results are not reported for individual students or schools, but they are available for selected student groups (e.g., by gender or race/ethnicity) and for groups defined by responses to the student, teacher, and school questionnaires.
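The following sketch shows how the percentages of students at or above the long-term trend performance levels described above (150, 200, 250, 300, and 350) could be tabulated from a set of scale scores. The scores are synthetic and the calculation is deliberately simplified; NAEP's published estimates rely on sampling weights and plausible values.

# Illustrative only: synthetic scale scores, not NAEP data.
# Tabulates the share of students at or above each long-term trend
# performance level on the 0-500 scale.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(loc=255, scale=40, size=10_000).clip(0, 500)

for level in (150, 200, 250, 300, 350):
    pct = (scores >= level).mean() * 100
    print(f"At or above {level}: {pct:.0f}%")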
How can a score change be significant for one group, but a similar or larger change not be significant for another group?
Is participation in NAEP voluntary?
Federal law specifies that NAEP is voluntary for every student, school, school district, and state. However, federal law also requires all states that receive Title I funds to participate in NAEP reading and mathematics assessments at fourth and eighth grades. Similarly, school districts that receive Title I funds and are selected for the NAEP sample are also required to participate in NAEP reading and mathematics assessments at fourth and eighth grades. Learn more about NAEP and why participation is important.
Are the data confidential?
Federal law dictates complete privacy for all test takers and their families. Under the National Assessment of Educational Progress Authorization Act (Public Law 107-279 III, section 303), the Commissioner of the National Center for Education Statistics (NCES) is charged with ensuring that NAEP tests do not question test takers about personal or family beliefs or make information about their personal identity publicly available.
After publishing NAEP reports, NCES makes data available to researchers but withholds students' names and other identifying information. Students' names are not allowed to leave the schools once the NAEP assessments have been administered. Because the identities of some NAEP schools might be deduced from the data, researchers must promise, under penalty of fines and jail terms, to keep these identities confidential.
Does NAEP report individual or school-level scores?
No. By design, information is not available at the individual student or school levels. Reports traditionally disclose state, regional, and national results. In 2002, NAEP began to report (on a trial basis) results from several large urban districts (Trial Urban District Assessments), after the release of state and national results. Because NAEP is a large-group assessment, each student takes only a small part of the overall assessment. In most schools, only a small portion of the total grade enrollment is selected to take the assessment, and these students may not reliably or validly represent the total school population. Only when the student scores are aggregated at the state or national level are the data considered reliable and valid estimates of what students know and can do in the content area; consequently, school- or student-level results are never reported.
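One way to see why school-level estimates are unreliable is to compare the approximate standard error of a mean based on a single school's small sample with one based on a statewide aggregate. The figures below are made up, and the simple formula ignores NAEP's matrix sampling and clustered design, but it illustrates how precision grows with the number of assessed students.

# Illustrative only: made-up figures, not NAEP data.
# The standard error of a mean score falls roughly with the square root of
# the number of assessed students, which is why an estimate for one school's
# small sample is far less precise than a state-level aggregate.
import math

student_sd = 35.0  # assumed spread of individual scale scores

for label, n_assessed in [("one school", 30), ("one state", 2500)]:
    se = student_sd / math.sqrt(n_assessed)
    print(f"{label:>10}: n = {n_assessed:>5}, approx. standard error = {se:.1f} points")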
What information is available on individual state performance?
How are state tests different from NAEP?
Most state tests measure student performance on the state's own curriculum standards, that is, on what policymakers and citizens consider important for students to know and be able to do. State tests allow comparisons of results over time within the state and, in most cases, give individual student scores so that parents can know how their child is performing. State tests do not, however, provide comparisons of results with other states or the nation.

NAEP is the only assessment that allows comparison of results from one state with another, or with results for the rest of the nation. The NAEP program helps states answer such questions as: How does the performance of students in my state compare with the performance of students in other states with similar resources or students? How does my state's performance compare with the region's? Are my state's gains in student performance keeping up with the pace of improvement in other states?

The term "proficiency" used in relation to performance on state tests does not have the same meaning as the term NAEP Proficient on the NAEP achievement levels, because the criteria used to determine proficiency are different. Together, state achievement tests and NAEP help educators and policymakers develop a comprehensive picture of student performance.
What is the NAEP Trial Urban District Assessment (TUDA)?
The Trial Urban District Assessment (TUDA) is a special project of the National Center for Education Statistics, the National Assessment Governing Board, and the Council of the Great City Schools to determine the feasibility of reporting district-level results for the National Assessment of Educational Progress (NAEP). TUDA results in mathematics and reading are based on representative samples of students in grades 4 and 8 in each participating urban district. The District Profiles Tool provides tables and maps that compare urban districts based on the average scale scores for selected groups of public school students within a single assessment year or between two assessment years.
How many districts participate in TUDA each year, and how are they chosen?
Districts are invited by the National Assessment Governing Board to participate in the assessment based on a selection process that considers a number of factors, including the district's size and racial/ethnic diversity. For example, districts eligible to participate in the TUDA assessments must be located in large cities with populations of 250,000 or more, and a majority (50 percent or more) of their student population must be Black or Hispanic or eligible for the National School Lunch Program. The maximum number of districts participating in a given assessment year depends on the level of congressional funding for the program.
How do the samples for TUDA contribute to state results?
Students in the TUDA samples are also included as part of the state and national samples. For example, the results reported for students in Boston also contribute to the results reported for Massachusetts and to the results for the nation. The districts' results are weighted so that their contribution to the state results reflects the actual proportion of students in the population.
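A minimal sketch of this weighting idea, using the Boston and Massachusetts example from the paragraph above: the district's 10 percent population share and the average scores below are invented for illustration, and NAEP's actual weighting procedures are considerably more elaborate.

# Illustrative only: hypothetical population shares and averages, not NAEP data.
# Sketch of how a district's results can be weighted so its contribution to the
# state average matches its share of the state's student population.

# (share_of_state_population, sample_average_score)
strata = {
    "Boston (TUDA sample)":  (0.10, 228.0),
    "Rest of Massachusetts": (0.90, 236.0),
}

state_average = sum(share * avg for share, avg in strata.values())
print(f"Weighted state average: {state_average:.1f}")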
Just as the national public school sample is used as a benchmark for comparing results for states, results for urban districts are compared to results from large cities nationwide. Results for large cities are for students in public schools located in the urbanized areas of cities with populations of 250,000 or more. Large city is not synonymous with "inner city." Schools in participating TUDA districts are also included in the results for large cities, even though some districts include some schools not classified as large city schools. Students in the TUDA districts represent nearly one-half of the students who attend schools in large cities nationally. The comparison to students in large cities is made because the demographic characteristics of those students are most like the characteristics of students in the urban districts. Both the districts and large cities overall generally have higher concentrations of Black or Hispanic students, lower-income students, and English learners than in the nation as a whole.