Overview of The Nation's Report Card
Trial Urban District Assessment
What subjects does NAEP assess, and how are the subjects chosen?
Since its inception in 1969, the National Assessment of Educational Progress (NAEP) has conducted assessments in numerous academic subjects, including the arts, civics, economics, geography, mathematics, reading, science, U.S. history, and writing.
Since 1988, the National Assessment Governing Board has been responsible for selecting the subject areas to be assessed. Furthermore, the Governing Board oversees creation of the frameworks that underlie the assessments and the specifications that guide the development of the assessment instruments. The framework for each subject area is determined through a collaborative development process that involves teachers, curriculum specialists, subject-matter specialists, school administrators, parents, and members of the general public.
Beginning with the 2003 assessments, national assessments in reading and mathematics have been conducted every two years at grades 4 and 8. The two subjects are assessed in the same year, and initial results for grades 4 and 8 are released in the fall of that year, about six months after administration. Results from all other assessments (including the twelfth-grade assessments in reading and mathematics) are released about one year after administration, usually in the spring of the following year.
How many students participate?
The NAEP assessments are administered to representative samples of students rather than to the entire national, state, or district populations. Each assessment samples different numbers of students, depending on its design. Nationally representative samples of about 376,000 fourth-graders, 341,000 eighth-graders, and 92,000 twelfth-graders were assessed in either mathematics or reading in 2013. Results are reported for public and private school students in the nation; for public school students in all 50 states, the District of Columbia, and Department of Defense schools at grades 4 and 8; and for public school students in the 13 states participating in the state pilot program at grade 12.
See the number of schools and students that participated in the recent NAEP assessments:
See more information about the sample sizes and target populations for the recent NAEP assessments:
How has the demographic distribution of students changed between 1990 or 1992 (when the trend lines started) and 2013?
The proportion of Hispanic students more than doubled between the early 1990s and 2013, while the proportion of White students decreased from approximately three-quarters of the population to less than two-thirds. See the NAEP Data Explorer for complete data on changes in the student distribution.
What are the race/ethnicity categories for the 2013 NAEP assessments?
In compliance with new standards from the U.S. Office of Management and Budget for collecting and reporting data on race/ethnicity, additional information was collected beginning in 2011 so that results could be reported separately for Asian students, Native Hawaiian/Other Pacific Islander students, and students identifying with two or more races.
As of 2011, all of the students participating in NAEP are identified as one of the following seven racial/ethnic categories:
- White
- Black (includes African American)
- Hispanic (includes Latino)
- Asian
- Native Hawaiian/Other Pacific Islander
- American Indian/Alaska Native
- Two or more races
When comparing the results for racial/ethnic groups from 2013 to earlier assessment years, results for Asian and Native Hawaiian/Other Pacific Islander students were combined into a single Asian/Pacific Islander category for all previous assessment years.
How are students with disabilities (SD) and English language learners (ELL) included in the NAEP assessments?
The NAEP program has always endeavored to assess all students selected as a part of its sampling process. In all NAEP assessments (with the exception of the arts assessment), accommodations are provided as necessary for students with disabilities (SD) and/or English language learners (ELL). Inclusion in NAEP of an SD or ELL student is encouraged if that student (a) participated in the regular state academic assessment in the subject being tested, and (b) can participate in NAEP with the accommodations NAEP allows. Even if the student did not participate in the regular state assessment, or needs accommodations NAEP does not allow, school staff are asked whether that student could participate in NAEP with the allowable accommodations.
Although every effort is made to include as many students as possible, different jurisdictions have different exclusion policies, and those policies may have changed over time. Because SD and ELL students typically score lower than students not categorized as SD or ELL, jurisdictions that are more inclusive—that is, jurisdictions that assess greater percentages of these students—may have lower average scores than if they had a less inclusive policy.
See the percentage of students identified, excluded, and assessed in recent NAEP assessments:
What testing accommodations does NAEP offer?
NAEP allows students with disabilities (SD) and English language learners (ELL) to use most of the testing accommodations that they receive for state or district tests. Accommodations are adaptations to standard testing procedures that remove barriers to participation in assessments without changing what is being tested. Accommodations were first made available in the main NAEP assessments in 1996 and in the long-term trend assessments in 2004. Examples of such accommodations are extended time and small-group or one-on-one administration. In the mathematics assessment, NAEP does not allow the use of calculators except on booklets specifically allowing calculator usage, as that accommodation would alter what is being tested (i.e., the student's ability to do arithmetic operations). NAEP does offer bilingual (English and Spanish) test booklets for the mathematics assessment. Accommodations not allowed in the reading assessment include giving the assessment in a language other than English or reading the reading passages aloud to the student. Extending testing over several days is not allowed for any of the NAEP assessments because NAEP administrators are in each school only one day.
Some of the testing accommodations that are provided to SD/ELL students in NAEP paper-and-pencil assessments are part of the universal design of the computer-based assessment, which seeks to make the assessment available to all students. For example, the font size adjustment feature available to all students taking the computer-based assessment is comparable to the large-print assessment book accommodation in the paper-and-pencil assessment, and the digital text-to-speech component takes the place of the read-aloud accommodation for paper-and-pencil assessments. However, there are still some accommodations available to SD and ELL students taking the computer-based writing assessment that are not available to other students, such as extended time and breaks.
What are the Governing Board inclusion goals?
The Governing Board, which sets policy for NAEP, has been exploring ways to ensure that NAEP continues to appropriately include as many students as possible and to do so in a consistent manner for all jurisdictions assessed and reported. In March 2010, the Governing Board adopted a new policy, NAEP Testing and Reporting on Students with Disabilities and English Language Learners. This policy was the culmination of work with experts in testing and curriculum, and those who work with exceptional children and students learning to speak English. The policy aims to:
- Maximize participation of sample students in NAEP;
- Reduce variation in exclusion rates for SD and ELL students across states and districts;
- Develop uniform national rules for including students in NAEP; and
- Ensure that NAEP is fully representative of SD and ELL students.
How can I look at sample questions from the assessment?
Released questions from all the NAEP assessments are available in the NAEP Questions Tool. This application also provides results for each state and district on all released NAEP questions.
How are results reported?
Students’ performance on main NAEP assessments is reported as scale scores and as the percentages of students at or above three achievement levels (Basic, Proficient, and Advanced).
Average scores are reported on a 0–500 scale for reading, mathematics at grades 4 and 8, U.S. history, and geography, or on a 0–300 scale for science, writing, civics, and mathematics at grade 12. Scores at five percentiles on each scale provide results for lower-performing students (at the 10th and 25th percentiles), middle-performing students (at the 50th percentile), and higher-performing students (at the 75th and 90th percentiles) in that subject.
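To make the percentile reporting concrete, the sketch below computes the five reported percentiles from a small set of made-up scores on a 0–500 scale. The scores and the use of Python's statistics module are illustrative assumptions only; NAEP's actual estimation procedure works with sampling weights and is more involved.

```python
import statistics

# Hypothetical student scores on a 0-500 scale (illustration only).
scores = [212, 225, 238, 241, 247, 252, 256, 261, 268, 275, 281, 290]

# statistics.quantiles with n=100 returns the 1st..99th percentile cut points.
cut_points = statistics.quantiles(scores, n=100, method="inclusive")
for p in (10, 25, 50, 75, 90):
    print(f"{p}th percentile: {cut_points[p - 1]:.1f}")
```

The 10th- and 25th-percentile cut points summarize lower-performing students, the 50th the middle, and the 75th and 90th higher-performing students, mirroring the five points NAEP reports.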
Achievement levels are performance standards showing what students at the Basic, Proficient, and Advanced levels should know and be able to do. Based on recommendations from policymakers, educators, and members of the general public, the National Assessment Governing Board sets specific achievement levels for each subject area and grade assessed. View the achievement level descriptions for mathematics and reading.
Because NAEP scales and achievement levels are developed independently for each subject, results cannot be compared across subjects.
Students’ performance on the NAEP long-term trend assessments is reported as scale scores (on 0–500 scales in mathematics and reading) and as the percentages of students attaining performance levels that correspond to five points on the scale (150, 200, 250, 300, and 350). In each subject, the performance of 9-year-olds tends to concentrate within the lower three performance levels, 13-year-olds within the middle three levels, and 17-year-olds within the top three levels. Read more about the long-term trend performance levels.
NAEP results are not reported for individual students or schools, but are available for selected student groups (e.g., by gender or race/ethnicity) and by responses to student, teacher, and school questionnaires.
How can a score change be significant for one group, but a similar or larger change not be significant for another group?
All NAEP estimates, such as average scale scores and percentages, have a margin of error associated with them. These margins of error are called standard errors. Comparisons over time or between groups are based on statistical tests that consider both the size of the difference between estimates and the standard errors of the two estimates being compared. Estimates based on smaller groups are likely to have larger standard errors, and when an estimate has a large standard error, a numerical difference that seems large may not be statistically significant. For example, a 4-point change in the average score for White students (a relatively large group) may be statistically significant, while an 8-point change for American Indian/Alaska Native students (a smaller group) may not be. Standard errors for all results are available in the NAEP Data Explorer.
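The comparison can be sketched as a two-sample z-test. In this illustration the score changes and standard errors are made-up numbers (real standard errors come from the NAEP Data Explorer), but they show how a smaller change with small standard errors can be significant while a larger change with large standard errors is not:

```python
import math

def is_significant(diff, se1, se2, z_crit=1.96):
    """Two-sample z-test at the 95% confidence level: the difference
    is significant when it exceeds z_crit standard errors of the
    difference (assuming independent estimates)."""
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    return abs(diff) > z_crit * se_diff

# A large group tends to have small standard errors...
print(is_significant(4, se1=0.8, se2=0.9))   # True: 4 points is significant
# ...while a small group has large standard errors.
print(is_significant(8, se1=3.5, se2=3.8))   # False: 8 points is not
```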
Is participation in NAEP voluntary?
Federal law specifies that NAEP is voluntary for every student, school, school district, and state. However, federal law also requires all states that receive Title I funds to participate in NAEP reading and mathematics assessments at fourth and eighth grades. Similarly, school districts that receive Title I funds and are selected for the NAEP sample are also required to participate in NAEP reading and mathematics assessments at fourth and eighth grades. Learn more about NAEP and why participation is important.
Are the data confidential?
Federal law dictates complete privacy for all test takers and their families. Under the National Assessment of Educational Progress Authorization Act (Public Law 107-279 III, section 303), the Commissioner of the National Center for Education Statistics (NCES) is charged with ensuring that NAEP tests do not question test takers about personal or family beliefs or make information about their personal identity publicly available.
After publishing NAEP reports, NCES makes data available to researchers but withholds students' names and other identifying information. The names of all participating students are not allowed to leave the schools after NAEP assessments are administered. Because it might be possible to deduce from data the identities of some NAEP schools, researchers must promise, under penalty of fines and jail terms, to keep these identities confidential.
Does NAEP report individual or school-level scores?
No. By design, information is not available at the individual student or school levels. Reports traditionally disclose state, regional, and national results. In 2002, NAEP began to report (on a trial basis) results from several large urban districts (Trial Urban District Assessments), after the release of state and national results. Because NAEP is a large-group assessment, each student takes only a small part of the overall assessment. In most schools, only a small portion of the total grade enrollment is selected to take the assessment, and these students may not reliably or validly represent the total school population. Only when the student scores are aggregated at the state or national level are the data considered reliable and valid estimates of what students know and can do in the content area; consequently, school- or student-level results are never reported.
What information is available on individual state performance?
There are a variety of tools available to further explore the state results. The state profiles page provides data for each state and links to one-page, printable summaries of state performance (known as "snapshots"). The state comparisons page provides tables and maps that compare states and jurisdictions based on the average scale scores for selected groups of public school students within a single assessment year or between two assessment years. The NAEP Data Explorer allows users to search for state results by student demographic groups and hundreds of other variables. At grades 4 and 8, trend data in mathematics and reading are available for all 50 states back to 2003, and for most states back to the first state assessments in the 1990s; at grade 12, trend data are available back to 2009 for 11 states.
How are state tests different from NAEP?
Most state tests measure student performance on the state's own curriculum standards, that is, on what policymakers and citizens consider important for students to know and be able to do. State tests allow comparisons of results over time within the state, and, in most cases, give individual student scores so that parents can know how their child is performing. State tests do not provide comparisons of results with other states or the nation. NAEP is the only assessment that allows comparison of results from one state with another, or with results for the rest of the nation. The NAEP program helps states answer such questions as: How does the performance of students in my state compare with the performance of students in other states with similar resources or students? How does my state's performance compare with the region's? Are my state's gains in student performance keeping up with the pace of improvement in other states? The term "proficiency" used in relation to performance on state tests does not have the same meaning as the term Proficient on the NAEP achievement levels, because the criteria used to determine proficiency are different. Together, state achievement tests and NAEP help educators and policymakers develop a comprehensive picture of student performance.
What is the NAEP Trial Urban District Assessment (TUDA)?
The Trial Urban District Assessment (TUDA) is a special project of the National Center for Education Statistics, the National Assessment Governing Board, and the Council of the Great City Schools to determine the feasibility of reporting district-level results for the National Assessment of Educational Progress (NAEP). The 2013 assessment marks the seventh assessment in reading since 2002 and the sixth assessment in mathematics since 2003. TUDA results in mathematics and reading are based on representative samples of 1,100 to 2,300 public school students at grade 4 and 900 to 2,100 public school students at grade 8 in each participating urban district in 2013. The District Profiles Tool provides tables and maps that compare urban districts based on the average scale scores for selected groups of public school students within a single assessment year or between two assessment years.
How many districts participate in TUDA each year, and how are they chosen?
A total of 21 urban districts participated in the 2013 Trial Urban District Assessment (TUDA). Districts are invited by the National Assessment Governing Board to participate in the assessment based on a selection process that considers a number of factors, including the district's size and racial/ethnic diversity. For example, districts eligible to participate in the TUDA assessments must serve large cities with populations of 250,000 or more and have a student population in which a majority (50 percent or more) of students are Black or Hispanic or eligible for the National School Lunch Program. The maximum number of districts participating in a given assessment year is based on the level of Congressional funding for the program.
How do the samples for TUDA contribute to state results?
Students in the TUDA samples are also included as part of the state and national samples. For example, the results reported for students in Boston also contribute to the results reported for Massachusetts and to the results for the nation. The districts' results are weighted so that their contribution to the state results reflects the actual proportion of students in the population.
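With made-up enrollment shares and scores (the actual NAEP weighting procedure is more involved and operates on individual student weights), the idea of weighting a district sample so its contribution matches its population share can be sketched as:

```python
# Hypothetical enrollment shares and average scores, for illustration only.
# Suppose the district enrolls 15% of the state's public school students.
district_share = 0.15
district_avg = 245.0        # district sample average score
rest_of_state_avg = 250.0   # average for the rest of the state's sample

# Each sample contributes in proportion to its population share,
# regardless of how many students were actually sampled in each.
state_avg = district_share * district_avg + (1 - district_share) * rest_of_state_avg
print(round(state_avg, 2))  # 249.25
```

Because the district's weight reflects its actual share of the state's enrollment, oversampling a district for reliable district-level reporting does not distort the state average.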
What are "large cities" and why are they used as a point of comparison?
Just as the national public school sample is used as a benchmark for comparing results for states, results for urban districts are compared to results from large cities nationwide. Results for large cities are for students in public schools located in the urbanized areas of cities with populations of 250,000 or more. Large city is not synonymous with "inner city." Schools in participating TUDA districts are also included in the results for large cities, even though some districts (Atlanta, Austin, Charlotte, Cleveland, Fresno, Houston, Jefferson County, Los Angeles, and Miami-Dade) include some schools not classified as large city schools. Students in the 21 TUDA districts represent nearly half of the students who attend schools in large cities nationally. The comparison to students in large cities is made because the demographic characteristics of those students are most like the characteristics of students in the urban districts. Both the districts and large cities overall generally have higher concentrations of Black or Hispanic students, lower-income students, and English language learners than the nation as a whole.