About the NAEP Mathematics Assessment
The National Assessment of Educational Progress (NAEP) is a congressionally mandated project administered by the National Center for Education Statistics (NCES) within the U.S. Department of Education and is the largest continuing and nationally representative assessment of what our nation's students know and can do in select subjects. NCES first administered NAEP in 1969 to measure student achievement nationally. The NAEP mathematics assessment at grades 4 and 8 is a digitally based assessment administered on tablets; it measures students' knowledge and skills in mathematics and their ability to solve problems in mathematical and real-world contexts. Results are reported for the nation overall, for states and jurisdictions, and for districts participating in the Trial Urban District Assessment (TUDA).
Reporting the Results
NAEP began administering assessments periodically in the 1990s and administered the mathematics assessment every two years beginning in 2003. NAEP mathematics results are reported as average scores on a 0–500 scale and as percentages of students performing at or above three achievement levels: NAEP Basic, NAEP Proficient, and NAEP Advanced. Because NAEP scores and achievement levels are developed independently for each subject, results cannot be compared across subjects. In addition, although average scores are reported on a 0–500 scale at both grades 4 and 8, the scale scores were derived separately and therefore scores cannot be compared across grades. Read more about the NAEP scaling process in the Technical Documentation.
Results are reported for students overall and for selected demographic groups, such as race/ethnicity, gender, and students' eligibility for the National School Lunch Program (NSLP). Results for the NSLP have been reported since 2003, when the quality of the data on students' eligibility for the program improved. As a result of the passage of the Healthy, Hunger-Free Kids Act of 2010, schools can use a new universal meal service option, the "Community Eligibility Provision" (CEP). Through CEP, eligible schools can provide meal service to all students at no charge, regardless of economic status and without the need to collect eligibility data through household applications. CEP became available nationwide in the 2014–2015 school year; as a result, the percentage of students categorized as eligible for NSLP has increased in comparison to 2013. Therefore, readers should interpret NSLP trend results with caution.
Read more about how student groups are defined and how to interpret NAEP results from the mathematics assessment.
NAEP reports results using widely accepted statistical standards; findings are reported based on a statistical significance level set at .05 with appropriate adjustments for multiple comparisons. Only those differences that are found to be statistically significant are referred to as "higher" or "lower." When state/jurisdiction results are compared to the nation, appropriate adjustments are made for part-whole comparisons. In addition, a part-whole relationship exists between the TUDA district samples and large city, state, and national samples. Therefore, when individual TUDA district results are compared to results for large city, a state, or the nation, the significance tests appropriately reflect this dependency.
Comparisons over time of scores and percentages, or between groups, are based on statistical tests that consider both the size of the difference and the standard errors of the two statistics being compared. Standard errors are a measure of the margin of error around an estimate; estimates based on smaller groups are likely to have larger standard errors. For example, a 2-point change in the average score for the nation may be statistically significant, while a 2-point score change for a state is not, due to the size of the standard errors for the score estimates. The size of the standard errors may also be influenced by other factors, such as the degree to which the assessed students are representative of the entire population. Standard errors for the estimates presented in this report are available in the NAEP Data Explorer (NDE). For the 2019 and 2017 analyses, an additional component was included in the standard error calculation when linking scores across the two delivery modes.
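The logic of such a comparison can be illustrated with a simplified sketch. The code below implements a basic two-estimate z-test at the .05 level; the function name and all score/standard-error values are hypothetical, and NAEP's actual procedures additionally adjust for multiple comparisons and part-whole dependencies, which this sketch omits.

```python
import math

def significant_difference(est1, se1, est2, se2, z_crit=1.96):
    """Simplified significance test for the difference between two
    independent estimates at the .05 level. Hypothetical illustration;
    NAEP's actual tests also adjust for multiple comparisons and for
    part-whole dependencies between samples."""
    z = (est1 - est2) / math.sqrt(se1**2 + se2**2)
    return abs(z) > z_crit

# Hypothetical values: a 2-point national change with small standard
# errors is statistically significant...
print(significant_difference(241, 0.3, 239, 0.3))   # True
# ...while the same 2-point change with the larger standard errors
# typical of a state-level estimate is not.
print(significant_difference(241, 1.2, 239, 1.2))   # False
```

This shows why identical score changes can be significant for one jurisdiction and not another: the test depends on the standard errors, not just the size of the difference.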
Average scores and percentages of students are presented as whole numbers in the report; however, the statistical comparison tests are based on unrounded numbers. In some cases, the scores or the percentages have the same whole number values, but they are statistically different from each other. For example, the average score of fourth-grade students who were identified as English language learners (ELL) was 243 in 2019, which was statistically different from the score of 243 in 2017. The "Customize data tables" link at the bottom of the page provides data tables from the NDE. The tables offer detailed information on more precise values for the scores and percentages and explain how the two comparison estimates differ from each other.
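A short sketch can make the rounding point concrete. The unrounded scores and standard errors below are hypothetical (they are not the actual ELL values); the sketch shows how two estimates can display as the same whole number yet differ significantly when tested on their unrounded values.

```python
import math

def same_rounded_but_different(score_a, se_a, score_b, se_b):
    """Hypothetical illustration: two unrounded scores may round to
    the same whole number for display while still being statistically
    different at the .05 level (simplified z-test, no multiple-
    comparison adjustment)."""
    rounded_equal = round(score_a) == round(score_b)
    z = (score_a - score_b) / math.sqrt(se_a**2 + se_b**2)
    return rounded_equal, abs(z) > 1.96

# Both hypothetical scores display as 243, yet the difference in the
# unrounded values is statistically significant.
print(same_rounded_but_different(243.4, 0.2, 242.6, 0.2))  # (True, True)
```

This is why the comparison tests in the report are always run on unrounded values, with the NDE tables providing the more precise figures.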
A scale score that is significantly higher or lower in comparison to an earlier assessment year is reliable evidence that student performance has changed. NAEP is not, however, designed to identify the causes of change in student performance. Although comparisons are made in students' performance based on demographic characteristics and educational experiences, the comparisons cannot be used to establish a cause-and-effect relationship between the characteristic or experience and achievement. Many factors may influence student achievement, including educational policies and practices, available resources, and the demographic characteristics of the student body. Such factors may change over time and vary among student groups.