IES Blog

Institute of Education Sciences

An open letter to Superintendents, as summer begins

If the blockbuster attendance at last month’s Summer Learning and Enrichment Collaborative convening is any sign, many of you are in the midst of planning—or have already started to put in place—your summer learning programs. As you take time to review resources from the Collaborative and see what’s been learned from the National Summer Learning Project, I’d like to add just one more consideration to your list: please use this summer as a chance to build evidence about “what works” to improve outcomes for your students. In a word: evaluate!

Given all the things that need to be put in place to even make summer learning happen, it’s fair to ask why evaluation merits even a passing thought.   

I’m urging you to consider building evidence about the outcomes of your program through evaluation because I can guarantee you that, in about a year, someone to whom you really want to give a thorough answer will ask “so, what did we accomplish last summer?” (Depending on who they are and what they care about, that question can vary. Twists can include “what did students learn” or business officers’ pragmatic “what bang did we get for that buck.”)

When that moment comes, I want you to be able to smile, take a deep breath, and rattle off the sort of polished elevator speech that good data, well-analyzed, can help you craft. The alternative—mild to moderate panic followed by an unsatisfying version of “well, you know, we had to implement quickly”—is avoidable. Here’s how.

  1. Get clear on outcomes. You probably have multiple goals for your summer learning programs, including those that are academic, social-emotional, and behavioral. Nonetheless, there’s probably a single word (or a short phrase) that completes the following sentence: “The thing we really want for our students this summer is …” It might be “to rebuild strong relationships between families and schools,” “to be physically and emotionally safe,” or “to get back on track in math.” Whatever it is, get clear on two things: (1) the primary outcome(s) of your program and (2) how you will measure that outcome once the summer comes to an end. Importantly, you should consider outcome measures that will be available for both program participants and non-participants so that you can tell the story about the “value add” of summer learning. (You can certainly also include measures relevant only to participants, especially ones that help you track whether you are implementing your program as designed.)
  2. Use a logic model. Logic models are the “storyboard” of your program, depicting exactly how its activities will come together to cause improvement in the student outcomes that matter most. Logic models force program designers to be explicit about each component of their program and its intended impact. Taking time to develop a logic model can expose potentially unreasonable assumptions and missing supports that, if added, would make it more likely that a program succeeds. If you don’t already have a favorite logic model tool, we have resources available for free!  
  3. Implement evidence-based practices aligned to program outcomes. A wise colleague (h/t Melissa Moritz) recently reminded me that a summer program is the “container” (for lack of a better word) in which other learning experiences and educationally purposeful content are packaged, and that there are evidence-based practices for the design and delivery of both. (Remember: “evidence-based practices” run the gamut from those that demonstrate a rationale to those supported by promising, moderate, or strong evidence.) As you are using the best available evidence to build a strong summer program, don’t forget to ensure you’re using evidence-based practices in service of the specific outcomes you want those programs to achieve. For example, if your primary goal for students is math catch-up, then the foundation of your summer program should be an evidence-based Tier I math curriculum. If it is truly important that students achieve the outcome you’ve set for them, then they’re deserving of evidence-based educational practices supported by an evidence-based program design!
  4. Monitor and support implementation. Once your summer program is up and running, it’s useful to understand just how well your plan—the logic model you developed earlier—is playing out in real life. If staff trainings were planned, did they occur and did everyone attend as scheduled? Are activities occurring as intended, with the level of quality that was hoped for? Are attendance and engagement high? Monitoring implementation alerts you to where things may be “off track,” flagging where more supports for your team might be helpful. And, importantly, it can provide useful context for the outcomes you observe at the end of the summer. If you don't already have an established protocol for using data as part of continuous improvement, free resources are available!
  5. Track student attendance. If you don’t know who—specifically—participated in summer learning activities, describing how well those activities “worked” can get tricky. Whether your program is full-day, half-day, in-person, hybrid, or something else, develop a system to track (1) who was present, (2) on what days, and (3) for how long. Then, store that information in your student information system (or another database) where it can be accessed later. 
  6. Analyze and report your data, with an explicit eye toward equity. Data and data analysis can help you tell the story of your summer programming. Given the disproportionate impact COVID has had on students that many education systems have underserved, placing equity at the center of your planned analyses is critical. For example:
    • Who did—and who did not—participate in summer programs? Data collected to monitor attendance should allow you to know who (specifically) participated in your summer programs. With that information, you can prepare simple tables that show the total number of participants and that total broken down by important student subgroups, such as gender, race/ethnicity, or socioeconomic status. Importantly, those data for your program should be compared with similar data for your school or district (as appropriate). Explore, for example, whether one or more populations are disproportionately underrepresented in your program and what that implies for the work both now and next summer.
    • How strong was attendance? Prior research has suggested that students benefit the most from summer programs when they are “high attenders” (20 or more days out of programs’ typical 25 to 30 total days). Using your daily, by-student attendance data, calculate attendance intensity for your program’s participants overall and by important student subgroups. For example, what percentage of students attended 0 to 24 percent, 25 to 49 percent, 50 to 74 percent, or 75 percent or more of program days? (A simple sketch of this calculation follows this list.)
    • How did important outcomes vary between program participants and non-participants? At the outset of the planning process, you identified one or more outcomes you hoped students would achieve by participating in your program and how you’d measure them. In the case of a “math catch-up” program, for example, you might be hoping that more summer learning participants get a score of “on-grade level” at the outset of the school year than their non-participating peers, potentially promising evidence that the program might have offered a benefit. Disaggregating these results by student subgroups when possible highlights whether the program might have been more effective for some students than others, providing insight into potential changes for next year’s work.     
    • Remember that collecting and analyzing data is just a means to an end: learning to inform improvement. Consider how involving program designers and participants—including educators, parents, and students—in discussions about what was learned as a result of your analyses can be used to strengthen next year’s program.
  7. Ask for help. If you choose to take up the evaluation mantle to build evidence about your summer program, bravo! And know that you do not have to do it alone. First, think locally. Are you near a two-year or four-year college? Consider contacting their education faculty to see whether they’re up for a collaboration. Second, explore whether your state has a state-wide “research hub” for education issues (e.g., Delaware, Tennessee) that could point you in the direction of a state or local evaluation expert. Third, connect with your state’s Regional Educational Lab or Regional Comprehensive Center for guidance or a referral. Finally, consider joining the national conversation! If you would be interested in participating in an Evaluation Working Group, email my colleague Melissa Moritz at melissa.w.moritz@ed.gov.
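For districts that want a concrete starting point for items 5 and 6, the sketch below shows one way to turn daily, by-student attendance records into the attendance-intensity bands described above, overall and by subgroup. It is only an illustration under stated assumptions: the file names, the column names (student_id, date, subgroup), and the 25-day program length are hypothetical placeholders to be replaced with whatever your student information system actually exports.

```python
import pandas as pd

# Hypothetical inputs: one row per student per day attended, and a roster with
# one row per participant plus the subgroup fields you care about (e.g., race/ethnicity).
attendance = pd.read_csv("attendance.csv")   # columns: student_id, date
roster = pd.read_csv("roster.csv")           # columns: student_id, subgroup, ...

PROGRAM_DAYS = 25  # assumed total number of program days; adjust to your calendar

# Count the distinct days each student attended.
days_attended = (
    attendance.groupby("student_id")["date"].nunique()
    .rename("days_attended")
    .reset_index()
)

# Join back to the roster so non-attenders are kept (with zero days).
summary = roster.merge(days_attended, on="student_id", how="left")
summary["days_attended"] = summary["days_attended"].fillna(0)

# Convert to percent of program days and bin into the intensity bands from item 6.
summary["pct_days"] = 100 * summary["days_attended"] / PROGRAM_DAYS
summary["band"] = pd.cut(
    summary["pct_days"],
    bins=[0, 25, 50, 75, float("inf")],
    right=False,
    labels=["0-24%", "25-49%", "50-74%", "75%+"],
)

# Share of participants in each band, overall and by subgroup.
print(summary["band"].value_counts(normalize=True).sort_index())
print(summary.groupby(["subgroup", "band"]).size())
```

The same merged table can later be joined to fall assessment results to support the participant versus non-participant comparisons described in item 6.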

Summer 2021 is shaping up to be one for the record books. For many students, summer is a time for rest and relaxation. But this year, it will also be a time for reengaging students and families with their school communities and, we hope, a significant amount of learning. Spending time now thinking about measuring that reengagement and learning—even in simple ways—will pay dividends this summer and beyond.

My colleagues and I at the U.S. Department of Education are here to help, and we welcome your feedback. Please feel free to contact me directly at matthew.soldner@ed.gov.

Matthew Soldner
Commissioner, National Center for Education Evaluation and Regional Assistance

Announcing the Condition of Education 2021 Release

NCES is pleased to present the 2021 edition of the Condition of Education, an annual report mandated by the U.S. Congress that summarizes the latest data on education in the United States. This report uses data from across the center and from other sources and is designed to help policymakers and the public monitor educational progress.

Beginning in 2021, individual indicators can be accessed online on the newly redesigned Condition of Education Indicator System website. A synthesis of key findings from these indicators can be found in the Report on the Condition of Education, a more user-friendly PDF report.

A total of 86 indicators are included in this year’s Condition of Education, 55 of which were updated this year. As in prior years, these indicators present a range of topics from prekindergarten through postsecondary education, as well as labor force outcomes and international comparisons. Additionally, this year’s 55 updated indicators include 17 indicators on school crime and safety.

For the 2021 edition of the Condition of Education, most data were collected prior to 2020, either during the 2018–19 academic year or in fall 2019. Therefore, with some exceptions, this year’s report presents findings from prior to the coronavirus pandemic.

At the elementary and secondary level (prekindergarten through grade 12), the data show that 50.7 million students were enrolled in public schools in fall 2018, the most recent year for which data were available at the time this report was written. Public charter school enrollment accounted for 7 percent (3.3 million students) of these public school enrollments, more than doubling from 3 percent (1.6 million students) in 2009. In 2019, U.S. 4th- and 8th-grade students scored above the scale centerpoint (500 out of 1,000) on both the mathematics and science assessments in the Trends in International Mathematics and Science Study (TIMSS).

In 2020, 95 percent of 25- to 29-year-olds had at least a high school diploma or equivalent, while 39 percent had a bachelor’s or higher degree. These levels of educational attainment are associated with economic outcomes, such as employment and earnings. For example, among those working full time, year round, annual median earnings in 2019 were 59 percent higher for 25- to 34-year-olds with a bachelor’s or higher degree than for those with a high school diploma or equivalent.

In addition to regularly updated annual indicators, this year’s two spotlight indicators highlight early findings on the educational impact of the coronavirus pandemic from the Household Pulse Survey (HPS).

  • The first spotlight examines distance learning at the elementary and secondary level at the beginning of the 2020–21 academic year. Overall, among adults with school-enrolled children under 18 in the home, two-thirds reported in September 2020 that classes had been moved to a distance learning format using online resources. To participate in these remote learning settings, students must have access to computers and the internet. More than 90 percent of adults with children in their household reported that one or both of these resources were always or usually available to children for educational purposes in September 2020. At the same time, 59 percent of adults reported that computers were provided by the child’s school or district, while 4 percent reported that internet access was paid for by the child’s school or district. Although higher percentages of lower income adults reported such assistance, this did not eliminate inequalities in access to these resources by household income.
  • The second spotlight examines changes in postsecondary education plans for fall 2020 in response to the coronavirus pandemic. Among adults 18 years old and over who had household members planning to take classes in fall 2020 from a postsecondary institution, 45 percent reported that classes planned by at least one household member would be in different formats in the fall (e.g., would change from in-person to online), 31 percent reported that all plans to take classes in the fall had been canceled for at least one household member, and 12 percent reported that at least one household member would take fewer classes in the fall. Some 28 percent reported no change in fall plans to take postsecondary classes for at least one household member. The two most frequently cited reasons for cancellation of plans were having the coronavirus or having concerns about getting the coronavirus (46 percent), followed by not being able to pay for classes or educational expenses because of pandemic-related changes to income (42 percent).

The Condition of Education also includes an At a Glance section, a Reader’s Guide, a Glossary, and a Guide to Sources, all of which provide additional background information. Each indicator includes references to the source data tables used to produce the indicator.

As new data are released throughout the year, indicators will be updated and made available online.

In addition to publishing the Condition of Education, NCES produces a wide range of other reports and datasets designed to help inform policymakers and the public about significant trends and topics in education. More information about the latest activities and releases at NCES may be found on our website or by following us on Twitter, Facebook, and LinkedIn.

 

By James L. Woodworth, NCES Commissioner

Recognizing Asian and Pacific Islander Educators with the National Teacher and Principal Survey (NTPS)

May is Asian American and Pacific Islander Heritage Month, which celebrates the achievements of Asian/Pacific Islander Americans and immigrants and the many ways they have contributed to the United States.

In honor of Asian and Native Hawaiian/Pacific Islander[1] educators who help students learn every day, here are some selected facts and figures from the 2017–18 National Teacher and Principal Survey (NTPS). The NTPS collects data about public and private K–12 schools in the United States from the perspective of the teachers and principals who staff them. These data were collected in 2017–18, prior to the coronavirus pandemic.

 

Composition of U.S. K–12 Public and Private Schools: 2017–18

  • Although Native Hawaiian/Pacific Islander teachers and principals are important members of school communities, they comprise a relatively small percentage of public and private school educators overall. Less than 1 percent of either public or private school teachers (0.2 and 0.1 percent,[2] respectively) and principals (0.2 and 0.3 percent,[3] respectively) were Native Hawaiian/Pacific Islander.

Figure 1. Percentage distribution of all teachers and principals who are Asian and Native Hawaiian/Pacific Islander, by school type: 2017–18

! Interpret data with caution. The coefficient of variation (CV) for this estimate is between 30 and 50 percent.
NOTE: Race categories exclude persons of Hispanic ethnicity.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Teacher and Principal Survey (NTPS), "Public School Teacher and Private School Teacher Data File, Public School Principal and Private School Principal Data File," 2017–18


Community and K–12 School Characteristics: 2017–18

  • A higher percentage of Asian teachers worked in city schools than in most other community types (i.e., suburb, town, and rural) in 2017–18. There were some differences by school type (i.e., public vs. private).[4] For example, teacher employment patterns in both school types were similar at rural schools and city schools but different at suburban schools.
  • Higher percentages of Asian teachers worked in both public and private city schools (3.1 and 3.8 percent, respectively) than in public and private rural schools (0.5 and 0.8 percent, respectively) (figure 2).
  • Although a lower percentage of Asian private school teachers worked in suburban schools (2.3 percent) than in city schools (3.8 percent), there was no significant difference in the percentage of Asian public school teachers who worked in suburban versus city schools.

Figure 2. Percentage distribution of all teachers who are Asian and Native Hawaiian/Pacific Islander, by school type and community type: 2017–18

# Rounds to zero
! Interpret data with caution. The coefficient of variation (CV) for this estimate is between 30 and 50 percent.
‡ Reporting standards not met. The coefficient of variation (CV) for this estimate is 50 percent or greater (i.e., the standard error is 50 percent or more of the estimate) or the response rate is below 50 percent.
NOTE: Race categories exclude persons of Hispanic ethnicity.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Teacher and Principal Survey (NTPS), "Public School Teacher and Private School Teacher Data File," 2017–18


In honor of Asian American and Pacific Islander Heritage Month, NCES would like to thank Asian and Pacific Islander educators nationwide who play vital roles in our education system.

The data in this blog would not be possible without the participation of teachers, principals, and school staff in the NTPS. We are currently conducting the 2020–21 NTPS to learn more about teaching experiences during the pandemic. If you were contacted about participating in the 2020–21 NTPS and have questions, please email ntps@census.gov or call 1-888-595-1338.

For more information about the National Teacher and Principal Survey (NTPS), please visit https://nces.ed.gov/surveys/ntps/. More findings and details are available in the NTPS school, teacher, and principal reports.

 

[1] The NTPS definition of “Asian American or Native Hawaiian/Pacific Islander” is synonymous with the Library of Congress’ term “Asian/Pacific Islander.” The Library of Congress, one of the sponsors of the heritage month, states that Asian/Pacific encompasses all of the Asian continent and the Pacific islands of Melanesia (New Guinea, New Caledonia, Vanuatu, Fiji and the Solomon Islands), Micronesia (Marianas, Guam, Wake Island, Palau, Marshall Islands, Kiribati, Nauru and the Federated States of Micronesia) and Polynesia (New Zealand, Hawaiian Islands, Rotuma, Midway Islands, Samoa, American Samoa, Tonga, Tuvalu, Cook Islands, French Polynesia and Easter Island). Note that the Hawaiian Islands are included as “Pacific islands” in their definition but are named independently in the NTPS definition, and that only Asian or Native Hawaiian/Pacific Islander respondents who also indicated that they were not Hispanic, which includes Latino, are included in this definition.

[2] Interpret data with caution. The coefficient of variation (CV) for this estimate is between 30 percent and 50 percent (i.e., the standard error is at least 30 percent and less than 50 percent of the estimate).

[3] Interpret data with caution. The coefficient of variation (CV) for this estimate is between 30 percent and 50 percent (i.e., the standard error is at least 30 percent and less than 50 percent of the estimate).

[4] Given the small size of the Native Hawaiian/Pacific Islander teacher and principal populations in the NTPS sample, granular estimates of where Native Hawaiian/Pacific Islander teachers and principals were more often employed are difficult to produce from a sample survey.

 

By Julia Merlin, NCES

Identifying Virtual Schools Using the Common Core of Data (CCD)

With the sudden changes in education due to the coronavirus pandemic, virtual instruction is in the spotlight more than ever before. Prior to the pandemic, there were already increasing numbers of virtual public schools offering instructional programs to students who may have difficulty accessing or attending traditional brick-and-mortar schools, and some schools and districts were using virtual instruction in new ways, such as switching to virtual instruction on snow days rather than cancelling school. Throughout the pandemic, schools and districts have relied on virtual instruction more heavily than ever before.

Since school year (SY) 2013–14, the Common Core of Data (CCD) has included a school-level virtual status flag, which has changed over time. For SY 2020–21, the Department of Education instructed states to classify schools that are normally brick-and-mortar schools but are operating remotely during the pandemic as supplemental virtual (see table below).

 

SY 2013–14 Through SY 2015–16

During these years, virtual status was a Yes/No flag; a school was either virtual or not virtual based on the following definition: “A public school that offers only instruction in which students and teachers are separated by time and/or location, and interaction occurs via computers and/or telecommunications technologies. A virtual school generally does not have a physical facility that allows students to attend classes on site.”

 

SY 2016–17 and Onward

NCES changed the virtual status flag to be more nuanced. Rather than just a Yes/No flag, the reported value indicates virtual status on a spectrum using the following values:

 

  • FULLVIRTUAL (Exclusively virtual): All instruction offered by the school is virtual. This does not exclude students and teachers meeting in person for field trips, school-sponsored social events, or assessment purposes. All students receive all instruction virtually. Prior to SY 2019–20, this value was labeled as “Fully virtual.”
  • FACEVIRTUAL (Primarily virtual): The school’s major purpose is to provide virtual instruction to students, but some traditional classroom instruction is also provided. Most students receive all instruction virtually. Prior to SY 2019–20, this value was labeled as “Virtual with face to face options.”
  • SUPPVIRTUAL (Supplemental virtual): Instruction is directed by teachers in a traditional classroom setting; virtual instruction supplements face-to-face instruction by teachers. Students vary in the extent to which their instruction is virtual.
  • NOTVIRTUAL (No virtual instruction): The school does not offer any virtual instruction. No students receive any virtual instruction. Prior to SY 2019–20, this value was labeled as “Not virtual.”

 

Generally, data users should treat the value “FULLVIRTUAL” (exclusively virtual) under the new approach as the equivalent of Virtual=Yes in the old approach. The virtual flag is a status assigned to a school as of October 1 each school year. 
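To make that equivalence concrete, here is a minimal sketch, not an official NCES tool, of how a data user might flag exclusively virtual schools consistently across survey years when working with a CCD school file in Python. The file name and the exact label of the virtual-status column ("VIRTUAL") are assumptions; check the CCD file documentation for the layout of the year you download.

```python
import pandas as pd

def is_exclusively_virtual(flag) -> bool:
    # Files from SY 2016-17 onward use "FULLVIRTUAL"; earlier files use a Yes/No flag.
    return str(flag).strip().upper() in {"FULLVIRTUAL", "YES"}

# Hypothetical file name; the "VIRTUAL" column name is an assumption about the layout.
schools = pd.read_csv("ccd_school_directory.csv", dtype=str)
schools["exclusively_virtual"] = schools["VIRTUAL"].map(is_exclusively_virtual)

print("Exclusively virtual schools:", int(schools["exclusively_virtual"].sum()))
```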

The number of exclusively virtual schools has increased in the past several years. In SY 2013–14, there were a total of 478 exclusively virtual schools reported in CCD (approximately 0.5% of all operational schools). In SY 2019–20 there were 691 schools (approximately 0.7% of all operational schools) that were exclusively virtual. The student enrollment in exclusively virtual schools also increased from 199,815 students in SY 2013–14 to 293,717 in SY 2019–20, which is an increase from 0.4% of the total student enrollment in public schools to 0.6%.

Of the 691 exclusively virtual schools in SY 2019–20, 590 were reported as “regular” schools, meaning they offered a general academic curriculum rather than one focused on special needs or vocational education; 218 were charter schools; and 289 were high schools. Of the 8,673 schools that were reported as either primarily virtual or supplemental virtual, 7,727 were regular schools, 624 were charter schools, and 4,098 were high schools.

To see tables summarizing the above data, visit our Data Tables web page and select the nonfiscal tables.

To learn more about the CCD, visit our web page. For more information about how to access CCD data, including tips for using the District and School Locators and the Elementary and Secondary Information System, read the blog post “Accessing the Common Core of Data (CCD).” You can also access the raw data files for additional information about public elementary and secondary schools. Enrollment and staff data for SY 2020–21 are currently being collected, processed, and verified and could be released by spring 2022.

 

By Patrick Keaton, NCES

Highlights of 2015–16 and 2016–17 School-Level Finance Data

NCES annually publishes comprehensive data on the finances of public elementary and secondary schools through the Common Core of Data (CCD). For many years, these data have been released at the state level through the National Public Education Financial Survey (NPEFS) and at the school district level through the Local Education Agency (School District) Finance Survey (F-33).

Policymakers, researchers, and the public have long voiced concerns about the equitable distribution of school funding within and across districts. School-level finance data provide reliable and unbiased measures that can be utilized to compare how resources are distributed among schools within districts.

Education spending data are now available for 15 states[1] at the school level through the School-Level Finance Survey (SLFS), which NCES has been conducting annually since 2014.[2] In November 2018, the Office of Management and Budget (OMB) approved changes to the SLFS that added variables to make it directly analogous to the F-33 survey and aligned with the Every Student Succeeds Act (ESSA) provisions on reporting per-pupil expenditures at the school and district levels.

Below are some key findings from the recently released NCES report Highlights of School-Level Finance Data: Selected Findings From the School-Level Finance Survey (SLFS) School Years 2015–16 (FY 16) and 2016–17 (FY 17).

 

Eight of the 15 states participating in the SLFS are able to report school-level expenditure data requested by the survey for a high percentage of their schools.

The initial years of the SLFS have consistently demonstrated that most states can report detailed school-level spending data for the vast majority of their schools. In school year (SY) 2016–17 (FY 2017), a majority of states participating in the SLFS (8 of 15) reported school-level finance data for at least 95 percent of their schools (figure 1). With the exception of New Jersey,[3] all states were able to report at least partial SLFS finance data for more than 78 percent of their schools, ranging from 79 percent of schools in Colorado to 99 percent of schools in Oklahoma. In addition, the percentage of students covered by SLFS reporting was more than 99 percent in 9 of the 15 participating states.


Figure 1. Percentage of students covered and percentage of schools with fiscal data reported in the School-Level Finance Survey (SLFS), by participating state: FY 2017


 

The SLFS can be used to evaluate school-level expenditure data based on various descriptive school characteristics.

The SLFS allows data users to not only view comparable school-level spending data but also evaluate differences in school-level spending based on a variety of school characteristics. In the report, SY 2016–17 (FY 2017) SLFS data were evaluated by charter status and urbanicity. Key findings from this evaluation include the following:

  • Median teacher salaries[4] in charter schools were lower than median teacher salaries in noncharter schools in all 7 states that met the standards for reporting teacher salaries for both charter and noncharter schools (figure 2).
  • School expenditures were often higher in cities and suburbs than in towns and rural areas. Median teacher salaries, for example, were highest for schools in either cities or suburbs in 9 of the 10 states that met the standards for reporting teacher salaries in each of the urbanicities (city, suburb, town, and rural) (figure 3).  

Figure 2. Median teacher salary for operational public elementary and secondary schools, by school charter status and reporting state: FY 2017


Figure 3. Median teacher salary for operational public elementary and secondary schools, by school urbanicity and reporting state: FY 2017


Median technology‑related expenditures per pupil were also highest for schools in either cities or suburbs in 9 of the 11 states that met the standards for reporting technology-related expenditures in each of the urbanicities, with schools in cities reporting the highest median technology-related expenditures per pupil in 6 of those states.

 

The SLFS can be used to evaluate and compare school-level expenditure data by various poverty indicators.

The report also evaluates and compares school-level spending by school poverty indicators, such as Title I eligibility and school neighborhood poverty level. Key findings from this evaluation include the following:

  • In SY 2016–17 (FY 2017), median teacher salaries were slightly lower for Title I eligible schools than for non-Title I eligible schools in 7 of the 8 states where standards were met for reporting both Title I eligible and non-Title I eligible schools. However, median personnel salaries per pupil were slightly lower for Title I eligible schools than for non-Title I eligible schools in only 2 of the 8 states where reporting standards were met.    
  • Median personnel salaries per pupil for SY 2016–17 were higher for schools in high‑poverty neighborhoods than for schools in low-poverty neighborhoods in 7 of the 12 states where standards were met for reporting school personnel salaries.

 

To learn more about these and other key findings from the SY 2015–16 and 2016–17 SLFS data collections, read the full report. The corresponding data files for these collections will be released later this year.


[1] The following 15 states participated in the SY 2015–16 and 2016–17 SLFS: Alabama, Arkansas, Colorado, Florida, Georgia, Kentucky, Louisiana, Maine, Michigan, New Jersey, North Carolina, Ohio, Oklahoma, Rhode Island, and Wyoming.

[2] Spending refers to “current expenditures,” which are expenditures for the day-to-day operation of schools and school districts for public elementary/secondary education. For the SY 2015–16 and 2016–17 data collections referenced in this blog, the SLFS did not collect complete current expenditures; the current expenditures collected for those years included expenditures most typically accounted for at the school level, such as instructional staff salaries, student support services salaries, instructional staff support services salaries, school administration salaries, and supplies and purchased services. As of SY 2017–18, the SLFS was expanded to collect complete current expenditures.

[3] In New Jersey, detailed school-level finance reporting is required for only its “Abbott” districts, which comprised only 31 of the state’s 699 school districts in SY 2016–17.

[4] “Median teacher salaries” are defined as the median of the schools’ average teacher salary. A school’s average teacher salary is calculated as the teacher salary expenditures reported for the school divided by the number of full-time-equivalent (FTE) teachers at the school. Note that this calculation differs from calculating the median of salaries across all teachers at the school, as the SLFS does not collect or report salary data at the teacher level.
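As an illustration of the calculation described in this footnote, the short sketch below computes school-level average teacher salaries and then takes the median across schools within each state. The file and column names are hypothetical placeholders rather than the actual SLFS variable names.

```python
import pandas as pd

# Hypothetical columns: state, teacher_salary_expenditures, fte_teachers.
schools = pd.read_csv("slfs_schools.csv")

# Each school's average teacher salary: reported teacher salary expenditures
# divided by the school's full-time-equivalent (FTE) teacher count.
schools["avg_teacher_salary"] = (
    schools["teacher_salary_expenditures"] / schools["fte_teachers"]
)

# The "median teacher salary" reported for a state is the median of these
# school-level averages, not a median across individual teachers.
median_by_state = schools.groupby("state")["avg_teacher_salary"].median()
print(median_by_state)
```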

 

By Stephen Cornman, NCES