IES Blog

Institute of Education Sciences

Leveraging Economic Data to Understand the Education Workforce

The Digest of Education Statistics recently debuted 13 new tables on K–12 employment and wages from a data source that is new to the Digest—the Occupational Employment and Wage Statistics (OEWS) program of the Bureau of Labor Statistics (BLS). NCES’s Annual Reports and Information Staff conducted an extensive review of existing and emerging data sources and found that BLS’s OEWS program provides high-quality, detailed, and timely data that are suitable to inform policymaking in education and workforce development.1 In this blog post, we share why we added this new data source, how we evaluated and prepared these data, and our future plans to expand on these efforts.

 

Need for Education Workforce Data

NCES recognized that education stakeholders need more granular and timely data on the condition of the education workforce to inform decisionmaking. In the wake of the coronavirus pandemic, school districts are looking to address critical staffing needs. According to NCES’s School Pulse Panel, entering the 2023–24 school year (SY), just under half of U.S. public schools reported feeling understaffed, with needs for special education teachers, transportation staff, and mental health professionals.

Since staffing needs and labor markets vary from district to district and state to state, it is important that we create national- and state-level tabulations for specific occupations, including those of special interest since the pandemic, like bus drivers, social workers, and special education teachers. Similarly, we want to be able to provide annual data updates so stakeholders can make the most up-to-date decisions possible.

Annual Digest table updates, coupled with detailed occupational and state-level data, will provide relevant and timely information on employment and wage trends that will be valuable in current and future efforts to address teacher and staff retention and recruitment. See below for a list of the new Digest tables.

  • National-level employment and annual wages
    ◦ Selected teaching occupations (211.70)
    ◦ Selected noninstructional occupations (213.70)
  • State-level employment and annual wages
    ◦ Preschool teachers (211.70a)
    ◦ Kindergarten teachers (211.70b)
    ◦ Elementary school teachers (211.70c)
    ◦ Middle school teachers (211.70d)
    ◦ Secondary school teachers (211.70e)
    ◦ Kindergarten and elementary special education teachers (211.70f)
    ◦ Middle school special education teachers (211.70g)
    ◦ Secondary school special education teachers (211.70h)
    ◦ Substitute teachers (211.70i)
    ◦ Teaching assistants (211.70j)
    ◦ All occupations in the Elementary and Secondary Education industry (213.75)

 

Strengths of OEWS

OEWS and the Digest tables are aligned with the Federal Committee on Statistical Methodology’s Data Quality Framework, specifically the principles of objectivity (standardization), utility (granularity and timeliness), and integrity (data quality).


Standardization

OEWS produces employment and wage estimates using standardized industry and occupational classifications. Using the North American Industry Classification System, establishments are grouped into categories—called industries—based on their primary business activities. Like industries, occupations are organized into groups or categories based on common job duties (using the Standard Occupational Classification). Occupations that are common to K–12 schools can also be found in other industries, and OEWS provides both cross-industry estimates and industry-specific estimates for just the Elementary and Secondary Education industry. To provide the most relevant and comparable data for education stakeholders, NCES chose to focus on distinct occupational estimates for the Elementary and Secondary Education industry, since all establishments (e.g., school boards, school districts) provide the same services: instruction or coursework for basic preparatory education (typically K–12).2

Another advantage of the OEWS data is the ability to examine specific detailed occupations, like elementary school teachers, secondary school teachers, and education administrators. Digest tables include estimates for specific instructional and noninstructional occupations, which allows users to make comparisons among teachers and staff with similar job responsibilities, providing opportunities for more targeted decisionmaking.


Granularity

In addition to data on detailed occupations, OEWS provides data at the national and state levels, allowing for comparisons across geographies. National-level Digest tables include estimates for public and private education employers.3 Publicly funded charter schools run by private establishments are included in private ownership estimates, as they can be managed by parents, community groups, or private organizations. Public ownership is limited to establishments that are run by federal, state, or local governments. State-level Digest tables provide more localized information covering labor markets for the 50 states, the District of Columbia, Puerto Rico, Guam, and the U.S. Virgin Islands.
   

Timeliness and Data Quality

OEWS data are updated annually from a sample of about 1.1 million establishments’ data collected over a 3-year period. The OEWS sample is drawn from an administrative list of public and private companies and organizations that is estimated to cover about 95 percent of jobs.4 When employers respond to OEWS, they report from payroll data that are maintained as a part of regular business operations and typically do not require any additional collections or calculations. Payroll data reflect wages paid by employers for a job, which has a commonly accepted definition across employers and industries. This allows for more accurate comparisons of annual wages for a particular job. In contrast, when wages are self-reported by a respondent in person-level or household surveys, the reported data may be difficult to accurately code to a specific industry or detailed occupation, and there is a greater chance of recall error by the respondent. Additionally, OEWS provides specialized respondent instructions for elementary and secondary schools and postsecondary institutions that accommodate the uniqueness of what educators do and how they are paid. These instructions enable precise coding of the occupations commonly found in these industries and a more precise and consistent reporting of wages of workers with a variety of schedules (e.g., school year vs. annual, part time vs. full time).

OEWS uses strict quality control and confidentiality measures and strong sampling and estimation methodologies.5 BLS also partners with state workforce agencies to facilitate the collection, coding, and quality review of OEWS data. States’ highly trained staff contribute local knowledge, establish strong respondent relationships, and provide detailed coding expertise to further ensure the quality of the data. 

After assessing the strengths of the OEWS data, the Digest team focused on the comparability of the data over time to ensure that the data would be best suited for stakeholder needs and have the most utility. First, we checked for changes to the industrial and occupational classifications. Although there were no industrial changes, the occupational classifications of some staff occupations—like librarians, school bus drivers, and school psychologists—did change. In those cases, we only included comparable estimates in the tables.

Second, all new Digest tables include nonoverlapping data years to account for the 3-year collection period. While users cannot compare wages in 2020 with 2021 and 2022, they can explore data from 2016, 2019, and 2022. Third, the Digest tables present estimates for earlier data years to ensure the same estimation method was used to produce estimates over time.6 Finally, we did not identify any geographical, scope, reference period, or wage estimation methodology changes that would impact the information presented in tables. These checks ensured we presented the most reliable and accurate data comparisons.

 

Next Steps  

The use of OEWS data in the Digest is a first step in harnessing the strength of BLS data to provide more relevant and timely data, leading to a more comprehensive understanding of the education workforce. NCES is investigating ways we can partner with BLS to further expand these granular and timely economic data, meeting a National Academies of Sciences, Engineering, and Medicine recommendation to collaborate with other federal agencies and incorporate data from new sources to provide policy-relevant information. We plan to explore the relationship between BLS data and NCES data, such as the Common Core of Data, and increase opportunities for more detailed workforce analyses.

NCES is committed to exploring new data sources that can fill important knowledge gaps and expand the breadth of quality information available to education stakeholders. As we integrate new data sources and develop new tabulations, we will be transparent about our evaluation processes and the advantages and limitations of sources. We will provide specific examples of how information can be used to support evidence-based policymaking. Additionally, NCES will continue to investigate new data sources that inform economic issues related to education. For example, we plan to explore Post-Secondary Employment Outcomes to better understand education-to-employment pathways. We are investigating sources for building and land use data to assess the condition and utilization of school facilities. We are also looking for opportunities to integrate diverse data sources to expand to new areas of the education landscape and to support timelier and more locally informed decisionmaking.
 

How will you use the new Digest tables? Do you have suggestions for new data sources? Let us know at ARIS.NCES@ed.gov.

 

By Josue DeLaRosa, Kristi Donaldson, and Marie Marcum, NCES


[1] See these frequently asked questions for a description of current uses, including economic development planning and to project future labor market needs.

[2] Although most of the K–12 instructional occupations are in the Elementary and Secondary Education industry, both instructional and noninstructional occupations can be found in others (e.g., Colleges, Universities, and Professional Schools; Child Care Services). See Educational Instruction and Library Occupations for more details. For example, preschool teachers differ from some of the other occupations presented in the Digest tables, where most of the employment is in the Child Care Services industry. Preschool teachers included in Digest tables reflect the employment and average annual wage of those who are employed in the Elementary and Secondary Education industry, not all preschool teachers.

[3] Note that estimates do not consider differences that might exist between public and private employers, such as age and experience of workers, work schedules, or cost of living.

[4] This includes a database of businesses reporting to state unemployment insurance (UI) programs. For more information, see Quarterly Census of Employment and Wages.

[5] See Occupational Employment and Wage Statistics for more details on specific methods.

[6] Research estimates are used for years prior to 2021, and Digest tables will not present estimates prior to 2015, the first year of revised research estimates. See OEWS Research Estimates by State and Industry for more information.

Making Meaning Out of Statistics

By Dr. Peggy G. Carr, NCES Commissioner

The United States does not have a centralized statistical system like Canada or Sweden, but the federal statistical system we do have now speaks largely with one voice thanks to the Office of Management and Budget’s U.S. Chief Statistician, the Evidence Act of 2018, and proposed regulations to clearly integrate extensive and detailed OMB statistical policy directives into applications of the Act. The Evidence Act guides the work of the federal statistical system to help ensure that official federal statistics, like those we report here at NCES, are collected, analyzed, and reported in a way that the public can trust. The statistics we put out, such as the number and types of schools in the United States, are the building blocks upon which policymakers make policy, educators plan the future of schooling, researchers develop hypotheses about how education works, and parents and the public track the progress of the education system. They all need to know they can trust these statistics—that they are accurate and unbiased, and uninfluenced by political interests or the whims of the statistical methodologist producing the numbers. Through the Evidence Act and our work with colleagues in the federal statistical system, we’ve established guidelines and standards for what we can say, what we won’t say, and what we can’t say. And they help ensure that we do not drift into territory that is beyond our mission.

Given how much thought NCES and the federal statistical system more broadly have put into the way we talk about our statistics, a recent IES blog post, “Statistically Significant Doesn’t Mean Meaningful,” naturally piqued my interest. I thought back to a question on this very topic that I had on my Ph.D. qualifying statistical comprehensive essay exam. I still remember nailing the answer to that question all these years later. But it’s a tough one—the difference between “statistically significant” and “meaningful” findings—and it’s one that cuts to the heart of the role of statistical agencies in producing numbers that people can trust.

I want to talk about the blog post—the important issue it raises and the potential solution it proposes—as a way to illustrate key differences in how we, as a federal agency producing statistics for the public, approach statistics and how researchers sometimes approach statistics. Both are properly seeking information, but often for very different purposes requiring different techniques. And I want to say I was particularly sympathetic to the issues raised in the blog post given my decades of background managing the National Assessment of Educational Progress (NAEP) and U.S. participation in major international assessments like the Program for International Student Assessment (PISA). In recent years, given NAEP’s large sample size, it is not unheard of for two estimates (e.g., average scores) to round to the same whole number and yet be statistically different. Or, in the case of U.S. PISA results, for scores to be 13 points apart yet not be statistically different. So, the problem that the blog post raises is both long-standing and quite familiar to me.


The Problem   

Here’s the knotty problem the blog post raises: Sometimes, when NCES says there’s no statistically significant difference between two numbers, some people think we are saying there’s no difference between those two numbers at all. For example, on the 2022 NAEP, we estimated an average score of 212 for the Denver Public School District in grade 4 reading. That score for Denver in 2019 was 217. When we reported the 2022 results, we said that there was no statistically significant difference between Denver’s grade 4 reading scores in 2019 and 2022, even though the estimated scores in the two years were 5 points apart. This is because the Denver scores in 2019 and 2022 were estimates based on samples of students, and we could not rule out that, had we assessed every single Denver fourth-grader in both years, we would have found the scores to be, say, 212 in both years. NAEP assessments are like polls: there is uncertainty (a margin of error) around the results. Saying that there was no statistically significant difference between two estimates is not the same as saying that there definitely was no difference. We’re simply saying we don’t have enough evidence to say for sure (or nearly sure) that there was a difference.
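To make the margin-of-error point concrete, here is a minimal sketch of the kind of calculation involved. The 217 and 212 are the estimates from the example above, but the standard errors are hypothetical values chosen purely for illustration, and the `two_sided_p` helper is not an NCES tool, just a standard normal-approximation test.

```python
# Illustrative sketch: the 2019 and 2022 Denver estimates from the post,
# with HYPOTHETICAL standard errors chosen for illustration only.
import math

def two_sided_p(est_a, se_a, est_b, se_b):
    """Two-sided p-value for the difference of two independent estimates,
    using a normal approximation."""
    z = (est_a - est_b) / math.sqrt(se_a**2 + se_b**2)
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| >= |z|) for Z ~ N(0, 1)

p = two_sided_p(217, 2.2, 212, 2.2)  # 2019 vs. 2022 grade 4 reading
print(f"p = {p:.3f}")  # p > .05: the 5-point gap is not significant here
```

With those assumed standard errors, the 5-point difference does not clear the conventional p<.05 bar, which is all a finding of “no statistically significant difference” asserts; with smaller standard errors, the same 5-point gap would be significant.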

Making these kinds of uncertain results clear to the public can be very difficult, and I applaud IES for raising the issue and proposing a solution. Unfortunately, the proposed solution—a “Bayesian” approach that “borrows” data from one state to estimate scores for another and that relies more than we are comfortable with, as a government statistical agency, on the judgment of the statistician running the analysis—can hurt more than help.


Two Big Concerns With a Bayesian Approach for Releasing NAEP Results


Big Concern #1: It “borrows” information across jurisdictions, grades, and subjects.

Big Concern #2: The statistical agency decides the threshold for what’s “meaningful.”

Let me say more about the two big concerns I have about the Bayesian approach proposed in the IES blog post for releasing NAEP results. And, before going into these concerns, I want to emphasize that these are concerns specifically with using this approach to release NAEP results. The statistical theory on which Bayesian methods are based is central to our estimation procedures for NAEP. And you’ll see later that we believe there are times when the Bayesian approach is the right statistical approach for releasing results.


Big Concern #1: The Proposed Approach Borrows Information Across Jurisdictions, Grades, and Subjects

The Bayesian approach proposed in the IES blog post uses data on student achievement in one state to estimate performance in another, performance at grade 8 to estimate performance at grade 4, and performance in mathematics to estimate performance in reading. The approach uses the fact that changes in scores across states often correlate highly with each other. Certainly, when COVID disrupted schooling across the nation, we saw declines in student achievement across the states. In other words, we saw apparent correlations. The Bayesian approach starts from an assumption that states’ changes in achievement correlate with each other and uses that to predict the likelihood that the average score for an individual state or district has increased or decreased. It can do the same thing with correlations in changes in achievement across subjects and across grade levels—which also often correlate highly. This is a very clever approach for research purposes.
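As a rough illustration of what “borrowing” means mechanically, here is a minimal normal-normal shrinkage sketch. It is not the model from the IES blog post; the `shrink` function and every number in it are hypothetical, chosen only to show how a state’s direct estimate gets pulled toward a prediction built from correlated states.

```python
# Minimal shrinkage sketch (hypothetical numbers, not the actual NAEP model).
def shrink(own_change, own_se, borrowed_prediction, prior_sd):
    """Posterior mean when a state's direct estimate (own_change, own_se)
    is combined with a normal prior centered on a prediction borrowed
    from correlated states (borrowed_prediction, prior_sd)."""
    w = prior_sd**2 / (prior_sd**2 + own_se**2)  # weight on the state's own data
    return w * own_change + (1 - w) * borrowed_prediction

# A state's own data say scores fell 5 points (se = 3), while correlated
# states predict only a 2-point decline.
posterior = shrink(-5.0, 3.0, -2.0, 2.0)
print(round(posterior, 2))  # lands between -5 and -2, pulled toward the borrowed value
```

The less precise a state’s own estimate is, the more the reported number is driven by what happened elsewhere, which is exactly the property at issue in this concern.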

However, it is not an approach that official statistics, especially NAEP results, should be built upon. In a country where curricular decisions are made at the local level and reforms are targeted at specific grade levels and in specific subjects, letting grade 8 mathematics achievement in, say, Houston influence what we report for grade 4 reading in, say, Denver, would be very suspect. Also, if we used Houston results to estimate Denver results, or math results to estimate reading results, or grade 8 results to estimate grade 4 results, we might also miss out on chances of detecting interesting differences in results.


Big Concern #2: The Bayesian Approach Puts the Statistical Agency in the Position of Deciding What’s “Meaningful”

A second big concern is the extent to which the proposed Bayesian approach would require the statisticians at NCES to set a threshold for what would be considered a “meaningful” difference. In this method, the statistician sets that threshold and then the statistical model reports out the probability that a reported difference is bigger or smaller than that threshold. As an example, the blog post suggests 3 NAEP scale score points as a “meaningful” change and presents this value as grounded in hard data. But in reality, the definition of a “meaningful” difference is a judgment call. And making the judgment is messy. The IES blog post concedes that this is a major flaw, even as it endorses broad application of these methods: “Here's a challenge: We all know how the p<.05 threshold leads to ‘p-hacking’; how can we spot and avoid Bayesian bouts of ‘threshold hacking,’ where different stakeholders argue for different thresholds that suit their interests?”
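To show the threshold-hacking concern in miniature, here is a hypothetical sketch: given a normal posterior for a score change, the headline probability of a “meaningful” decline depends directly on where the analyst sets the threshold. The `prob_decline` helper and all the numbers are invented for illustration, not taken from the proposed method.

```python
# Hypothetical sketch of the thresholding step: the same posterior yields
# very different headline probabilities as the "meaningful" threshold moves.
import math

def prob_decline(post_mean, post_sd, threshold):
    """P(change < -threshold) for a normal posterior on the score change."""
    z = (-threshold - post_mean) / post_sd
    return 0.5 * math.erfc(-z / math.sqrt(2))  # standard normal CDF at z

post_mean, post_sd = -4.0, 2.0  # posterior: a 4-point decline, sd of 2
for t in (1.0, 3.0, 5.0):
    print(f"threshold {t}: P(meaningful decline) = "
          f"{prob_decline(post_mean, post_sd, t):.2f}")
```

Shifting the threshold from 1 point to 5 points moves the reported probability from "very likely meaningful" to "probably not," with no change in the underlying data, which is why the choice of threshold is so consequential.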

That’s exactly the pitfall to avoid! We certainly do our best to tell our audiences, from lay people to fellow statisticians, what the results “mean.” But we do not tell our stakeholders whether changes or differences in scores are large enough to be deemed "meaningful," as this depends on the context and the particular usage of the results.

This is not to say that we statisticians don’t use judgment in our work. In fact, the “p<.05” threshold for statistical significance that is the main issue the IES blog post has with reporting of NAEP results is a judgment. But it’s a judgment that has been widely established across the statistics and research worlds for decades and is built into the statistical standards of NCES and many other federal statistical agencies. And it’s a judgment specific to statistics: It’s meant to help account for margins of error when investigating whether there is a difference at all—not a judgment about whether the difference exceeds a threshold to count as “meaningful.” By using this widely established standard, readers don’t have to wonder, “is NAEP setting its own standards?” or, perhaps more important, “is NAEP telling us, the public, what is meaningful?” Should the “p<.05” standard be revisited? Maybe. As I note below, this is a question that is often asked in the statistical community. Should NCES and NAEP go on their own and tell our readers what is a meaningful result? No. That’s for our readers to decide.


What Does the Statistical Community Have to Say?

The largest community of statistical experts in the United States—the American Statistical Association (ASA)—has a lot to say on this topic. In recent years, they grappled with the p-value dilemma and put out a statement in 2016 that described misuses of tests of statistical significance. An editorial that later appeared in The American Statistician (an ASA journal) even recommended eliminating the use of statistical significance and the so-called “p-values” on which it is based. As you might imagine, there was considerable debate in the statistical and research community as a result. So in 2019, the president of the ASA convened a task force, which clarified that the editorial was not an official ASA policy. The task force concluded: “P-values are valid statistical measures that provide convenient conventions for communicating the uncertainty inherent in quantitative results. . . . Much of the controversy surrounding statistical significance can be dispelled through a better appreciation of uncertainty, variability, multiplicity, and replicability.”

In other words: Don't throw the baby out with the bathwater!


So, When Should NCES Use a Bayesian Approach?

Although I have been arguing against the use of a Bayesian approach for the release of official NAEP results, there’s much to say for Bayesian approaches when you need them. As the IES blog post notes, the Census Bureau uses a Bayesian method in estimating statistics for small geographic areas where they do not have enough data to make a more direct estimation. NCES has also used similar Bayesian methods for many years, where appropriate. For example, we have used Bayesian approaches to estimate adult literacy rates for small geographic areas for 20 years, dating back to the National Assessment of Adult Literacy (NAAL) of 2003. We use them today in our “small area estimates” of workplace skill levels in U.S. states and counties from the Program for the International Assessment of Adult Competencies (PIAAC). And when we do, we make it abundantly clear that these are indirect, heavily model-dependent estimates.

In other words, the Bayesian approach is a valuable tool in the toolbox of a statistical agency. However, is it the right tool for producing official statistics, where samples, by design, meet the reporting standards for producing direct estimates? The short answer is “no.”


Conclusion

Clearly and accurately reporting official statistics can be a challenge, and we are always looking for new approaches that can help our stakeholders better understand all the data we collect. I began this blog post noting the role of the federal statistical system and our adherence to high standards of objectivity and transparency, as well as our efforts to express our sometimes-complicated statistical findings as accurately and clearly as we can. IES has recently published another blog post describing some great use cases for Bayesian approaches, as well as methodological advances funded by our sister center, the National Center for Education Research. But the key point I took away from this blog post was that the Bayesian approach was great for research purposes, where we expect the researcher to make lots of assumptions (and other researchers to challenge them). That’s research, not official statistics, where we must stress clarity, accuracy, objectivity, and transparency.  

I will end with a modest proposal. Let NCES stick to reporting statistics, including NAEP results, and leave questions about what is meaningful to readers . . . to the readers!