NCES Blog

National Center for Education Statistics

High Job Satisfaction Among Teachers, but Leadership Matters

By Lauren Musu-Gillette

Are teachers satisfied with their jobs? Overall, the answer appears to be yes. However, a recent NCES report highlights that teacher job satisfaction differs by school characteristics.

Newly released data show that at least 9 out of 10 teachers reported that they were satisfied with their jobs in 2003–04, 2007–08, and 2011–12. A higher percentage of private school teachers than public school teachers reported that they were satisfied with their jobs in all of these years.


Percent of teachers reporting they were satisfied with their jobs: School years 2003–04, 2007–08, and 2011–12

NOTE: “Satisfied” teachers are those who responded “strongly agree” or “somewhat agree” to the statement: “I am generally satisfied with being a teacher at this school.”
SOURCE: U.S. Department of Education, National Center for Education Statistics, Schools and Staffing Survey (SASS).


Differences in teacher job satisfaction also emerged based on perceptions of administrative support.[i] In 2011–12, teachers who believed that the administration in their schools was supportive were more likely to be satisfied with their jobs: 95 percent of these teachers were satisfied, 30 percentage points higher than the percentage among teachers who did not feel the administration was supportive. This pattern was seen in private schools as well and is consistent with previous research demonstrating the importance of school administrators to teachers’ working conditions.[ii]


Percent of satisfied teachers, by their perceptions of administrative support: School year 2011–12

NOTE: “Satisfied” teachers are those who responded “strongly agree” or “somewhat agree” to the statement: “I am generally satisfied with being a teacher at this school.”
SOURCE: U.S. Department of Education, National Center for Education Statistics, Schools and Staffing Survey (SASS).


[i] Support was measured by teachers’ agreement or disagreement with the statement “The school administration’s behavior toward the staff is supportive and encouraging.”
[ii] Ladd, H. F. (2011). Teachers’ Perceptions of Their Working Conditions: How Predictive of Planned and Actual Teacher Movement? Educational Evaluation and Policy Analysis, 33(2): 235-261.

Learning to Use the Data: Online Dataset Training Modules

By Andy White

NCES provides a wealth of data online for users to access. However, the breadth and depth of the data can be overwhelming to first-time users and, sometimes, even to more experienced users. To help our users learn how to access, navigate, and use NCES datasets, we’ve developed a series of online training modules.

The Distance Learning Dataset Training (DLDT) resource is an online, interactive tool that allows users to learn about NCES data across the education spectrum and evaluate their suitability for specific research purposes. The DLDT program at NCES has developed a growing number of online training modules for several NCES complex sample survey and administrative datasets. The modules teach users about the intricacies of various datasets, including what the data represent, how the data are collected, the sample design, and considerations for analysis, to help users conduct successful analyses.

The DLDT is also a teaching tool that can be used by individuals both in and out of the classroom to learn about NCES complex sample survey and administrative data collections and appropriate analysis methods.

There are two types of NCES DLDT modules available: common modules and dataset-specific modules. The common modules help users broadly understand NCES data across the education spectrum, introduce complex survey methods, and explain how to acquire NCES micro-data. The dataset-specific modules introduce users to particular datasets. The available modules are listed below, and more information can be found on the DLDT website.


AVAILABLE DLDT MODULES

Common Modules

  • Introduction to the NCES Distance Learning Dataset Training System
  • Introduction to the NCES Datasets
  • Analyzing NCES Complex Survey Data
  • Statistical Analysis of NCES Datasets Employing a Complex Sample Design
  • Acquiring Micro-level NCES Data
  • DataLab Tools: QuickStats, PowerStats, and TrendStats

Dataset-Specific Modules

  • Common Core of Data (CCD)
  • Early Childhood Longitudinal Study Birth Cohort (ECLS-B)
  • Early Childhood Longitudinal Study, Kindergarten Class of 1998–99 (ECLS-K)
  • Early Secondary Longitudinal Studies (1972–2000)
    • National Longitudinal Study of 1972 (NLS-72)
    • High School and Beyond (HS&B)
    • National Education Longitudinal Study of 1988 (NELS:88)
  • Educational Longitudinal Study of 2002 (ELS:2002)
  • High School Longitudinal Study of 2009 (HSLS:09)
  • Integrated Postsecondary Education Data System (IPEDS)
  • National Assessment of Educational Progress (NAEP)
    • Main, State, and Long-Term Trend NAEP
    • NAEP High School Transcript Study (HSTS)
    • National Indian Education Study (NIES)
  • National Household Education Survey Program (NHES)
  • Postsecondary Education Sample Survey Datasets
    • National Postsecondary Student Aid Study (NPSAS)
    • Beginning Postsecondary Student Longitudinal Study (BPS)
    • Baccalaureate and Beyond Longitudinal Study (B&B)
  • Private School Universe Survey (PSS)
  • Schools and Staffing Survey (SASS)
    • Teacher Follow-up Survey (TFS)
    • Principal Follow-up Survey (PFS)
    • Beginning Teacher Longitudinal Study (BTLS)
  • School Survey on Crime and Safety (SSOCS)
  • International Activities Program Studies Datasets
    • Progress in International Reading Literacy Study (PIRLS)
    • Trends in International Mathematics and Science Study (TIMSS)
    • Program for International Student Assessment (PISA)
    • Program for the International Assessment of Adult Competencies (PIAAC)

Modules under Construction

  • Accessing NCES Data via the Web
  • Fast Response Survey System (FRSS)
  • NCES Longitudinal Studies
  • NCES High School Transcript Collections
  • Mapping Education Data (MapED)
  • Postsecondary Education Quick Information System (PEQIS)

Statistical Concepts in Brief: Embracing the Errors

By Lauren Musu-Gillette

EDITOR’S NOTE: This is part of a series of blog posts about statistical concepts that NCES uses as a part of its work.

Many of the important findings in NCES reports are based on data gathered from samples of the U.S. population. These sample surveys provide an estimate of what the data would look like if the full population had participated in the survey, but at a great savings in both time and cost. However, because the entire population is not included, there is always some degree of uncertainty associated with an estimate from a sample survey. For those using the data, knowing the size of this uncertainty is important both for evaluating the reliability of an estimate and for statistical testing to determine whether two estimates are significantly different from one another.

NCES reports standard errors for all data from sample surveys. In addition to providing these values to the public, NCES uses them for statistical testing purposes. Within annual reports such as the Condition of Education, Indicators of School Crime and Safety, and Trends in High School Dropout and Completion Rates in the United States, NCES uses statistical testing to determine whether estimates for certain groups are statistically significantly different from one another. Specific language is tied to the results of these tests. For example, in comparing male and female employment rates in the Condition of Education, the indicator states that the overall employment rate for young males 20 to 24 years old was higher than the rate for young females 20 to 24 years old (72 vs. 66 percent) in 2014. Use of the term “higher” indicates that statistical testing was performed to compare these two groups and the results were statistically significant.
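
For readers who want to see the mechanics, a simplified sketch of this kind of test is below. For two independent estimates, the test statistic is the difference between the estimates divided by the square root of the sum of their squared standard errors; the difference is statistically significant at the .05 level if the statistic exceeds 1.96 in absolute value. The employment rates (72 and 66 percent) come from the indicator described above, but the standard errors in the sketch are hypothetical placeholders rather than published values, and NCES’s actual procedures involve details (such as t distributions) that this sketch omits.

    import math

    def z_test(estimate_1, se_1, estimate_2, se_2):
        """z statistic for the difference between two independent estimates."""
        return (estimate_1 - estimate_2) / math.sqrt(se_1**2 + se_2**2)

    # Employment rates from the indicator above; the standard errors
    # (1.1 and 1.2) are hypothetical placeholders for illustration only.
    z = z_test(72.0, 1.1, 66.0, 1.2)
    print(f"z = {z:.2f}")             # z = 3.69
    if abs(z) > 1.96:                 # 1.96 is the critical value at the .05 level
        print("The difference is statistically significant at the .05 level.")
    else:
        print("No measurable difference at the .05 level.")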

If differences between groups are not statistically significant, NCES uses the phrases “no measurable differences” or “no statistically significant differences at the .05 level.” This is because we do not know for certain that differences do not exist at the population level, just that our statistical tests of the available data were unable to detect differences. This could be because there is in fact no difference, but it could also be due to other reasons, such as a small sample size or large standard errors for a particular group. Heterogeneity, or large amounts of variability, within a sample can also contribute to larger standard errors.

Some of the populations of interest to education stakeholders are quite small, for example, Pacific Islander or American Indian/Alaska Native students. As a consequence, these groups are typically represented by relatively small samples, and their estimates are often less precise than those of larger groups. This lower precision is reflected in larger standard errors. For example, among students who reported having been in 0 physical fights anywhere, the standard error for White students is 0.70, whereas the standard error is 4.95 for Pacific Islander students and 7.39 for American Indian/Alaska Native students. This means that the uncertainty around the estimates for Pacific Islander and American Indian/Alaska Native students is much larger than it is for White students. Because of these larger standard errors, differences between these groups that may seem large may not be statistically significant. When this occurs, NCES analysts may state that large apparent differences are not statistically significant. NCES data users can use standard errors to help make valid comparisons using the data that we release to the public.
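
To illustrate this point, the sketch below reuses the standard errors quoted above (0.70 for White students and 4.95 for Pacific Islander students). The point estimates are hypothetical, chosen only to show that even an 8-percentage-point gap would not reach significance at the .05 level when one group’s standard error is large.

    import math

    # Standard errors quoted above (percentage of students reporting
    # 0 physical fights): White students and Pacific Islander students.
    se_white, se_pi = 0.70, 4.95

    # Hypothetical point estimates, chosen only to illustrate an 8-point
    # gap; these are not the published values.
    est_white, est_pi = 76.0, 68.0

    z = (est_white - est_pi) / math.sqrt(se_white**2 + se_pi**2)
    print(f"z = {z:.2f}")   # z = 1.60, below the 1.96 cutoff
    # Despite the seemingly large gap, this difference would be described
    # as not measurably different at the .05 level.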

Another example of how standard errors can affect whether sample differences are statistically significant can be seen when comparing changes in NAEP scores by state. Between 2013 and 2015, mathematics scores changed by 3 points for fourth-grade public school students in both Mississippi and Louisiana. However, this change was significant only for Mississippi. This is because the standard error for the change in scale scores was 1.2 for Mississippi, whereas it was 1.6 for Louisiana. The larger standard error, and therefore larger degree of uncertainty around the estimate, factors into the statistical tests that determine whether a difference is statistically significant. This difference in standard errors could reflect the size of the samples in Mississippi and Louisiana, or other factors such as the degree to which the assessed students are representative of the populations of their respective states.
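
Because each state’s standard error applies to the same 3-point change, the arithmetic can be checked directly. The sketch below uses the simplified normal-approximation cutoff of 1.96; NCES’s published testing procedures may differ in detail.

    # Both states' scores changed by 3 points between 2013 and 2015;
    # only the standard error of the change differs.
    change = 3.0
    for state, se in [("Mississippi", 1.2), ("Louisiana", 1.6)]:
        z = change / se
        verdict = "significant" if abs(z) > 1.96 else "not significant"
        print(f"{state}: z = {z:.2f} -> {verdict} at the .05 level")
    # Mississippi: z = 2.50 -> significant
    # Louisiana:   z = 1.88 -> not significant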

Researchers may also be interested in using standard errors to compute confidence intervals for an estimate. Stay tuned for a future blog where we’ll outline why researchers may want to do this and how it can be accomplished.

Where can I find information about the condition of education in the United States?

By Grace Kena

The National Center for Education Statistics submits a report to Congress on the condition of education every year by June 1st. The Condition of Education provides a comprehensive look at the state and progress of education in the United States. Although The Condition of Education was first produced in 1975, the origins of the report date back to the creation of the first federal department of education in 1867. Its first major publication, the Annual Report of the Commissioner of Education, covered data for 1869-70. Today’s Condition of Education report is presented to Congress and the White House annually. In addition, the indicators are updated regularly online, with a convenient site designed for mobile devices. By visiting The Condition of Education website, you can access the latest indicators, download the full Congressional report for the current and prior years, and watch short videos about recent findings and highlights.

The Condition of Education covers early childhood through postbaccalaureate education and addresses topics relevant to a broad spectrum of education stakeholders. The report contains text and graphics on dozens of educational indicators that describe student characteristics, participation in special programs, achievement, and completion rates, as well as characteristics of teachers, schools, and colleges. Economic indicators show the success that students have in finding employment after their education and present information on their earnings. In addition to core indicators of perennial interest and supplemental indicators on other special topics, the Condition features spotlight indicators with an in-depth focus on emerging issues and new data. Taken together, these indicators provide valuable information about the progress of our education system in addressing such key policy concerns as improving graduation rates, closing gaps in student achievement, and promoting educational equity.

For more information and access to the indicators, see The Condition of Education 2016. You can also learn more about The Condition of Education in the video below, or see other videos on specific topics of interest on the NCES YouTube Channel.

This blog was originally posted on June 1, 2015, and was updated on May 26, 2016.

What Are the Characteristics of Students Who Have Ever Been Suspended or Expelled From School?

By Lauren Musu-Gillette

Suspensions and expulsions from school are often associated with negative academic outcomes, such as lower levels of achievement and higher dropout rates.[i] Using data from the High School Longitudinal Study of 2009 (HSLS:09), NCES recently published a new spotlight feature in Indicators of School Crime and Safety showing that a greater percentage of students who were suspended or expelled had low engagement in school and were less academically successful.

While there is a large body of research on this topic, this is the first time that the nationally representative HSLS study has been used to examine outcomes for and characteristics of suspended and expelled youth. The comparisons presented here cannot be used to establish a cause-and-effect relationship, but the longitudinal nature of the dataset could provide researchers an analytical path to understanding how these relationships have unfolded over time.

Research shows that students’ attitudes toward school are associated with their academic outcomes, and that schools with a supportive climate have lower rates of delinquency, including suspensions and expulsions.[ii] As part of the HSLS:09 data collection, students reported on their school engagement[iii] and sense of school belonging[iv] in the fall of their ninth-grade year (2009). A greater percentage of students who were suspended or expelled between 2009 and 2012 reported low school engagement entering high school. A similar pattern was seen with regard to a sense of belonging in school.


 Percentage of fall 2009 ninth-graders who were ever suspended or expelled through spring 2012, by school engagement and sense of school belonging: 2012

1 A school engagement scale was constructed based on students’ responses to questions about how frequently they went to class without homework done, without pencil or paper, without books, or late.

2 A school belonging scale was constructed based on the extent to which students agreed or disagreed that they felt safe at school, that they felt proud of being part of the school, that there were always teachers or other adults at school they could talk to if they had a problem, that school was often a waste of time, and that getting good grades was important to them.

SOURCE: U.S. Department of Education, National Center for Education Statistics, High School Longitudinal Study of 2009 (HSLS:09).


The percentages of students who had ever been suspended or expelled were higher for students with lower grade point averages (GPAs). Nearly half of students with a cumulative high school GPA below 2.0 had ever been suspended or expelled, compared with just 11 percent of students with a GPA of 3.0 or higher. Additionally, as of 2013, a higher percentage of students who had not completed high school than of students who had completed high school had ever been suspended or expelled (54 vs. 17 percent).


Percentage of fall 2009 ninth-graders who were ever suspended or expelled through spring 2012, by cumulative high school grade point average and high school completion status: 2013

SOURCE: U.S. Department of Education, National Center for Education Statistics, High School Longitudinal Study of 2009 (HSLS:09).


Differences in the demographic characteristics of students who had ever been suspended or expelled were similar to those found in other datasets, such as the Civil Rights Data Collection (CRDC). Findings from the HSLS study on youth who were ever suspended or expelled include the following:

  • A higher percentage of males (26 percent) than of females (13 percent) were ever suspended or expelled.
  • A higher percentage of Black students (36 percent) than of Hispanic (21 percent), White (14 percent), and Asian students (6 percent) had ever been suspended or expelled.
  • Higher percentages of students of Two or more races (26 percent) and of Hispanic students than of White students had ever been suspended or expelled.
  • A lower percentage of Asian students than of students of any other race/ethnicity with available data had ever been suspended or expelled.

For more information on the characteristics of students who have ever been suspended or expelled, please see the full spotlight in Indicators of School Crime and Safety 2015.


[i] Christle, C.A., Nelson, C.M., and Jolivette, K. (2004). School Characteristics Related to the Use of Suspension. Education and the Treatment of Children, 27(4): 509-526; Skiba, R.J., Michael, R.S., Nardo, A.C., and Peterson, R.L. (2002). The Color of Discipline: Sources of Gender and Racial Disproportionality in School Punishment. Urban Review, 34(4): 317-342.

[ii] Morrison, G.M., Robertson, L., Laurie, B., and Kelly, J. (2002). Protective Factors Related to Antisocial Behavior Trajectories. Journal of Clinical Psychology, 58(3): 277-290; Christle, C.A., Jolivette, K., and Nelson, C.M. (2005). Breaking the School to Prison Pipeline: Identifying School Risk and Protective Factors for Youth Delinquency. Exceptionality, 13(2): 69-88.

[iii] School engagement measured how frequently students went to class without homework done, without pencil or paper, without books, or late.

[iv] Sense of school belonging was measured based on the extent to which students agreed or disagreed that they felt safe at school, that they felt proud of being part of the school, that there were always teachers or other adults at school they could talk to if they had a problem, that school was often a waste of time, and that getting good grades was important to them.