Topics

  • Attendance and Enrollment
    • Enrollment Intensity
    • Patterns
  • Education History
    • Academic Experiences
    • Academic Performance
    • Admissions
    • Assessments
    • Field of Study
    • Outcomes
    • Persistence and Attainment
    • Programs and Courses
    • STEM
    • Transcripts
    • Transfer
  • Educational Transitions
    • High school to college
    • Preschool to elementary school
  • Employment
    • Employment characteristics
    • History
    • Status
    • While enrolled
  • Faculty and Staff
    • Compensation and Benefits
    • Education and Training
    • Experiences and Attitudes
    • Faculty Characteristics
    • Institutional Characteristics
    • Tenure
  • Finances
    • Application
    • Borrowing and Debt
    • Cost and Net Price
    • Debt
    • Expenses
    • Federal Aid
    • Grants
    • Income
    • Loans
    • Non-Federal Aid
    • Support
    • Work study
  • Parents and Family
    • Dependency and Marital Status
    • Parent Expectations, Attitudes, and Beliefs
    • Parental Involvement
    • Student's Parents
    • Student's Spouse and Dependents
  • Pre-K and K-12 Staff
    • K-12 Staff
    • Pre-K Staff
    • School Principals
    • Security Staff
  • School and Institutional Characteristics
    • Admissions and Tuition
    • Attendance and Enrollment
    • Classroom Settings, Sizes, and Organization
    • Crime and Safety
    • Demographics
    • Facilities
    • Institution/School Type, Level, and Sector
    • School Practices and Programs
    • Technology Use
  • School Districts
    • District Characteristics
    • Hiring and Compensation
  • Special Education
    • Programs and Services
    • Teachers and Staffing
  • Staffing
    • Number of Teachers and Staff
    • Vacancies
  • Student Characteristics
    • Demographics
    • Disabilities
    • Military or Public Service
    • Residence and Migration
  • Teachers and Teaching
    • Compensation and Benefits
    • Credentials
    • Demographics
    • Education and Training
    • Experiences, Performance, and Attitudes
    • Professional Development
All Datasets

  • Early Childhood Program Participation
  • National Postsecondary Student Aid Study, Undergraduate
  • National Postsecondary Student Aid Study, Graduate
  • Parent and Family Involvement in Education
  • School Survey on Crime and Safety
Pre-Elementary Education Longitudinal Study (PEELS)
Population: Pre-elementary students who received preschool special education services, as they progressed through the early elementary years
Topics: Preschool special education; programs and services received; transitions between preschool and elementary school; function and performance in preschool, kindergarten, and elementary school
URL: https://ies.ed.gov/ncser/projects/peels

2003/2008 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 3,000

Imputation

Imputation was conducted for selected items in the teacher questionnaire and parent interview data. In general, the item missing rate was low, and the risk of imputation-related bias was judged to be minimal. Because the imputation rate was only 10 percent, the variance inflation due to imputation was also low. Imputation for the supplemental sample increased the amount of data usable for analysis, offsetting the potential risk of bias.

The imputation methods included hot-deck imputation, regression imputation, use of an external data source, and a derivation method based on the internal consistency of interrelated variables.
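
To make one of these methods concrete, the sketch below shows regression imputation in its simplest form: fit a model on complete cases, then predict the missing values. It is a minimal illustration only; the variable names and model are hypothetical, not actual PEELS items.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    # Toy records: impute a missing assessment score from two related items.
    # Column names are illustrative, not actual PEELS variables.
    df = pd.DataFrame({
        "age_months":     [62, 64, 66, 68, 70, 72],
        "teacher_rating": [2.0, 2.5, 3.0, 3.5, 4.0, 4.5],
        "score":          [10.0, 12.0, np.nan, 15.0, np.nan, 19.0],
    })

    predictors = ["age_months", "teacher_rating"]
    observed = df["score"].notna()

    # Fit on complete cases, then predict scores for the incomplete cases.
    model = LinearRegression().fit(df.loc[observed, predictors],
                                   df.loc[observed, "score"])
    df.loc[~observed, "score"] = model.predict(df.loc[~observed, predictors])
    print(df)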

View methodology report · peels_subject.pdf (6.71 MB) · peels_varname.pdf (6.63 MB)
Private School Universe Survey (PSS)
Population: Private schools
Topics: School Affiliation/Associations; Enrollment; Grades Taught; Staffing; General Information
URL: https://nces.ed.gov/surveys/pss/

2011–2012 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 26,983

Weighting

When analyzing the data, the final weights are needed so that estimates reflect the population of private schools. The data from the area frame component were weighted to reflect the sampling rates (probability of selection) of the PSUs. Survey data from both the list and area frame components were adjusted for school nonresponse. The final weight for PSS data items is the product of the Base Weight and the Nonresponse Adjustment Factor, where:
  1. Base Weight is the inverse of the probability of selection of the school. The base weight is equal to one for all list-frame schools. For area-frame schools, the base weight is equal to the inverse of the probability of selecting the PSU in which the school resides.
  2. Nonresponse Adjustment Factor is an adjustment that accounts for school nonresponse. It is the weighted (base weight) ratio of the total eligible in-scope schools (interviewed schools plus noninterviewed schools) to the total responding in-scope schools (interviewed schools) within cells. Noninterviewed and out-of-scope cases are assigned a nonresponse adjustment factor of zero.

Because more information is available for list-frame schools, the cells used to compute the nonresponse adjustment were defined differently for list-frame and area-frame schools. For schools in the list frame, the cells were defined by affiliation (17 categories), locale type (4 categories), grade level (4 categories), Census region (4 categories), and enrollment (3 categories). The nonresponse adjustment cells for area-frame schools were defined by three-level typology (3 categories) and grade level (4 categories). If the number of schools in a cell was fewer than 15, or the nonresponse adjustment factor was greater than 1.5, that cell was collapsed into a similar cell.

The variables used to collapse the cells, and the collapse order, varied according to whether the school came from the list or area frame and whether it was a traditional or k-terminal school. Cells for traditional schools from the list frame were collapsed within enrollment category, locale type, grade level, and Census region. Cells for k-terminal schools from the list frame were collapsed within enrollment category, locale type, Census region, and affiliation. Cells for traditional schools from the area frame were collapsed within grade level and then within three-level typology. Cells for k-terminal schools from the area frame were collapsed within three-level typology.
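
In symbols (the notation here is ours, not taken from the PSS documentation), the final weight for school $i$ in nonresponse adjustment cell $c(i)$ is

$$
w_i \;=\; b_i \, a_{c(i)},
\qquad
b_i \;=\; \frac{1}{\pi_i},
\qquad
a_c \;=\; \frac{\sum_{j \in c,\ j\ \text{eligible}} b_j}{\sum_{j \in c,\ j\ \text{responding}} b_j},
$$

where $\pi_i$ is the school's probability of selection (so $b_i = 1$ for list-frame schools) and noninterviewed and out-of-scope cases receive an adjustment factor of zero.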

Imputation

After data edit processing was complete, some records classified as interviews still contained missing values. These were cases where the respondent had not answered an applicable questionnaire item (and data for the item were not added in the pre-edit, consistency, or logic edit) or where the response had been deleted during editing. Two types of imputation were employed to fill these missing values: donor imputation and analyst imputation.

Donor Imputation

In donor imputation, values were created by extracting data from the record for a sample case (donor) with similar characteristics, using a procedure known as the “sequential nearest neighbor hot deck” (Kalton and Kasprzyk 1982, 1986; Kalton 1983; Little and Rubin 1987; Madow, Olkin, and Rubin 1983). In order to match incomplete records to those with complete data, “imputation” variables that identify certain characteristics of the school that were deemed to be important to the reporting of the data in each item (e.g., religious affiliation, enrollment, school level of instruction) were used. Items were grouped according to the perceived relevance of the imputation variables to the data collected by the item. For example, school level of instruction was used for matching incomplete records and donors to fill item 16 (length of school year) but was not used for item 7 (students by race).
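
As a minimal illustration of the sequential nearest-neighbor idea, the sketch below processes records in file order and fills each missing item from the most recent preceding record whose matching variables agree. All names are illustrative, not actual PSS items; a case with no available donor is left missing, to be handled by the analyst imputation described next.

    import pandas as pd

    # Toy records in file-processing order. 'affiliation' and 'level' play the
    # role of matching variables; 'year_length' is the item being imputed.
    df = pd.DataFrame({
        "affiliation": ["A", "A", "A", "B", "B"],
        "level":       ["elem", "elem", "elem", "sec", "sec"],
        "year_length": [180.0, None, 182.0, None, 175.0],
    })

    def sequential_hot_deck(frame, item, match_vars):
        """Fill each missing item from the most recent preceding record (donor)
        whose matching variables agree with the incomplete record."""
        out = frame[item].copy()
        last_donor = {}
        for idx, row in frame.iterrows():
            key = tuple(row[v] for v in match_vars)
            if pd.isna(row[item]):
                if key in last_donor:
                    out.at[idx] = last_donor[key]  # impute from nearest donor
            else:
                last_donor[key] = row[item]        # record becomes current donor
        return out

    df["imputed"] = sequential_hot_deck(df, "year_length", ["affiliation", "level"])
    print(df)  # row 3 stays missing: no prior donor, so it goes to analyst review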

Analyst Imputation

After donor imputation was completed, some records still had missing values for 64 items. These were cases where imputation failed to create a value because there was no suitable record to use as a donor; where the imputed value was deleted because it was outside the acceptable range for the item or was inconsistent with other data on the same record; or where the religious orientation, purpose, or affiliation (items 14a and 14c) was not reported and no previous PSS information was available.

For these cases, analysts imputed values for the items with missing data: staff reviewed the data record, the sample file record, and the questionnaire, and identified a value consistent with the information from these sources.

pss2012_subject.pdf (377 KB) · pss2012_varname.pdf (368 KB)
Schools and Staffing Survey, Teachers (SASS)
Population: Public and private school teachers
Topics: Class Organization; Education and Training; Certification; Professional Development; Working Conditions; School Climate and Teacher Attitudes; Employment and Background Information
URL: https://nces.ed.gov/surveys/sass

2011–2012 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 42,000

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in non-sampling errors. Data swapping and other forms of perturbation can lead to inconsistencies.
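
As a schematic example (not the procedure approved by the Disclosure Review Board), data swapping can be as simple as exchanging a quasi-identifying attribute between randomly paired records, which preserves the attribute's marginal distribution while perturbing individual records:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)

    # Toy records with hypothetical fields: swap 'county' between a randomly
    # chosen pair, so overall county counts are preserved but no individual
    # record can be taken at face value.
    df = pd.DataFrame({
        "salary": [48, 51, 55, 60, 62, 70],
        "county": ["A", "A", "B", "B", "C", "C"],
    })

    pair = rng.choice(df.index, size=2, replace=False)                # two records
    df.loc[pair, "county"] = df.loc[pair[::-1], "county"].to_numpy()  # swap values
    print(df)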


Imputation

Three types of edits were performed on the SASS data: blanking, consistency, and logic edits. Blanking edits deleted extraneous entries that resulted from respondents failing to follow skip patterns correctly, and assigned “missing” codes to items that respondents should have answered but did not. Consistency edits ensured that responses to related items were consistent and did not contradict other survey data. Finally, logic edits were performed, using information collected from the same questionnaire, from associated questionnaires in the same school or district, or from the sampling frame, to fill missing items where possible.

After the blanking, consistency, and logic edits were completed, any remaining missing items were filled using imputation. Data were imputed from items found on questionnaires of the same type that had certain characteristics in common, or from the aggregated answers of similar questionnaires. These records are called “donor records,”¹ and the method of imputation that draws data from donor records is called “hot-deck”² imputation.
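
A minimal sketch of how the three edit stages might operate on toy records follows; the item names and rules are hypothetical, not the actual SASS edit specifications. Anything still missing at the end would go on to hot-deck imputation.

    import numpy as np
    import pandas as pd

    # Toy teacher records: 'math_hours' applies only when teaches_math == "Yes".
    df = pd.DataFrame({
        "teaches_math": ["Yes", "No",  "Yes", "Yes"],
        "math_hours":   [5.0,   3.0,   40.0,  6.0],
        "other_hours":  [25.0,  28.0,  20.0,  24.0],
        "total_hours":  [30.0,  28.0,  25.0,  np.nan],
    })

    # Blanking edit: delete extraneous entries that violate the skip pattern.
    df.loc[df["teaches_math"] == "No", "math_hours"] = np.nan

    # Consistency edit: blank values that contradict related items.
    df.loc[df["math_hours"] > df["total_hours"], "math_hours"] = np.nan

    # Logic edit: fill a missing item implied by other items on the record.
    derived = df["math_hours"].fillna(0) + df["other_hours"]
    df["total_hours"] = df["total_hours"].fillna(derived)

    # Items still missing after the edits (row 2 here) go to hot-deck imputation.
    print(df)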


Skips and Missing Values

Following data collection, the data are subjected to various consistency and quality control checks before release for use by analysts. One important check is examining all variables with missing data and substituting specific values to indicate the reason for the missing data. For example, an item may not have been applicable to some groups of respondents, a respondent may not have known the answer to a question, or a respondent may have skipped the item entirely. Please consult the survey methodology for more information.
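
For instance, such a check might recode each blank to a reserved value that records why the item is blank. The codes below are hypothetical; the survey's methodology documentation defines the actual values and their meanings.

    import numpy as np
    import pandas as pd

    # Hypothetical reserved codes for the reason an item is missing.
    INAPPLICABLE, SKIPPED = -1, -9

    df = pd.DataFrame({
        "teaches_math": ["Yes", "No", "Yes"],
        "math_hours":   [5.0,   np.nan, np.nan],
    })

    # A blank produced by a legitimate skip is coded "inapplicable"; a blank
    # on an item the respondent should have answered is coded "skipped".
    missing = df["math_hours"].isna()
    df.loc[missing & (df["teaches_math"] == "No"), "math_hours"] = INAPPLICABLE
    df.loc[missing & (df["teaches_math"] == "Yes"), "math_hours"] = SKIPPED
    print(df)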

¹ Donors were selected based on the type of data the donor would supply to the record undergoing imputation. Matching variables were selected based on their close relationship to the item requiring imputation, and a pool of donors was selected based on their answers to these matching variables.

² Goldring, R., Taie, S., Rizzo, L., Colby, D., and Fraser, A. (2013). User’s Manual for the 2011–12 Schools and Staffing Survey, Volume 1: Overview (NCES 2013-330). U.S. Department of Education. Washington, DC: National Center for Education Statistics.

View methodology information · sass12teachpub_subject.pdf (5.60 MB) · sass12teachpub_varname.pdf (5.50 MB)
2011–2012 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 42,000

Perturbation, imputation, and skips and missing values: identical to the notes for the first SASS teacher file above.

View methodology information · sass12teachpriv_subject.pdf (4.90 MB) · sass12teachpriv_varname.pdf (4.90 MB)
2011–2012 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 42,000

Perturbation, imputation, and skips and missing values: identical to the notes for the first SASS teacher file above.

View methodology information · sass12teachcombined_subject.pdf (5.60 MB) · sass12teachcombined_varname.pdf (5.55 MB)
2007–2008 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 38,200

Perturbation, imputation, and skips and missing values: identical to the notes for the first SASS teacher file above.

View methodology information · sass08teachpub_subject.pdf (4.39 MB) · sass08teachpub_varname.pdf (7.01 MB)
2007–2008 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 6,000

Perturbation, imputation, and skips and missing values: identical to the notes for the first SASS teacher file above.

View methodology information · sass08teachpriv_subject.pdf (3.27 MB) · sass08teachpriv_varname.pdf (5.11 MB)
2007–2008 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 44,200

Perturbation, imputation, and skips and missing values: identical to the notes for the first SASS teacher file above.

View methodology information · sass08teachcombined_subject.pdf (3.25 MB) · sass08teachcombined_varname.pdf (1.09 MB)
2003–2004 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 43,200

Perturbation, imputation, and skips and missing values: identical to the notes for the first SASS teacher file above.

View methodology information · sass04teachpub_subject.pdf (4.39 MB) · sass04teachpub_varname.pdf (4.40 MB)
2003–2004 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 8,000

Perturbation, imputation, and skips and missing values: identical to the notes for the first SASS teacher file above.

View methodology information · sass04teachpriv_subject.pdf (6.73 MB) · sass04teachpriv_varname.pdf (3.98 MB)
2003–2004 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 51,200

Perturbation, imputation, and skips and missing values: identical to the notes for the first SASS teacher file above.

View methodology information · sass04teachcombined_subject.pdf (1.15 MB) · sass04teachcombined_varname.pdf (1.17 MB)
1999–2000 · QuickStats: Off · PowerStats: On · TrendStats: Off · Sample size: 52,000

Perturbation, imputation, and skips and missing values: identical to the notes for the first SASS teacher file above.

View methodology information · sass00teachpub_subject.pdf (7.91 MB) · sass00teachpub_varname.pdf (1.39 MB)
1999–2000 · QuickStats: Off · PowerStats: On · TrendStats: Off · Sample size: 52,000

Perturbation, imputation, and skips and missing values: identical to the notes for the first SASS teacher file above.

View methodology information · sass00teachpriv_subject.pdf (6.34 MB) · sass00teachpriv_varname.pdf (1.38 MB)
1999–2000 · QuickStats: Off · PowerStats: On · TrendStats: Off · Sample size: 52,000

Perturbation, imputation, and skips and missing values: identical to the notes for the first SASS teacher file above.

View methodology information · sass00teachcombined_subject.pdf (6.60 MB) · sass00teachcombined_varname.pdf (1.91 MB)
Schools and Staffing Survey, Principals (SASS)
Population: Public and private school principals
Topics: Experience, Training, Education, and Professional Development; Goals and Decision Making; Teacher and Aide Professional Development; School Climate and Safety; Instructional Time; Working Conditions and Principal Perceptions; Teacher and School Performance
URL: https://nces.ed.gov/surveys/sass

2011–2012 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 9,000

Perturbation, imputation, and skips and missing values: identical to the notes for the SASS teacher files above.

View methodology information · sass12prinpub_subject.pdf (1.99 MB) · sass12prinpub_varname.pdf (1.97 MB)
2011–2012 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 9,000

Perturbation, imputation, and skips and missing values: identical to the notes for the SASS teacher files above.

View methodology information · sass12prinpriv_subject.pdf (1.98 MB) · sass12prinpriv_varname.pdf (1.90 MB)
2011–2012 · QuickStats: On · PowerStats: On · TrendStats: Off · Sample size: 9,000

Perturbation, imputation, and skips and missing values: identical to the notes for the SASS teacher files above.

View methodology information | sass12princombined_subject.pdf (1.92 MB) | sass12princombined_varname.pdf (2.05 MB)
Dataset 102 | 2007–2008 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 7,500
View methodology information | sass08prinpub_subject.pdf (2.00 MB) | sass08prinpub_varname.pdf (1.83 MB)
Dataset 103 | 2007–2008 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 1,900
View methodology information | sass08prinpriv_subject.pdf (1.80 MB) | sass08prinpriv_varname.pdf (1.61 MB)
Dataset 104 | 2007–2008 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 9,400
View methodology information | sass08princombined_subject.pdf (1.86 MB) | sass08princombined_varname.pdf (1.61 MB)
Dataset 99 | 2003–2004 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 8,100
View methodology information | sass04prinpub_subject.pdf (553 KB) | sass04prinpub_varname.pdf (2.20 MB)
Dataset 100 | 2003–2004 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 2,400
View methodology information | sass04prinpriv_subject.pdf (466 KB) | sass04prinpriv_varname.pdf (445 KB)
Dataset 101 | 2003–2004 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 10,500
View methodology information | sass04princombined_subject.pdf (449 KB) | sass04princombined_varname.pdf (433 KB)
Dataset 96 | 1999–2000 | QuickStats: Off | PowerStats: On | TrendStats: Off | Sample size: 12,000
View methodology information | sass00prinpub_subject.pdf (1.90 MB) | sass00prinpub_varname.pdf (1.61 MB)
Dataset 97 | 1999–2000 | QuickStats: Off | PowerStats: On | TrendStats: Off | Sample size: 12,000
View methodology information | sass00prinpriv_subject.pdf (1.58 MB) | sass00prinpriv_varname.pdf (1.25 MB)
Dataset 98 | 1999–2000 | QuickStats: Off | PowerStats: On | TrendStats: Off | Sample size: 12,000
View methodology information | sass00princombined_subject.pdf (1.48 MB) | sass00princombined_varname.pdf (1.26 MB)
Schools and Staffing Survey, Schools (SASS)
Population: Public and private schools
Topics: Teacher demand; teacher and principal characteristics; general conditions in schools; principals' and teachers' perceptions of school climate and problems in their schools; teacher compensation; district hiring and retention practices; basic characteristics of the student population
Website: https://nces.ed.gov/surveys/sass
Dataset 59 | 2011–2012 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 9,000
View methodology information | sass12schoolpub_subject.pdf (520 KB) | sass12schoolpub_varname.pdf (530 KB)
Dataset 60 | 2011–2012 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 9,000
View methodology information | sass12schoolpriv_subject.pdf (720 KB) | sass12schoolpriv_varname.pdf (675 KB)
Dataset 61 | 2011–2012 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 9,000
View methodology information | sass12schoolcombined_subject.pdf (1.60 MB) | sass12schoolcombined_varname.pdf (1.55 MB)
Dataset 117 | 2007–2008 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 7,600
View methodology information | sass08schoolspub_subject.pdf (2.23 MB) | sass08schoolspub_varname.pdf (2.52 MB)
Dataset 118 | 2007–2008 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 2,000
View methodology information | sass08schoolspriv_subject.pdf (2.51 MB) | sass08schoolspriv_varname.pdf (3.07 MB)
Dataset 119 | 2007–2008 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 9,500
View methodology information | sass08schoolscombined_subject.pdf (1.92 MB) | sass08schoolscombined_varname.pdf (2.27 MB)
Dataset 114 | 2003–2004 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 8,000
View methodology information | sass04schoolspublic_subject.pdf (2.27 MB) | sass04schoolspublic_varname.pdf (2.30 MB)
Dataset 115 | 2003–2004 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 2,500
View methodology information | sass04schoolsprivate_subject.pdf (3.26 MB) | sass04schoolsprivate_varname.pdf (1.55 MB)
Dataset 116 | 2003–2004 | QuickStats: On | PowerStats: On | TrendStats: Off | Sample size: 10,400
View methodology information | sass04schoolscombined_subject.pdf (1.90 MB) | sass04schoolscombined_varname.pdf (2.00 MB)
Dataset 111 | 1999–2000 | QuickStats: Off | PowerStats: On | TrendStats: Off | Sample size: 9,300
View methodology information | sass00schoolspublic_subject.pdf (2.00 MB) | sass00schoolspublic_varname.pdf (2.07 MB)
Dataset 112 | 1999–2000 | QuickStats: Off | PowerStats: On | TrendStats: Off | Sample size: 2,600
View methodology information | sass00schoolsprivate_subject.pdf (2.89 MB) | sass00schoolsprivate_varname.pdf (3.18 MB)
Dataset 113 | 1999–2000 | QuickStats: Off | PowerStats: On | TrendStats: Off | Sample size: 11,900

Perturbation, Imputation, and Skips and Missing Values: identical to the SASS methodology notes above.

View methodology information: sass00schoolscombined_subject.pdf (2.20 MB); sass00schoolscombined_varname.pdf (2.50 MB)
Schools and Staffing Survey, Districts
SASS
Public school districts
Recruitment and Hiring of Staff, Principal and Teacher Compensation, Student Assignment, Graduation Requirements, Migrant Education, District Performance
https://nces.ed.gov/surveys/sass
Years: 2011-2012; QuickStats: On; PowerStats: On; TrendStats: Off; Sample size: 4,500

Perturbation, Imputation, and Skips and Missing Values: identical to the SASS methodology notes above.

View methodology information: sass12district_subject.pdf (1.15 MB); sass12district_varname.pdf (1.10 MB)
Years: 2007-2008; QuickStats: On; PowerStats: On; TrendStats: Off; Sample size: 4,600

Perturbation, Imputation, and Skips and Missing Values: identical to the SASS methodology notes above.

View methodology information: sass08district_subject.pdf (0.51 MB); sass08district_varname.pdf (0.53 MB)
Years: 2003-2004; QuickStats: On; PowerStats: On; TrendStats: Off; Sample size: 4,400

Perturbation, Imputation, and Skips and Missing Values: identical to the SASS methodology notes above.

View methodology information: sass04district_subject.pdf (0.88 MB); sass04district_varname.pdf (0.93 MB)
Years: 1999-2000; QuickStats: Off; PowerStats: On; TrendStats: Off; Sample size: 4,700

Perturbation, Imputation, and Skips and Missing Values: identical to the SASS methodology notes above.

View methodology information: sass00district_subject.pdf (1.10 MB); sass00district_varname.pdf (0.68 MB)
Schools and Staffing Survey, Library Media Centers
SASS
Library media centers
School information; Facilities, services, and policies; Staffing information; Technology and information literacy; Collections and expenditures
https://nces.ed.gov/surveys/sass
Years: 2011-2012; QuickStats: On; PowerStats: On; TrendStats: Off; Sample size: 7,000

Perturbation, Imputation, and Skips and Missing Values: identical to the SASS methodology notes above.

View methodology information: sass12LMC_subject.pdf (675 KB); sass12LMC_varname.pdf (695 KB)
Years: 2007-2008; QuickStats: On; PowerStats: On; TrendStats: Off; Sample size: 7,300

Perturbation, Imputation, and Skips and Missing Values: identical to the SASS methodology notes above.

View methodology information: sass08LMC_subject.pdf (0.59 MB); sass08LMC_varname.pdf (0.61 MB)
Years: 2003-2004; QuickStats: On; PowerStats: On; TrendStats: Off; Sample size: 7,200

Perturbation, Imputation, and Skips and Missing Values: identical to the SASS methodology notes above.

View methodology information: sass04LMC_subject.pdf (0.80 MB); sass04LMC_varname.pdf (0.81 MB)
Years: 1999-2000; QuickStats: Off; PowerStats: On; TrendStats: Off; Sample size: 7,700

Perturbation, Imputation, and Skips and Missing Values: identical to the SASS methodology notes above.

View methodology information: sass00LMC_subject.pdf (1.16 MB); sass00LMC_varname.pdf (1.18 MB)
School Survey on Crime and Safety
SSOCS
Elementary and secondary schools
School Practices and Programs, Parent and Community Involvement at School, School Security, Staff Training, Limitations on Crime Prevention, Frequency of Crime and Violence, Frequency of hate and gang-related crimes, Disciplinary problems and actions
https://nces.ed.gov/surveys/ssocs
Years: 2015-2016; QuickStats: On; PowerStats: On; TrendStats: On; Sample size: 3,500

Imputation

Completed SSOCS surveys contain some level of item nonresponse after the conclusion of the data collection phase. Imputation procedures were used to impute missing values of key items in SSOCS:2000 and missing values of all items in each subsequent SSOCS. All imputed values are flagged as such.
SSOCS:2004 and Beyond: In subsequent collections, imputation procedures were used to create values for all questionnaire items with missing data. This procedural change from SSOCS:2000 was implemented because the analysis of incomplete datasets may cause different users to arrive at different conclusions, depending on how the missing data are treated. The imputation methods used in SSOCS:2004 and later surveys were tailored to the nature of each survey item. Four methods were used: aggregate proportions, logical, best match, and clerical.
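The four methods are not spelled out further here. Purely as a rough, hypothetical illustration of the "aggregate proportions" idea (all field names and numbers below are invented), a missing subcategory count can be filled from the average share of that subcategory among schools that reported both items:

```python
# Hypothetical sketch of an aggregate-proportions-style fill-in:
# estimate a school's missing count of violent incidents from the
# average share that category represents among reporting schools.
reporting = [
    {"total_incidents": 40, "violent_incidents": 10},
    {"total_incidents": 20, "violent_incidents": 6},
    {"total_incidents": 30, "violent_incidents": 9},
]

# Aggregate proportion among schools that reported both items.
avg_share = sum(r["violent_incidents"] for r in reporting) / sum(
    r["total_incidents"] for r in reporting
)

school = {"total_incidents": 50, "violent_incidents": None}  # item nonresponse
school["violent_incidents"] = round(school["total_incidents"] * avg_share)
```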


Weighting

Data are weighted to compensate for differential probabilities of selection and to adjust for the effects of nonresponse.
Sample weights allow inferences to be made about the population from which the sample units are drawn. Because of the complex nature of the SSOCS sample design, these weights are necessary to obtain population-based estimates, to minimize bias arising from differences between responding and nonresponding schools, and to calibrate the data to known population characteristics in a way that reduces sampling error.

An initial (base) weight was first determined within each stratum by calculating the ratio of the number of schools available in the sampling frame to the number of schools selected. Due to nonresponse, the responding schools did not necessarily constitute a random sample of the schools in the stratum. To reduce the potential for nonresponse bias, weighting classes were determined by using a statistical algorithm similar to CHAID (chi-square automatic interaction detector) to partition the sample such that schools within a weighting class were homogeneous with respect to their probability of responding. The same predictor variables from the SSOCS:2004 CHAID analysis were used for SSOCS:2006: instructional level, region, enrollment size, percent minority, student-to-FTE teaching staff ratio, percentage of students eligible for free or reduced-price lunch, and number of full-time-equivalent (FTE) teachers. When the number of responding schools in a class was too small, the weighting class was combined with another to avoid the possibility of large weights. After the necessary classes were combined, the base weights were adjusted so that the weighted distribution of the responding schools resembled the initial distribution of the total sample.
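As a minimal numeric sketch (stratum names, frame counts, and response totals all invented), the base weight and the nonresponse adjustment described above amount to two ratios:

```python
# Base weights and a nonresponse adjustment within weighting classes.
frame_counts  = {"elementary": 50000, "secondary": 20000}  # schools on frame
sample_counts = {"elementary": 1500,  "secondary": 1000}   # schools sampled

# Base weight: frame count / sample count within each stratum.
base_weight = {s: frame_counts[s] / sample_counts[s] for s in frame_counts}

# Responding schools within each (here: stratum = weighting class).
respondents = {"elementary": 1200, "secondary": 850}

# Nonresponse-adjusted weight: inflate base weights so responding
# schools represent the full sample in their class.
adjusted_weight = {
    s: base_weight[s] * sample_counts[s] / respondents[s] for s in frame_counts
}
```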

The nonresponse-adjusted weights were then poststratified to calibrate the sample to known population totals. Two margins were set up for the poststratification: (1) instructional level and school enrollment size, and (2) instructional level and locale. An iterative process known as the raking ratio adjustment brought the weights into agreement with known control totals. Poststratification works well when the population not covered by the survey is similar to the covered population within each poststratum. Thus, to be effective, the variables that define the poststrata must be correlated with the variables of interest, they must be well measured in the survey, and control totals must be available for the population as a whole. All three requirements were satisfied by these poststratification margins: instructional level, school enrollment, and locale have been shown to be correlated with crime (Miller 2004).
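The raking ratio adjustment itself is a small iterative loop. Below is a hedged sketch (data and margin labels invented, two margins only), not the production algorithm:

```python
import numpy as np

def rake(weights, groups_a, groups_b, totals_a, totals_b, iters=50):
    """Iteratively scale weights so weighted counts match the
    control totals on two margins (raking ratio adjustment)."""
    w = weights.astype(float).copy()
    for _ in range(iters):
        for margin, totals in ((groups_a, totals_a), (groups_b, totals_b)):
            for g, target in totals.items():
                mask = margin == g
                current = w[mask].sum()
                if current > 0:
                    w[mask] *= target / current
    return w

# Invented example: six schools, margins = instructional level and locale.
weights = np.array([30.0, 30.0, 40.0, 40.0, 50.0, 50.0])
level   = np.array(["elem", "elem", "elem", "sec", "sec", "sec"])
locale  = np.array(["city", "rural", "city", "rural", "city", "rural"])
raked = rake(weights, level, locale,
             {"elem": 120, "sec": 130}, {"city": 140, "rural": 110})
```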

Miller, A.K. (2004). Violence in U.S. Public Schools: 2000 School Survey on Crime and Safety (NCES 2004-314R). National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Washington, DC.


ssocs2016_subject.pdf (375 KB); ssocs2016_varname.pdf (375 KB)
Years: 2009-2010; QuickStats: On; PowerStats: On; TrendStats: On; Sample size: 2,600

Imputation and Weighting: identical to the SSOCS methodology notes above.


ssocs2010_subject.pdf (565 KB); ssocs2010_varname.pdf (365 KB)
Years: 2007-2008; QuickStats: On; PowerStats: On; TrendStats: On; Sample size: 2,560

Imputation and Weighting: identical to the SSOCS methodology notes above.


ssocs2008_subject.pdf (1.96 MB); ssocs2008_varname.pdf (912 KB)
Years: 2005-2006; QuickStats: On; PowerStats: On; TrendStats: On; Sample size: 2,720

Imputation and Weighting: identical to the SSOCS methodology notes above.


ssocs2006_subject.pdf (8.82 MB); ssocs2006_varname.pdf (3.58 MB)
Education Longitudinal Study
ELS
Students who were high school sophomores in 2001-02 or high school seniors in 2003-04
Student and Family Background, School and Classroom Characteristics, High School Completion and Dropout Status, Postsecondary Education Choice and Enrollment, Postsecondary Attainment, Employment, Transition to Adult Roles
https://nces.ed.gov/surveys/els2002
Year: 2002; QuickStats: On; PowerStats: On; TrendStats: Off; Sample size: 14,000 to 16,000

Imputation

Stochastic methods were used to impute the missing values for the ELS:2002 third follow-up data. Specifically, a weighted sequential hot-deck (WSHD) statistical imputation procedure (Cox 1980; Iannacchione 1982) using the final analysis weight (F3QWT) was applied to the missing values for the variables in table 12 in the order in which they are listed. The WSHD procedure replaces missing data with valid data from a donor record within an imputation class. In general, variables with lower item nonresponse rates were imputed earlier in the process.
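The WSHD procedure is sequential and controls how often each donor record is used. The sketch below (field names invented) captures only its core idea, sampling donors with probability proportional to the analysis weight within an imputation class:

```python
import random
from collections import defaultdict

def weighted_hot_deck(records, target, class_vars, weight, seed=7):
    """Weighted hot-deck sketch: within an imputation class, donors
    are drawn with probability proportional to their analysis weight,
    approximating the weighted-distribution property of WSHD."""
    rng = random.Random(seed)
    pools = defaultdict(list)
    for r in records:
        if r[target] is not None:
            pools[tuple(r[v] for v in class_vars)].append(r)
    for r in records:
        if r[target] is None:
            donors = pools.get(tuple(r[v] for v in class_vars))
            if donors:
                donor = rng.choices(donors, weights=[d[weight] for d in donors])[0]
                r[target] = donor[target]
    return records

# Invented example: impute earnings within sector classes.
students = [
    {"sector": "public", "w": 120.0, "earnings": 41000},
    {"sector": "public", "w": 300.0, "earnings": 52000},
    {"sector": "public", "w": 80.0,  "earnings": None},  # recipient
]
weighted_hot_deck(students, "earnings", ["sector"], "w")
```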


View methodology report: els2002sophomores_subject.pdf (7.58 MB); els2002sophomores_varname.pdf (7.49 MB)
Year: 2002; QuickStats: Off; PowerStats: On; TrendStats: Off; Sample size: 14,000 to 16,000

Imputation: identical to the ELS:2002 methodology note above.


View methodology report: els2002seniors_subject.pdf (6.22 MB); els2002seniors_varname.pdf (6.16 MB)
High School Longitudinal Study
HSLS
Students who were high school freshmen in the fall of 2009
Student Background, Math and Science Education, Classroom Characteristics, The Changing Environment of High School, Postsecondary Education Choice and Enrollment, Transition to Adult Roles
https://nces.ed.gov/surveys/hsls09
Year: 2009; QuickStats: On; PowerStats: On; TrendStats: Off; Sample size: 23,000

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, HSLS:09 data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in nonsampling errors. Data swapping and other forms of perturbation, implemented to protect respondent confidentiality, can also lead to inconsistencies.


Imputation

Stochastic methods were used to impute the missing values. Specifically, a weighted sequential hot-deck (WSHD) statistical imputation procedure (Cox 1980; Iannacchione 1982) using the final student analysis weight (W2STUDENT) was applied to the missing values. The WSHD procedure replaces missing data with valid data from a donor record (i.e., a first follow-up student item respondent) within an imputation class. In general, variables with lower item nonresponse rates were imputed earlier in the process.


Skips and Missing Values

The HSLS:09 data were edited using procedures developed and implemented for previous studies sponsored by NCES. Following data collection, the information collected in the student instrument was subjected to various quality control checks and examinations. These checks confirmed that the collected data reflected appropriate skip patterns. Another evaluation examined all variables with missing data and substituted specific values to indicate the reason for the missing data. A variety of explanations are possible for missing data.


The table below shows codes for missing values used in HSLS:09. Please consult the methodology report (coming soon) for more information.


Description of missing data codes

Missing data code Description
-1 Don't know
-4 Item not administered: abbreviated interview
-5 Suppressed
-6 Component not applicable
-7 Item legitimate skip/NA
-8 Unit nonresponse
-9 Missing
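In practice, analysts typically recode these reserved negative codes to system-missing before computing statistics. A small pandas sketch (the variable name and values are hypothetical):

```python
import pandas as pd

# Reserved codes from the table above.
MISSING_CODES = [-1, -4, -5, -6, -7, -8, -9]

# Hypothetical extract with one continuous variable.
df = pd.DataFrame({"math_score": [52.3, -8, 47.1, -7, 60.2]})

# Recode reserved values to NaN so descriptive statistics skip them.
df["math_score"] = df["math_score"].replace(MISSING_CODES, float("nan"))
print(df["math_score"].mean())  # 53.2, ignoring the two missing cases
```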
hsls2009_subject.pdf (5.34 MB); hsls2009_varname.pdf (8.91 MB)
Baccalaureate and Beyond
B&B
Bachelor's degree recipients who were surveyed at the time of graduation, one year after graduation, four years after graduation, and ten years after graduation
Outcomes for bachelor's degree recipients, Graduate and professional program access, Labor market experiences, Rates of return on investment in education, Post-baccalaureate education, Teacher preparation, Certifications and licenses, Enrollment while employed
https://nces.ed.gov/surveys/b&b
Years: 2008/2012; QuickStats: On; PowerStats: On; TrendStats: Off; Sample size: 15,500

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, B&B:08/12 data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in nonsampling errors. B&B:08/12 has multiple sources of data for some variables (CPS, NLSDS, student interview, etc.), and reporting differences can occur in each. Data swapping and other forms of perturbation, implemented to protect respondent confidentiality, can also lead to inconsistencies.


Imputation

Variables with missing data were imputed for graduates who were respondents in a study wave.1 The imputation procedures employed a two-step process. The first step is logical imputation.2 If the imputed value could be deduced from logical relationships with other variables, that information was used to impute the value for the recipient. The second step is weighted hot-deck imputation.3 This imputation procedure involves identifying a relatively homogeneous group of observations and, from within the group, selecting a random donor’s value to impute a value for the recipient.
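A compact sketch of the two-step flow (field names invented; the hot-deck step can reuse a routine like the one sketched in the SASS notes above):

```python
def logical_impute(record):
    """Step 1: deduce a missing value from logical relationships
    with other items (field names invented for illustration)."""
    if record["num_dependents"] is None and record["has_dependents"] is False:
        record["num_dependents"] = 0
    return record

def two_step_impute(records, hot_deck=lambda recs: recs):
    """Step 2 falls back to weighted hot-deck for anything logical
    imputation could not resolve (pass in a hot-deck routine)."""
    return hot_deck([logical_impute(r) for r in records])

graduates = [
    {"num_dependents": None, "has_dependents": False},  # resolved logically
    {"num_dependents": None, "has_dependents": True},   # left for hot-deck
]
two_step_impute(graduates)
```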


Skips and Missing Values

The B&B:08/12 data were edited using procedures developed and implemented for previous studies sponsored by NCES, including the base-year study, NPSAS:08. Following data collection, the information collected in the student instrument was subjected to various quality control checks and examinations. These checks confirmed that the collected data reflected appropriate skip patterns. Another evaluation examined all variables with missing data and substituted specific values to indicate the reason for the missing data. A variety of explanations are possible for missing data.


The table below shows codes for missing values used in B&B:08/12. Please consult the First Look for more information.


Description of missing value codes

Missing data code Description
-1 Don’t know
-2 Independent student
-3 Skipped
-9 Missing

1In other words, if a graduate was a respondent in B&B:09, he or she will have no missing data for variables created as part of the B&B:09 wave. Similarly, if a graduate was a respondent in B&B:12, he or she will have no missing data for variables created as part of the B&B:12 wave, but may have missing data for variables created as part of the B&B:09 wave if he or she was not a respondent in B&B:09.

2Logical imputation is a process that aims to infer or deduce the missing values from answers to other questions.

3Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. While each respondent record may be selected for use as a hot-deck donor, the number of times a respondent record is used for imputation is controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed are defined. Imputation classes are developed by using a Chi-squared Automatic Interaction Detector (CHAID) analysis.

View methodology report: bb12_subject.pdf (26.6 MB); bb12_varname.pdf (15.6 MB)
Years: 2000/2001; QuickStats: Off; PowerStats: On; TrendStats: Off; Sample size: 10,000

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, B&B:01 data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in nonsampling errors.


Imputation

All variables with missing data were imputed. The imputation procedures employed a two-step process. The first step is a logical imputation1. If the imputed value could be deduced from the logical relationships with other variables, then that information was used to impute the value for the recipient. The second step is weighted hot-deck imputation.2 This imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.


Skips and Missing Values

Both during and upon completion of data collection, edit checks were performed on the B&B:00/01 data file to confirm that the intended skip patterns were implemented during the interview. Following data collection, the information collected in CATI was subjected to various checks and examinations. These checks were intended to confirm that the database reflected appropriate skip-pattern relationships and different types of missing data by inserting special codes.


The table below lists each missing value code and its associated meaning in the B&B:00/01 interview. For more information, see the Baccalaureate and Beyond Longitudinal Study (B&B:00/01) methodology report.


Description of missing data codes

Missing data code Description
-1 Don’t know (CATI variables), Data not available (CADE variables)
-2 Refused (CATI variables only)
-3 Not applicable (CADE and CATI variables only)
-4 B&B:97 nonrespondent not sampled
-6 Bad data, out of range
-7 Item was not reached (abbreviated and partial CATI interviews)
-8 Item was not reached due to a CATI error
-9 Data missing, reason unknown (CATI variables)
View methodology report: bb01_subject.pdf (3.44 MB); bb01_varname.pdf (3.38 MB)
Years: 1993/2003; QuickStats: On; PowerStats: On; TrendStats: Off; Sample size: 11,200

Imputation

Variables used in cross-sectional estimates in the Baccalaureate and Beyond descriptive reports were imputed. The variables identified for imputation were used in the two B&B:93/03 descriptive reports (Bradburn, Nevill, and Forrest Cataldi 2006; Alt and Henke 2007). The imputations were performed in three steps. First, the interview variables were imputed using the sequential hot-deck imputation method.1 This imputation procedure involves identifying a relatively homogeneous group of observations, and within the group selecting a random donor’s value to impute a value for the recipient. Second, using the interview variables, including the newly imputed values, derived variables were constructed.


Skips and Missing Values

Both during and upon completion of data collection, edit checks were performed on the B&B:93/03 data file to confirm that the intended skip patterns were implemented during the interview. At the conclusion of data collection, special codes were added as needed to indicate the reason for missing data. Missing data within individual data elements can occur for a variety of reasons.


The table below lists each missing value code and its associated meaning in the B&B:93/03 interview. For more information, see the Baccalaureate and Beyond Longitudinal Study (B&B:93/03) methodology report.


Description of missing data codes

Missing data code Description
-1 Missing
-2 Not applicable
-3 Skipped
-4 B&B:97 nonrespondent not sampled
-6 Uncodeable, out of range
-7 Not reached
-8 Item was not reached due to an error
-9 Missing, blank

1Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. Under this methodology, while each respondent record has a chance to be selected for use as a hot-deck donor, the number of times a respondent record can be used for imputation will be controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed were defined. Imputation classes were developed by using a Chi-squared Automatic Interaction Detector (CHAID) analysis.

View methodology report: bb03_subject.pdf (4.56 MB); bb03_varname.pdf (3.98 MB)
Baccalaureate and Beyond, Graduate Students
B&B:GR
Bachelor's degree recipients who were surveyed at the time of graduation, one year after graduation, four years after graduation, and ten years after graduation
Outcomes for bachelor's degree recipients, Graduate and professional program access, Labor market experiences, Rates of return on investment in education, Post-baccalaureate education, Teacher preparation, Certifications and licenses, Enrollment while employed
https://nces.ed.gov/surveys/b&b
Years: 1993/2003; QuickStats: Off; PowerStats: On; TrendStats: Off; Sample size: 4,000

Imputation

Variables used in cross-sectional estimates in the Baccalaureate and Beyond descriptive reports were imputed. The variables identified for imputation were used in the two B&B:93/03 descriptive reports (Bradburn, Nevill, and Forrest Cataldi 2006; Alt and Henke 2007). The imputations were performed in three steps. First, the interview variables were imputed using the sequential hot deck imputation method.1 This imputation procedure involves identifying a relatively homogenous group of observations, and within the group selecting a random donor’s value to impute a value for the recipient. Second, using the interview variables, including the newly imputed variable values, derived variables were constructed.


Skips and Missing Values

Both during and upon completion of data collection, edit checks were performed on the B&B:93/03 data file to confirm that the intended skip patterns were implemented during the interview. At the conclusion of data collection, special codes were added as needed to indicate the reason for missing data. Missing data within individual data elements can occur for a variety of reasons.


The table below lists each missing value code and its associated meaning in the B&B:93/03 interview. For more information, see the Baccalaureate and Beyond Longitudinal Study (B&B:93/03) methodology report.


Description of missing data codes

Missing data code Description
-1 Missing
-2 Not applicable
-3 Skipped
-4 B&B:97 nonrespondent not sampled
-6 Uncodeable, out of range
-7 Not reached
-8 Item was not reached due to an error
-9 Missing, blank

1Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. Under this methodology, while each respondent record has a chance to be selected for use as a hot-deck donor, the number of times a respondent record can be used for imputation will be controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed were defined. Imputation classes were developed by using a Chi-squared Automatic Interaction Detection (CHAID) algorithm.

View methodology report. bb03_subject_students.pdf (9.75 MB); bb03_varname_students.pdf (8.81 MB)
Beginning Postsecondary Students
BPS
Beginning students who were surveyed at the end of their first year, and then three and six years after first starting in postsecondary education.
Students’ persistence, progress and attainment of a degree, Labor force experiences
https://nces.ed.gov/surveys/bps/
Years: 2012/2014. QuickStats: On; PowerStats: On; TrendStats: Off. Sample size: 25,000.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, BPS:12/14 data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in nonsampling errors. BPS:12/14 has multiple sources of data for some variables (CPS, NLSDS, student interview, etc.), and reporting differences can occur in each. Data swapping and other forms of perturbation, implemented to protect respondent confidentiality, can also lead to inconsistencies.
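
As a rough illustration of one such technique, the sketch below swaps the values of a single variable between a small random subset of records. The swap rate, pairing rule, and names are hypothetical; the actual NCES procedures and parameters are confidential.

```python
import numpy as np
import pandas as pd

def swap_values(df, column, swap_rate=0.05, seed=42):
    """Randomly pair a small fraction of records and exchange their
    values on `column`. Univariate distributions (and hence central
    tendency estimates) are unchanged; record-level linkage is broken."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    n_pairs = int(len(out) * swap_rate / 2)
    chosen = rng.choice(out.index.to_numpy(), size=2 * n_pairs, replace=False)
    for a, b in zip(chosen[:n_pairs], chosen[n_pairs:]):
        out.loc[a, column], out.loc[b, column] = out.loc[b, column], out.loc[a, column]
    return out
```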


Imputation

All variables with missing data were imputed. The imputation procedures employed a two-step process. The first step is a logical imputation.1 If the imputed value could be deduced from the logical relationships with other variables, then that information was used to impute the value for the recipient. The second step is weighted hot-deck imputation.2 This imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.


Skips and Missing Values

The BPS:12/14 data were edited using procedures developed and implemented for previous studies sponsored by NCES, including the base-year study, NPSAS:04. Following data collection, the information collected in the student instrument was subjected to various quality control checks and examinations. These checks were to confirm that the collected data reflected appropriate skip patterns. Another evaluation examined all variables with missing data and substituted specific values to indicate the reason for the missing data. A variety of explanations are possible for missing data.


The table below shows codes for missing values used in BPS:12/14. Please consult the methodology report (coming soon) for more information.


Description of missing data codes

Missing data code Description
-1 Not classified
-2 Not applicable
-3 Skipped
-9 Data missing

1Logical imputation is a process that aims to infer or deduce the missing values from answers to other questions.
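
As an illustration, a logical-imputation rule might deduce a missing dependency status from marital status and age. The rule and codings below are hypothetical, not the actual BPS edit specification:

```python
import pandas as pd

def logical_impute_dependency(row):
    """Deduce a missing dependency status from related items. The rule
    and codings are illustrative, not the actual BPS edit rules."""
    if pd.isna(row["dependent"]):
        # Married respondents and those 24 or older can be deduced to be
        # independent (coded 0 here, hypothetically) under federal aid rules.
        if row.get("married") == 1 or row.get("age", 0) >= 24:
            return 0
    return row["dependent"]  # unresolved cases are left for hot-deck imputation

df = pd.DataFrame({"dependent": [1.0, None, None],
                   "married":   [0, 1, 0],
                   "age":       [19, 30, 20]})
df["dependent"] = df.apply(logical_impute_dependency, axis=1)  # row 2 becomes 0
```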

2Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. While each respondent record may be selected for use as a hot-deck donor, the number of times a respondent record is used for imputation is controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed are defined. Imputation classes are developed by using a Chi-squared Automatic Interaction Detection (CHAID) algorithm.

View methodology report. bps2014_subject.pdf (2.10 MB); bps2014_varname.pdf (2.63 MB)
Years: 2004/2009. QuickStats: On; PowerStats: On; TrendStats: Off. Sample size: 16,500.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, BPS:04/09 data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in nonsampling errors. BPS:04/09 has multiple sources of data for some variables (CPS, NLSDS, student interview, etc.), and reporting differences can occur in each. Data swapping and other forms of perturbation, implemented to protect respondent confidentiality, can also lead to inconsistencies.


Imputation

All variables with missing data were imputed. The imputation procedures employed a two-step process. The first step is a logical imputation.1 If the imputed value could be deduced from the logical relationships with other variables, then that information was used to impute the value for the recipient. The second step is weighted hot-deck imputation.2 This imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.


Skips and Missing Values

The BPS:04/09 data were edited using procedures developed and implemented for previous studies sponsored by NCES, including the base-year study, NPSAS:04. Following data collection, the information collected in the student instrument was subjected to various quality control checks and examinations. These checks were to confirm that the collected data reflected appropriate skip patterns. Another evaluation examined all variables with missing data and substituted specific values to indicate the reason for the missing data. A variety of explanations are possible for missing data.


The table below shows codes for missing values used in BPS:04/09. Please consult the methodology report (coming soon) for more information.


Description of missing data codes

Missing data code Description
-2 Independent student
-3 Skipped
-9 Data missing

1Logical imputation is a process that aims to infer or deduce the missing values from answers to other questions.

2Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. While each respondent record may be selected for use as a hot-deck donor, the number of times a respondent record is used for imputation is controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed are defined. Imputation classes are developed by using a Chi-squared Automatic Interaction Detection (CHAID) algorithm.

View methodology report. bps2009_subject.pdf (7.50 MB); bps2009_varname.pdf (6.20 MB)
Years: 1996/2001. QuickStats: On; PowerStats: On; TrendStats: Off. Sample size: 12,000.

Imputation

Logical imputations were performed where items were missing but their values could be implicitly determined.


Skips and Missing Values

During and following data collection, the CATI/CAPI data were reviewed to confirm that the data collected reflected the intended skip-pattern relationships. At the conclusion of data collection, special codes were inserted in the database to reflect the different types of missing data. There are a variety of explanations for missing data within individual data elements.


The table below shows codes for missing values used in BPS:01. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-1 Don’t know
-2 Refused
-3 Legitimate skip (item was intentionally not collected because variable was not applicable to this student)
-6 Bad data, out of range, uncodeable user-exit string
-7 Not reached
-8 Missing, CATI error
-9 Missing

View methodology report. bps2001_subject.pdf (9.20 MB); bps2001_varname.pdf (7.10 MB)
Years: 1990/1994. QuickStats: On; PowerStats: On; TrendStats: Off. Sample size: 6,600.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, BPS:94 data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in nonsampling errors. BPS:94 has multiple sources of data for some variables (CPS, NLSDS, student interview, etc.), and reporting differences can occur in each. Data swapping and other forms of perturbation, implemented to protect respondent confidentiality, can also lead to inconsistencies.


Imputation

All variables with missing data were imputed. The imputation procedures employed a two-step process. The first step is a logical imputation.1 If the imputed value could be deduced from the logical relationships with other variables, then that information was used to impute the value for the recipient. The second step is weighted hot-deck imputation.2 This imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.


Skips and Missing Values

The BPS:94 data were edited using procedures developed and implemented for previous studies sponsored by NCES. Following data collection, the information collected in the student instrument was subjected to various quality control checks and examinations. These checks were to confirm that the collected data reflected appropriate skip patterns. Another evaluation examined all variables with missing data and substituted specific values to indicate the reason for the missing data. A variety of explanations are possible for missing data.



The table below shows codes for missing values used in BPS:94. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-2 Independent student
-3 Skipped
-9 Data missing
View methodology report. bps1994_subject.pdf (4.34 MB); bps1994_varname.pdf (4.17 MB)
National Postsecondary Student Aid Study, Undergraduate
NPSAS:UG
Students who were undergraduates at the time of interview
General demographics, Types of aid and amounts received, Cost of attending college, Combinations of work, study, and borrowing, Enrollment patterns
https://nces.ed.gov/surveys/npsas
Year: 2016. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 89,000.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in non-sampling errors. Data swapping and other forms of perturbation can lead to inconsistencies.


Imputation

All variables with missing data were imputed. The imputation procedures employed a two-step process. The first step is a logical imputation.1 If the imputed value could be deduced from the logical relationships with other variables, then that information was used to impute the value for the recipient. The second step is weighted hot-deck imputation.2 This imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.


Skips and Missing Values

Following data collection, the data are subjected to various consistency and quality control checks before release for use by analysts. One important check is examining all variables with missing data and substituting specific values to indicate the reason for the missing data. For example, an item may not have been applicable to some groups of respondents, a respondent may not have known the answer to a question, or a respondent may have skipped the item entirely.
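
One such check can be expressed as a simple rule: if a gate question was answered "no," its follow-up items should carry the skip code rather than substantive values. A hypothetical sketch (variable names and codings are illustrative, not the NPSAS edit specification):

```python
import pandas as pd

SKIPPED = -3  # reserve code for an item that was legitimately skipped

def check_skip_pattern(df, gate, followups):
    """Return the index of records where the gate item was answered
    'no' (coded 0 here, hypothetically) yet a follow-up item carries a
    substantive value instead of the skip code."""
    gated_off = df[gate] == 0
    bad = gated_off & (df[followups] != SKIPPED).any(axis=1)
    return df.index[bad]

# Hypothetical usage: respondents reporting no loans should have the
# loan-amount item coded -3.
df = pd.DataFrame({"has_loan": [1, 0, 0],
                   "loan_amt": [5000, -3, 1200]})
print(check_skip_pattern(df, "has_loan", ["loan_amt"]))  # flags the last record
```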


The table below shows the set of reserve codes for missing values used in NPSAS 2016. Please consult the data file documentation report for more information.


Description of missing data codes

Missing data code Description
-3 Skipped
-9 Missing

1Logical imputation is a process that aims to infer or deduce the missing values from answers to other questions.

2Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. While each respondent record may be selected for use as a hot-deck donor, the number of times a respondent record is used for imputation is controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed are defined. Imputation classes are developed by using a Chi-squared Automatic Interaction Detection (CHAID) algorithm.

View methodology report. npsas2016ug_subject.pdf (8.7 MB); npsas2016ug_varname.pdf (6.7 MB)
Year: 2012. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 95,000.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in nonsampling errors. Data swapping and other forms of perturbation can lead to inconsistencies.


Missing Values and Imputation

Following data collection, the data are subjected to various consistency and quality control checks before release. One important check is examining all variables with missing data and substituting specific values to indicate the reason for the missing data. For example, an item may not have been applicable to some respondents, a respondent may not have known the answer to a question, or a respondent may have skipped the item entirely.


Except for data that were missing for cases to which they did not apply (e.g., whether a spouse is enrolled in college for unmarried students) and in a small number of items describing institutional characteristics, missing data were imputed using a two-step process. The first step is a logical imputation.1 If a value could be calculated from the logical relationships with other variables, then that information was used to impute the value for the observation with a missing value. The second step is weighted hot deck imputation.2 This procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor's value to impute a value for the observation with a missing value.


The table below shows the set of missing value codes for missing values that were not imputed in NPSAS:12. More information is available from the NPSAS:12 Data File Documentation (http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2014182).


Description of missing value codes

Missing data code Description
-1 Not classified
-2 Not applicable
-3 Skipped
-9 Missing

1Logical imputation is a process that aims to infer or deduce the missing values from values for other items.

2Sequential hot deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent's answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. While each respondent record may be selected for use as a hot deck donor, the number of times a respondent record is used for imputation is controlled. To implement the weighted sequential hot deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed are defined. Imputation classes are developed by using the chi-square automatic interaction detection algorithm.
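
The CHAID step can be approximated with any classification tree: fit a shallow tree on complete cases and treat each leaf as an imputation class. The sketch below uses scikit-learn's CART tree as a stand-in for CHAID and assumes a categorical target and numeric, fully observed predictors; it is illustrative only.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

def build_imputation_classes(df, predictors, target, max_leaves=8):
    """Fit a shallow classification tree on complete cases and treat
    each leaf as an imputation class. CART stands in for CHAID here;
    `target` is assumed categorical, `predictors` numeric and complete."""
    obs = df[target].notna()
    tree = DecisionTreeClassifier(max_leaf_nodes=max_leaves, random_state=0)
    tree.fit(df.loc[obs, predictors], df.loc[obs, target])
    # Assign every record, respondent or not, to a leaf (= class).
    return pd.Series(tree.apply(df[predictors]), index=df.index, name="imp_class")
```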

View methodology report. npsas2012ug_subject.pdf (6.90 MB); npsas2012ug_varname.pdf (5.45 MB)
Year: 2008. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 113,500.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in non-sampling errors. Data swapping and other forms of perturbation can lead to inconsistencies.


Imputation

All variables with missing data were imputed. The imputation procedures employed a two-step process. The first step is a logical imputation.1 If the imputed value could be deduced from the logical relationships with other variables, then that information was used to impute the value for the recipient. The second step is weighted hot-deck imputation.2 This imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.


Skips and Missing Values

Following data collection, the data are subjected to various consistency and quality control checks before release for use by analysts. One important check is examining all variables with missing data and substituting specific values to indicate the reason for the missing data. For example, an item may not have been applicable to some groups of respondents, a respondent may not have known the answer to a question, or a respondent may have skipped the item entirely.


The table below shows the set of reserve codes for missing values used in NPSAS 2008. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-1 Not classified
-2 Not applicable
-6 Out of range
-8 Item was not reached due to an error
-9 Missing

1Logical imputation is a process that aims to infer or deduce the missing values from answers to other questions.

2Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. While each respondent record may be selected for use as a hot-deck donor, the number of times a respondent record is used for imputation is controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed are defined. Imputation classes are developed by using a Chi-squared Automatic Interaction Detection (CHAID) algorithm.

View methodology report. npsas2008ug_subject.pdf (8.10 MB); npsas2008ug_varname.pdf (6.40 MB)
Year: 2004. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 79,900.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in non-sampling errors. Data swapping and other forms of perturbation can lead to inconsistencies.


Imputation

The imputation procedures employed a two-step process. In the first step, the matching criteria and imputation classes that were used to stratify the dataset were identified such that all imputation was processed independently within each class. In the second step, the weighted sequential hot deck process1 was implemented, whereby missing data were replaced with valid data from donor records that match the recipients with respect to the matching criteria. Variables requiring imputation were not imputed simultaneously. However, some variables that were related substantively were grouped together into blocks, and the variables within a block were imputed simultaneously. Basic demographic variables were imputed first using variables with full information to determine the matching criteria. The order in which variables were imputed was also determined to some extent by the substantive nature of the variables. For example, basic demographics (such as age) were imputed first and these were used to process education variables (such as student level and enrollment intensity) which in turn were used to impute the financial aid variables (such as aid receipt and loan amounts).
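
Schematically, this ordering amounts to imputing blocks of variables in sequence, with each pass free to use the variables completed in earlier passes as matching criteria. The block contents below are illustrative, and the imputation routine (e.g., a hot-deck function) is supplied separately:

```python
# Blocks of substantively related variables, imputed in dependency order;
# the block contents are illustrative.
IMPUTATION_BLOCKS = [
    ["age", "gender"],                      # basic demographics first
    ["student_level", "enroll_intensity"],  # education variables next
    ["aid_received", "loan_amount"],        # financial aid last
]

def impute_in_blocks(df, impute_block):
    """Run an imputation routine over each block in turn; variables
    completed in earlier blocks can serve as matching criteria for
    later ones."""
    for block in IMPUTATION_BLOCKS:
        df = impute_block(df, block)
    return df
```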


Skips and Missing Values

Edit checks were performed on the NPSAS:04 student interview data and CADE data, both during and upon completion of data collection, to confirm that the intended skip patterns were implemented in both instruments. At the conclusion of data collection, special codes were added as needed to indicate the reason for missing data. Missing data within individual data elements can occur for a variety of reasons.


The table below shows the set of reserve codes for missing values used in NPSAS 2004. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-1 Not classified
-3 Legitimate skip
-9 Missing

1Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. While each respondent record may be selected for use as a hot-deck donor, the number of times a respondent record is used for imputation is controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed are defined. Imputation classes are developed by using a Chi-squared Automatic Interaction Detection (CHAID) algorithm.

View methodology report. npsas2004ug_subject.pdf (7.75 MB); npsas2004ug_varname.pdf (6.00 MB)
Year: 2000. QuickStats: Off; PowerStats: On; TrendStats: On. Sample size: 50,000.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, NPSAS:00 data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in nonsampling errors.


Imputation

All variables with missing data were imputed. The imputation procedures employed a two-step process. The first step is a logical imputation.1 If the imputed value could be deduced from the logical relationships with other variables, then that information was used to impute the value for the recipient. The second step is weighted hot-deck imputation.2 This imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.


Skips and Missing Values

The NPSAS:00 data were edited using procedures developed and implemented for previous studies sponsored by NCES. Following data collection, the information collected in the student instrument was subjected to various quality control checks and examinations. These checks were to confirm that the collected data reflected appropriate skip patterns. Another evaluation examined all variables with missing data and substituted specific values to indicate the reason for the missing data. A variety of explanations are possible for missing data.


The table below shows codes for missing values used in NPSAS:00. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-2 Independent student
-3 Skipped
-9 Data missing
View methodology report. npsas2000ug_subject.pdf (8.68 MB); npsas2000ug_varname.pdf (7.25 MB)
Year: 1996. QuickStats: Off; PowerStats: On; TrendStats: On. Sample size: 41,500.

Imputation

Values for 22 analysis variables were imputed. The variables were imputed using a weighted hot deck procedure, with the exception of estimated family contribution (EFC), which was imputed through a multiple regression approach. The weighted hot deck imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.
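
Regression imputation of a continuous item such as EFC fits a model on complete cases and predicts the missing values. The sketch below is a simplified stand-in for the actual procedure; the predictors are hypothetical and assumed fully observed:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

def regression_impute(df, target, predictors):
    """Fit OLS on complete cases and predict `target` where it is
    missing. `predictors` are assumed numeric and fully observed."""
    obs = df[target].notna()
    model = LinearRegression().fit(df.loc[obs, predictors], df.loc[obs, target])
    out = df.copy()
    out.loc[~obs, target] = model.predict(df.loc[~obs, predictors])
    return out
```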


Skips and Missing Values

The NPSAS:96 data were edited using procedures developed and implemented for previous studies sponsored by NCES. Following data collection, the information collected in the student instrument was subjected to various quality control checks and examinations. These checks were to confirm that the collected data reflected appropriate skip patterns. Another evaluation examined all variables with missing data and substituted specific values to indicate the reason for the missing data. A variety of explanations are possible for missing data.


The table below shows codes for missing values used in NPSAS:96. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-1 Don't know
-2 Refused
-3 Skipped
-8 Data source not available
-9 Data missing
View methodology report. npsas1996ug_subject.pdf (3.47 MB); npsas1996ug_varname.pdf (3.09 MB)
National Postsecondary Student Aid Study, Graduate
NPSAS:GR
Students who were graduate and first-professional students at the time of interview
General demographics, Types of aid and amounts received, Cost of attending college, Combinations of work, study, and borrowing, Enrollment patterns
https://nces.ed.gov/surveys/npsas
Year: 2016. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 24,000.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in non-sampling errors. Data swapping and other forms of perturbation can lead to inconsistencies.


Imputation

All variables with missing data were imputed. The imputation procedures employed a two-step process. The first step is a logical imputation.1 If the imputed value could be deduced from the logical relationships with other variables, then that information was used to impute the value for the recipient. The second step is weighted hot-deck imputation.2 This imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.


Skips and Missing Values

Following data collection, the data are subjected to various consistency and quality control checks before release for use by analysts. One important check is examining all variables with missing data and substituting specific values to indicate the reason for the missing data. For example, an item may not have been applicable to some groups of respondents, a respondent may not have known the answer to a question, or a respondent may have skipped the item entirely.


The table below shows the set of reserve codes for missing values used in NPSAS 2016. Please consult the data file documentation report for more information.


Description of missing data codes

Missing data code Description
-3 Skipped
-9 Missing

1Logical imputation is a process that aims to infer or deduce the missing values from answers to other questions.

2Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. While each respondent record may be selected for use as a hot-deck donor, the number of times a respondent record is used for imputation is controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed are defined. Imputation classes are developed by using a Chi-squared Automatic Interaction Detection (CHAID) algorithm.

View methodology report. npsas2016gr_subject.pdf (6.6 MB); npsas2016gr_varname.pdf (5.4 MB)
Year: 2012. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 16,000.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in nonsampling errors. Data swapping and other forms of perturbation can lead to inconsistencies.


Missing Values and Imputation

Following data collection, the data are subjected to various consistency and quality control checks before release. One important check is examining all variables with missing data and substituting specific values to indicate the reason for the missing data. For example, an item may not have been applicable to some respondents, a respondent may not have known the answer to a question, or a respondent may have skipped the item entirely.


Except for data that were missing for cases to which they did not apply (e.g., whether a spouse is enrolled in college for unmarried students) and in a small number of items describing institutional characteristics, missing data were imputed using a two-step process. The first step is a logical imputation.1 If a value could be calculated from the logical relationships with other variables, then that information was used to impute the value for the observation with a missing value. The second step is weighted hot deck imputation.2 This procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor's value to impute a value for the observation with a missing value.


The table below shows the set of missing value codes for missing values that were not imputed in NPSAS:12. More information is available from the NPSAS:12 Data File Documentation (http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2014182).


Description of missing value codes

Missing data code Description
-1 Not classified
-2 Not applicable
-3 Skipped
-9 Missing

1Logical imputation is a process that aims to infer or deduce the missing values from values for other items.

2Sequential hot deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent's answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. While each respondent record may be selected for use as a hot deck donor, the number of times a respondent record is used for imputation is controlled. To implement the weighted sequential hot deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed are defined. Imputation classes are developed by using the chi-square automatic interaction detection algorithm.

View methodology report. npsas2012gr_subject.pdf (1.47 MB); npsas2012gr_varname.pdf (4.20 MB)
Year: 2008. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 14,200.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in non-sampling errors. Data swapping and other forms of perturbation can lead to inconsistencies.


Imputation

All variables with missing data were imputed. The imputation procedures employed a two-step process. The first step is a logical imputation.1 If the imputed value could be deduced from the logical relationships with other variables, then that information was used to impute the value for the recipient. The second step is weighted hot-deck imputation.2 This imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.


Skips and Missing Values

Following data collection, the data are subjected to various consistency and quality control checks before release for use by analysts. One important check is examining all variables with missing data and substituting specific values to indicate the reason for the missing data. For example, an item may not have been applicable to some groups of respondents, a respondent may not have known the answer to a question, or a respondent may have skipped the item entirely.


The table below shows the set of reserve codes for missing values used in NPSAS 2008. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-1 Not classified
-3 Not applicable
-6 Out of range
-8 Item was not reached due to an error
-9 Missing

1Logical imputation is a process that aims to infer or deduce the missing values from answers to other questions.

2Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. While each respondent record may be selected for use as a hot-deck donor, the number of times a respondent record is used for imputation is controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed are defined. Imputation classes are developed by using a Chi-squared Automatic Interaction Detection (CHAID) algorithm.

View methodology report. npsas2008gr_subject.pdf (1.02 MB); npsas2008gr_varname.pdf (748 KB)
Year: 2004. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 10,900.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in non-sampling errors. Data swapping and other forms of perturbation can lead to inconsistencies.


Imputation

The imputation procedures employed a two-step process. In the first step, the matching criteria and imputation classes that were used to stratify the dataset were identified such that all imputation was processed independently within each class. In the second step, the weighted sequential hot deck process1 was implemented, whereby missing data were replaced with valid data from donor records that match the recipients with respect to the matching criteria. Variables requiring imputation were not imputed simultaneously. However, some variables that were related substantively were grouped together into blocks, and the variables within a block were imputed simultaneously. Basic demographic variables were imputed first using variables with full information to determine the matching criteria. The order in which variables were imputed was also determined to some extent by the substantive nature of the variables. For example, basic demographics (such as age) were imputed first and these were used to process education variables (such as student level and enrollment intensity) which in turn were used to impute the financial aid variables (such as aid receipt and loan amounts).


Skips and Missing Values

Edit checks were performed on the NPSAS:04 student interview data and CADE data, both during and upon completion of data collection, to confirm that the intended skip patterns were implemented in both instruments. At the conclusion of data collection, special codes were added as needed to indicate the reason for missing data. Missing data within individual data elements can occur for a variety of reasons.


The table below shows the set of reserve codes for missing values used in NPSAS 2004. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-1 Not classified
-3 Legitimate skip
-9 Missing

1Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. While each respondent record may be selected for use as a hot-deck donor, the number of times a respondent record is used for imputation is controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed are defined. Imputation classes are developed by using a Chi-squared Automatic Interaction Detection (CHAID) algorithm.

View methodology report. npsas2004gr_subject.pdf (1.06 MB); npsas2004gr_varname.pdf (787 KB)
Year: 2000. QuickStats: Off; PowerStats: On; TrendStats: On. Sample size: 12,000.

Perturbation

To protect the confidentiality of NCES data that contain information about specific individuals, NPSAS:00 data were subject to perturbation procedures to minimize disclosure risk. Perturbation procedures, which have been approved by the NCES Disclosure Review Board, preserve the central tendency estimates but may result in slight increases in nonsampling errors.


Imputation

All variables with missing data were imputed. The imputation procedures employed a two-step process. The first step is a logical imputation.1 If the imputed value could be deduced from the logical relationships with other variables, then that information was used to impute the value for the recipient. The second step is weighted hot-deck imputation.2 This imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.


Skips and Missing Values

The NPSAS:00 data were edited using procedures developed and implemented for previous studies sponsored by NCES. Following data collection, the information collected in the student instrument was subjected to various quality control checks and examinations. These checks were to confirm that the collected data reflected appropriate skip patterns. Another evaluation examined all variables with missing data and substituted specific values to indicate the reason for the missing data. A variety of explanations are possible for missing data.


The table below shows codes for missing values used in NPSAS:00. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-2 Independent student
-3 Skipped
-9 Data missing
View methodology report. npsas2000gr_subject.pdf (1.71 MB); npsas2000gr_varname.pdf (1.43 MB)
Year: 1996. QuickStats: Off; PowerStats: On; TrendStats: On. Sample size: 7,000.

Imputation

Values for 22 analysis variables were imputed. The variables were imputed using a weighted hot deck procedure, with the exception of estimated family contribution (EFC), which was imputed through a multiple regression approach. The weighted hot deck imputation procedure involves identifying a relatively homogenous group of observations, and, from within the group, selecting a random donor’s value to impute a value for the recipient.


Skips and Missing Values

The NPSAS:96 data were edited using procedures developed and implemented for previous studies sponsored by NCES. Following data collection, the information collected in the student instrument was subjected to various quality control checks and examinations. These checks were to confirm that the collected data reflected appropriate skip patterns. Another evaluation examined all variables with missing data and substituted specific values to indicate the reason for the missing data. A variety of explanations are possible for missing data.


The table below shows codes for missing values used in NPSAS:96. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-1 Don't know
-2 Refused
-3 Skipped
-8 Data source not available
-9 Data missing
View methodology report. npsas1996gr_subject.pdf (2.53 MB); npsas1996gr_varname.pdf (2.13 MB)
National Study of Postsecondary Faculty
NSOPF
Postsecondary faculty
Workload, Equity issues, Involvement in undergraduate teaching, Relationship between teaching and research
https://nces.ed.gov/surveys/nsopf
Year: 2004. QuickStats: On; PowerStats: On; TrendStats: Off. Sample size: 26,100.

Perturbation

A restricted faculty-level data file was created for release to individuals who apply for and meet standards for such data releases. While this file does not include personally identifying information (i.e., name and Social Security number), other data (i.e., institution, Integrated Postsecondary Education Data System [IPEDS] ID, demographic information, and salary data) could be manipulated in such a way as to appear to identify data records corresponding to a particular faculty member. To protect further against such situations, some of the variable values were swapped between faculty respondents. This procedure perturbed the data and added additional uncertainty. Thus, associations made among variable values to identify a faculty respondent may be based on the original data or on edited, imputed, and/or swapped data. For the same reasons, the data from the institution questionnaire were also swapped to avoid data disclosure.


Imputation

Item imputation for the faculty questionnaire was performed in several steps. In the first step, the missing values of gender, race, and ethnicity were filled—using cold-deck imputation1—based on the sampling frame information or institution record data. These three key demographic variables were imputed prior to any other variables since they were used as key predictors for all other variables on the data file. After all logical2 and cold-deck imputation procedures were performed, the remaining variables were imputed using the weighted sequential hot-deck method.3 Initially, variables were separated into two groups: unconditional and conditional variables. The first group (unconditional) consisted of variables that applied to all respondents, while the second group (conditional) consisted of variables that applied to only a subset of the respondents. That is, conditional variables were subject to “gate” questions. After this initial grouping, these groups were divided into finer subgroups. After all variables were imputed, consistency checks were applied to the entire faculty data file to ensure that the imputed values did not conflict with other questionnaire items, observed or imputed. This process involved reviewing all of the logical imputation and editing rules as well.
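
Cold-deck imputation simply fills a missing item from an external source keyed to the same respondent, such as a sampling-frame extract. A hypothetical sketch (all names are illustrative):

```python
import pandas as pd

def cold_deck_impute(survey, frame, key, columns):
    """Fill missing items in the survey file from an external source
    (here a sampling-frame extract) matched on `key`."""
    merged = survey.merge(frame[[key] + columns], on=key,
                          how="left", suffixes=("", "_frame"))
    for col in columns:
        merged[col] = merged[col].fillna(merged[col + "_frame"])
    return merged.drop(columns=[c + "_frame" for c in columns])
```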


Skips and Missing Values

During and following data collection, the data were reviewed to confirm that the data collected reflected the intended skip-pattern relationships. At the conclusion of data collection, special codes were inserted in the database to reflect the different types of missing data. There are a number of explanations for missing data; for example, the item may not have been applicable to certain respondents or a respondent may not have known the answer to the question. With the exception of the not applicable codes, missing data were stochastically imputed. Moreover, for hierarchical analyses and developing survey estimates for faculty members corresponding to sample institutions that provided faculty lists and responded to the institution survey, contextual weights were produced for such subsets of the responding faculty members.


The table below shows codes for missing values used in NSOPF:04. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-3 Legitimate skip
-7 Not reached
-9 Missing

1Cold-deck imputation involves replacing the missing values with data from sources such as data used for sampling frame construction. While resource intensive, these methods often obtain the actual value that is missing. Stochastic imputation methods, such as sequential hot-deck imputation, rely on the observed data to provide replacing values (donors) for records with missing values.

2Logical imputation is a process that aims to infer or deduce the missing values from answers to other questions.

3Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. Under this methodology, while each respondent record has a chance to be selected for use as a hot-deck donor, the number of times a respondent record can be used for imputation will be controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed were defined. Imputation classes were developed by using a Chi-squared Automatic Interaction Detection (CHAID) algorithm.

View methodology report. nsopf04_subject.pdf (1.16 MB); nsopf04_varname.pdf (926 KB)
National Study of Postsecondary Faculty, Institutions
NSOPF
Postsecondary institutions
Faculty tenure policies, Union representation, and Faculty attrition
https://nces.ed.gov/surveys/nsopf
Year: 2004. QuickStats: On; PowerStats: On; TrendStats: Off. Sample size: 900.

Imputation

The imputation process for the missing data from the institution questionnaire involved similar steps to those used for imputation of the faculty data. The missing data for variables were imputed using the weighted sequential hot-deck method.1 Analogous to the imputation process for the faculty data, the variables were partitioned into conditional and unconditional groups. The unconditional variables were sorted by percent missing and then imputed in the order from the lowest percent missing to the highest. The conditional group was partitioned into three subgroups based on the level of conditionality for each variable, and then imputed in that order. The imputation class for both unconditional and conditional variables consisted of the institution sampling stratum, and the sorting variables included the number of full-time and part-time faculty members.
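
Imputing the least-missing items first means each later imputation can draw on a more nearly complete file. The ordering logic reduces to a sort on the missing-data rate; the sketch below assumes an imputation routine is supplied separately, and all names are illustrative:

```python
def impute_by_missing_rate(df, variables, impute_one):
    """Impute `variables` in ascending order of percent missing, so the
    nearly complete items are finished first and can inform the rest."""
    order = sorted(variables, key=lambda v: df[v].isna().mean())
    for var in order:
        df = impute_one(df, var)
    return df
```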


Skips and Missing Values

During and following data collection, the data were reviewed to confirm that the data collected reflected the intended skip-pattern relationships. At the conclusion of data collection, special codes were inserted in the database to reflect the different types of missing data. There are a number of explanations for missing data; for example, the item may not have been applicable to certain respondents or a respondent may not have known the answer to the question. With the exception of the not applicable codes, missing data were stochastically imputed. Moreover, for hierarchical analyses and developing survey estimates for faculty members corresponding to sample institutions that provided faculty lists and responded to the institution survey, contextual weights were produced for such subsets of the responding faculty members.


The table below shows codes for missing values used in NSOPF:04. Please consult the methodology report for more information.


Description of missing data codes

Missing data code Description
-3 Legitimate skip
-7 Not reached
-9 Missing

1Sequential hot-deck imputation involves defining imputation classes, which generally consist of a cross-classification of covariates, and then replacing missing values sequentially from a single pass through the survey data within the imputation classes. When this form of imputation is performed using the sampling weights, the procedure is called weighted sequential hot-deck imputation. This procedure takes into account the unequal probabilities of selection in the original sample to specify the expected number of times a particular respondent’s answer will be used as a donor. These expected selection frequencies are specified so that, over repeated applications of the algorithm, the weighted distribution of all values for that variable—imputed and observed—will resemble that of the target universe. Under this methodology, while each respondent record has a chance to be selected for use as a hot-deck donor, the number of times a respondent record can be used for imputation will be controlled. To implement the weighted sequential hot-deck procedure, imputation classes and sorting variables that are relevant (strong predictors) for each item being imputed were defined. Imputation classes were developed by using a Chi-squared Automatic Interaction Detection (CHAID) algorithm.

View methodology report. nsopf04inst_subject.pdf (543 KB); nsopf04inst_varname.pdf (471 KB)
National Teacher and Principal Survey, Public School Teachers
NTPS
Public school teachers
Class Organization, Education and Training, Certification, Professional Development, Working Conditions, School Climate and Teacher Attitudes, Employment and Background Information
https://nces.ed.gov/surveys/ntps
Years: 2015-2016. QuickStats: On; PowerStats: On; TrendStats: Off. Sample size: 8,300.

Imputation

The NTPS used two main approaches to impute data. First, donor-respondent methods, such as hot-deck imputation, were used. Second, if no suitable donor case could be matched, the few remaining items were imputed using the mean or mode from groups of similar cases. Finally, in rare cases for which imputed values were inconsistent with existing questionnaire data or out of the range of acceptable values, Census Bureau analysts reviewed the items to determine an appropriate value.
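
The mean/mode fallback can be sketched directly; the grouping variables stand in for the "groups of similar cases" and, like all names here, are hypothetical:

```python
import pandas as pd

def mean_mode_impute(df, column, group_vars):
    """Fallback when no donor matches: fill with the group mean for
    numeric items or the group mode for categorical items."""
    grouped = df.groupby(group_vars)[column]
    if pd.api.types.is_numeric_dtype(df[column]):
        fill = grouped.transform("mean")
    else:
        fill = grouped.transform(
            lambda s: s.mode().iloc[0] if not s.mode().empty else None)
    out = df.copy()
    out[column] = out[column].fillna(fill)
    return out
```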

Weighting

Weighting of the sample units was carried out to produce national estimates for public schools, principals, and teachers. The weighting procedures used in NTPS had three purposes: to take into account the school's selection probability; to reduce biases that may result from unit nonresponse; and to make use of available information from external sources to improve the precision of sample estimates.
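
The first two purposes translate into multiplicative weight adjustments: a base weight equal to the inverse of the selection probability, then a nonresponse adjustment within cells. The sketch below shows those two factors only; calibration to external totals, the third purpose, would follow in practice. Column names are hypothetical:

```python
import pandas as pd

def nonresponse_adjusted_weights(df, prob_col, resp_col, cell_col):
    """Base weight = 1 / selection probability; within each adjustment
    cell, respondents' weights are inflated to carry the cell's full
    weighted total (assumes every cell has at least one respondent)."""
    out = df.copy()
    out["base_w"] = 1.0 / out[prob_col]
    cell_total = out.groupby(cell_col)["base_w"].transform("sum")
    resp_total = (out["base_w"] * out[resp_col]).groupby(out[cell_col]).transform("sum")
    out["final_w"] = out[resp_col] * out["base_w"] * cell_total / resp_total
    return out
```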

ntps2016teachers_subject.pdf (7.17 MB); ntps2016teachers_varname.pdf (4.57 MB)
National Teacher and Principal Survey, Public School Principals
NTPS
Public school principals
Experience, Training, Education, and Professional Development, Goals and Decision Making, Teacher and Aide Professional Development, School Climate and Safety, Instructional Time, Working Conditions and Principal Perceptions, Teacher and School Performance
https://nces.ed.gov/surveys/ntps
Years: 2015-2016. QuickStats: On; PowerStats: On; TrendStats: Off. Sample size: 8,300.

Imputation

The NTPS used two main approaches to impute data. First, donor-respondent methods, such as hot-deck imputation, were used. Second, if no suitable donor case could be matched, the few remaining items were imputed using the mean or mode of groups of similar cases. Finally, in rare cases in which imputed values were inconsistent with existing questionnaire data or outside the range of acceptable values, Census Bureau analysts reviewed the items and determined an appropriate value.

Weighting

Weighting of the sample units was carried out to produce national estimates for public schools, principals, and teachers. The weighting procedures used in NTPS had three purposes: to take into account the school's selection probability; to reduce biases that may result from unit nonresponse; and to make use of available information from external sources to improve the precision of sample estimates.

ntps2016principals_subject.pdf (1.53 MB); ntps2016principals_varname.pdf (1.70 MB)
National Teacher and Principal Survey, Public Schools
NTPS
Public schools
Teacher demand, teacher and principal characteristics, general conditions in schools, principals' and teachers' perceptions of school climate and problems in their schools, teacher compensation, district hiring and retention practices, basic characteristics of the student population
https://nces.ed.gov/surveys/ntps
Years: 2015-2016. QuickStats: On; PowerStats: On; TrendStats: Off. Sample size: 8,300.

Imputation

The NTPS used two main approaches to impute data. First, donor-respondent methods, such as hot-deck imputation, were used. Second, if no suitable donor case could be matched, the few remaining items were imputed using the mean or mode of groups of similar cases. Finally, in rare cases in which imputed values were inconsistent with existing questionnaire data or outside the range of acceptable values, Census Bureau analysts reviewed the items and determined an appropriate value.

Weighting

Weighting of the sample units was carried out to produce national estimates for public schools, principals, and teachers. The weighting procedures used in NTPS had three purposes: to take into account the school's selection probability; to reduce biases that may result from unit nonresponse; and to make use of available information from external sources to improve the precision of sample estimates.

ntps2016schools_subject.pdf (2.59 MB); ntps2016schools_varname.pdf (3.35 MB)
Early Childhood Program Participation
ECPP
Children who were enrolled in some type of childcare program
Children's participation, Relative care, Nonrelative care, Center-based care, Head Start and Early Head start programs, time spent in care, number of children and care providers
https://nces.ed.gov/nhes
Years: 2016. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 5,800.

Imputation

Four approaches to imputation were used in the NHES:2016: logic-based imputation, which was used whenever possible; unweighted sequential hot-deck imputation, which was used for the majority of the missing data (i.e., for all variables that were not boundary or sort variables); weighted random imputation, which was used for a small number of variables, including boundary and sort variables; and manual imputation, which was used in a very small number of cases for a small number of variables.

For more information about these approaches, including how boundary and sort variables are defined, please see the NHES:2016 Data File User's Manual.
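
Of the four approaches, weighted random imputation is the simplest to illustrate: a missing value is replaced by a draw from the observed values, with each potential donor selected with probability proportional to its survey weight. The sketch below is a minimal numeric version under that description; it is not the NHES production implementation.

```python
import numpy as np

def weighted_random_impute(values, weights, seed=0):
    """Fill NaNs in a numeric array by drawing observed donors with
    probability proportional to the donors' survey weights."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    observed = ~np.isnan(values)
    p = weights[observed] / weights[observed].sum()
    out = values.copy()
    out[~observed] = rng.choice(values[observed], size=(~observed).sum(), p=p)
    return out
```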

ecpp2016_subject.pdf; ecpp2016_varname.pdf
Years: 2012. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 7,900.

Imputation

Three approaches to imputation were used in the NHES:2012: unweighted sequential hot-deck imputation, which was used for the majority of the missing data (that is, for all variables not required for Interview Status Recode (ISR) classification, as described in chapter 4 of the user's manual); weighted random imputation, which was used for a small number of variables; and manual imputation, which was used in a very small number of cases for most variables.

For more information about these approaches, please see the NHES:2012 Data File User's Manual.

ecpp2012_subject.pdf; ecpp2012_varname.pdf
Adult Training and Education Survey
ATES
Adults who were enrolled in a training or literacy program
Education, Certifications and Licenses, Certificates, Work Experience Programs, Employment, Background
https://nces.ed.gov/nhes
Years: 2016. QuickStats: On; PowerStats: On; TrendStats: Off. Sample size: 47,700.

Imputation

Four approaches to imputation were used in the NHES:2016: logic-based imputation, which was used whenever possible; unweighted sequential hot-deck imputation, which was used for the majority of the missing data (i.e., for all variables that were not boundary or sort variables); weighted random imputation, which was used for a small number of variables, including boundary and sort variables; and manual imputation, which was used in a very small number of cases for a small number of variables.

For more information about these approaches, including how boundary and sort variables are defined, please see the NHES:2016 Data File User's Manual.

ates2016_subject.pdf (2.84 MB); ates2016_varname.pdf (2.90 MB)
Parent and Family Involvement in Education
PFI
Parents and families who were involved in their child's education
Children's schooling, Families and schools, Homework, Family activities, Health, Background, Household
https://nces.ed.gov/nhes
Years: 2016. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 13,500.

Imputation

Four approaches to imputation were used in the NHES:2016: logic-based imputation, which was used whenever possible; unweighted sequential hot-deck imputation, which was used for the majority of the missing data (i.e., for all variables that were not boundary or sort variables); weighted random imputation, which was used for a small number of variables, including boundary and sort variables; and manual imputation, which was used in a very small number of cases for a small number of variables.

For more information about these approaches, including how boundary and sort variables are defined, please see the NHES:2016 Data File User's Manual.

pfi2016_subject.pdf (2.5 MB); pfi2016_varname.pdf (2.1 MB)
Years: 2012. QuickStats: On; PowerStats: On; TrendStats: On. Sample size: 17,200.

Imputation

Three approaches to imputation were used in the NHES:2012: unweighted sequential hot-deck imputation, which was used for the majority of the missing data (that is, for all variables not required for Interview Status Recode (ISR) classification, as described in chapter 4 of the user's manual); weighted random imputation, which was used for a small number of variables; and manual imputation, which was used in a very small number of cases for most variables.

For more information about these approaches, please see the NHES:2012 Data File User's Manual.

pfi2012_subject.pdf; pfi2012_varname.pdf
Dataset | Years | Level | QuickStats | PowerStats | TrendStats
Adult Training and Education Survey: 2016 | 2016 | Adult Education | On | On | Off
Baccalaureate and Beyond: 1993/2003 | 1993/2003 | Postsecondary | On | On | Off
Baccalaureate and Beyond: 1993/2003 | 1993/2003 | Adult Education | On | On | Off
Baccalaureate and Beyond: 1993/2003 Graduate students | 1993/2003 | Postsecondary | Off | On | Off
Baccalaureate and Beyond: 1993/2003 Graduate students | 1993/2003 | Adult Education | Off | On | Off
Baccalaureate and Beyond: 2000/2001 | 2000/2001 | Postsecondary | Off | On | Off
Baccalaureate and Beyond: 2000/2001 | 2000/2001 | Adult Education | Off | On | Off
Baccalaureate and Beyond: 2008/2012 | 2008/2012 | Postsecondary | On | On | Off
Baccalaureate and Beyond: 2008/2012 | 2008/2012 | Adult Education | On | On | Off
Beginning Postsecondary Students: 1990/1994 | 1990/1994 | Postsecondary | On | On | Off
Beginning Postsecondary Students: 1996/2001 | 1996/2001 | Postsecondary | On | On | Off
Beginning Postsecondary Students: 2004/2009 | 2004/2009 | Postsecondary | On | On | Off
Beginning Postsecondary Students: 2012/2014 | 2012/2014 | Postsecondary | On | On | Off
Early Childhood Program Participation: 2012 | 2012 | P-12 | On | On | On (6)
Early Childhood Program Participation: 2016 | 2016 | P-12 | On | On | On (6)
Education Longitudinal Study of 2002 | 2002 | Postsecondary | On | On | Off
Education Longitudinal Study of 2002 | 2002 | P-12 | On | On | Off
High School Longitudinal Study of 2009 | 2009 | Postsecondary | On | On | Off
High School Longitudinal Study of 2009 | 2009 | P-12 | On | On | Off
National Postsecondary Student Aid Study: 1996 Graduate Students | 1996 | Postsecondary | Off | On | On (2)
National Postsecondary Student Aid Study: 1996 Undergraduates | 1996 | Postsecondary | Off | On | On (1)
National Postsecondary Student Aid Study: 2000 Graduate Students | 2000 | Postsecondary | Off | On | On (2)
National Postsecondary Student Aid Study: 2000 Undergraduates | 2000 | Postsecondary | Off | On | On (1)
National Postsecondary Student Aid Study: 2004 Graduate Students | 2004 | Postsecondary | On | On | On (2)
National Postsecondary Student Aid Study: 2004 Undergraduates | 2004 | Postsecondary | On | On | On (1)
National Postsecondary Student Aid Study: 2008 Graduate Students | 2008 | Postsecondary | On | On | On (2)
National Postsecondary Student Aid Study: 2008 Undergraduates | 2008 | Postsecondary | On | On | On (1)
National Postsecondary Student Aid Study: 2012 Graduate Students | 2012 | Postsecondary | On | On | On (2)
National Postsecondary Student Aid Study: 2012 Undergraduates | 2012 | Postsecondary | On | On | On (1)
National Postsecondary Student Aid Study: 2016 Graduate Students | 2016 | Postsecondary | On | On | On (2)
National Postsecondary Student Aid Study: 2016 Undergraduates | 2016 | Postsecondary | On | On | On (1)
National Study of Postsecondary Faculty: 2004 Faculty | 2004 | Postsecondary | On | On | Off
National Study of Postsecondary Faculty: 2004 Institution | 2004 | Postsecondary | On | On | Off
National Teacher and Principal Survey, 2015-16 Public School Principals | 2015-2016 | P-12 | On | On | Off
National Teacher and Principal Survey, 2015-16 Public School Teachers | 2015-2016 | P-12 | On | On | Off
National Teacher and Principal Survey, 2015-16 Public Schools | 2015-2016 | P-12 | On | On | Off
Parent and Family Involvement in Education: 2012 | 2012 | P-12 | On | On | On (7)
Parent and Family Involvement in Education: 2016 | 2016 | P-12 | On | On | On (7)
Pre-Elementary Education Longitudinal Study, Waves 1-5 | 2003/2008 | P-12 | On | On | Off
Private School Universe Survey: 2011-12 | 2011-2012 | P-12 | On | On | Off
School Survey on Crime and Safety: 2005-06 | 2005-2006 | P-12 | On | On | On (3)
School Survey on Crime and Safety: 2007-08 | 2007-2008 | P-12 | On | On | On (3)
School Survey on Crime and Safety: 2009-10 | 2009-2010 | P-12 | On | On | On (3)
School Survey on Crime and Safety: 2015-16 | 2015-2016 | P-12 | On | On | On (3)
Schools and Staffing Survey, Districts: 1999-00 | 1999-2000 | P-12 | Off | On | Off
Schools and Staffing Survey, Districts: 2003-04 | 2003-2004 | P-12 | On | On | Off
Schools and Staffing Survey, Districts: 2007-08 | 2007-2008 | P-12 | On | On | Off
Schools and Staffing Survey, Districts: 2011-12 | 2011-2012 | P-12 | On | On | Off
Schools and Staffing Survey, Library Media Centers: 1999-00 | 1999-2000 | P-12 | Off | On | Off
Schools and Staffing Survey, Library Media Centers: 2003-04 | 2003-2004 | P-12 | On | On | Off
Schools and Staffing Survey, Library Media Centers: 2007-08 | 2007-2008 | P-12 | On | On | Off
Schools and Staffing Survey, Library Media Centers: 2011-12 | 2011-2012 | P-12 | On | On | Off
Schools and Staffing Survey, Public and Private School Principals: 1999-00 | 1999-2000 | P-12 | Off | On | Off
Schools and Staffing Survey, Public and Private School Principals: 2003-04 | 2003-2004 | P-12 | On | On | Off
Schools and Staffing Survey, Public and Private School Principals: 2007-08 | 2007-2008 | P-12 | On | On | Off
Schools and Staffing Survey, Public and Private School Principals: 2011-12 | 2011-2012 | P-12 | On | On | Off
Schools and Staffing Survey, Public and Private Schools: 1999-00 | 1999-2000 | P-12 | Off | On | Off
Schools and Staffing Survey, Public and Private Schools: 2003-04 | 2003-2004 | P-12 | On | On | Off
Schools and Staffing Survey, Public and Private Schools: 2007-08 | 2007-2008 | P-12 | On | On | Off
Schools and Staffing Survey, Public and Private Schools: 2011-12 | 2011-2012 | P-12 | On | On | Off
Schools and Staffing Survey, Public and Private Teachers: 1999-00 | 1999-2000 | P-12 | Off | On | Off
Schools and Staffing Survey, Public and Private Teachers: 2003-04 | 2003-2004 | P-12 | On | On | Off
Schools and Staffing Survey, Public and Private Teachers: 2007-08 | 2007-2008 | P-12 | On | On | Off
Schools and Staffing Survey, Public and Private Teachers: 2011-12 | 2011-2012 | P-12 | On | On | Off
1. Percentage distribution of 1995–96 beginning postsecondary students' highest degree attained by 2001, by work status

Highest degree completed as of June 2001 | Certificate (%) | Associate (%) | Bachelor (%) | Never attained (%) | Total
Total 11.7 9.8 29.8 48.6 100%
Job 1995–96: hours worked per week while enrolled
Did not work while enrolled 14.0 9.8 38.5 37.8 100%
Worked part time 8.9 11.6 35.5 44.0 100%
Worked full time 14.5 7.2 8.3 69.9 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 1995–96 Beginning Postsecondary Students Longitudinal Study, Second Follow-up (BPS:96/01).

Computation by NCES QuickStats on 6/22/2009
2. Percentage distribution of 1995–96 beginning postsecondary students' highest degree attained by 2001, by number of Advanced Placement tests taken

Persistence and completion at any institution as of 2000–01 | Never attained (%) | Certificate (%) | Associate (%) | Bachelor (%) | Total
Total 48.6 11.7 9.9 29.8 100%
Number of Advanced Placement tests taken
0 51.1 7.7 12.1 29.1 100%
1 38.1 2.6 6.0 53.4 100%
2 33.6 0.4 3.4 62.6 100%
Three or more 13.8 0.1 1.4 84.8 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 1995–96 Beginning Postsecondary Students Longitudinal Study, Second Follow-up (BPS:96/01).

Computation by NCES QuickStats on 6/22/2009
3. Percentage of beginning postsecondary students who received Pell grants, by race/ethnicity: 1995–96

Pell Grant amount 1995–96 (%>0)
Total 26.4
Race/ethnicity
White, non-Hispanic 19.0
Black, non-Hispanic 49.3
Hispanic 42.4
Asian/Pacific Islander 35.5
American Indian/Alaska Native 33.2
Other ‡
‡ Reporting standards not met.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 1995–96 Beginning Postsecondary Students Longitudinal Study, Second Follow-up (BPS:96/01).

Computation by NCES QuickStats on 3/10/2009
4. Percentage distribution of 1995–96 beginning postsecondary students' grade point average (GPA) through 2001, by income percentile rank

Cumulative grade point average (GPA) as of 2001 | Mostly A's (%) | A's and B's (%) | Mostly B's (%) | B's and C's (%) | Mostly C's (%) | C's and D's (%) | Mostly D's or below (%) | Total
Total 13.3 31.8 35.3 14.4 4.4 0.7 0.1 100%
Income percentile rank 1994
1-25 13.1 28.2 37.8 14.7 4.7 1.4 0.2 100%
26-50 13.5 30.2 37.3 12.8 5.8 0.3 0.2 100%
51-75 12.9 36.1 33.1 14.0 3.4 0.4 0.0 100%
More than 75 13.7 32.7 33.0 16.3 3.7 0.7 0.0 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 1995–96 Beginning Postsecondary Students Longitudinal Study, Second Follow-up (BPS:96/01).

Computation by NCES QuickStats on 6/22/2009
5. Percentage distribution of 1995–96 beginning postsecondary students' persistence at any institution through 2001, by gender

Persistence at any institution through 2001 | Attained, still enrolled (%) | Attained, not enrolled (%) | Never attained, still enrolled (%) | Never attained, not enrolled (%) | Total
Total 5.9 45.5 14.9 33.7 100%
Gender
Male 5.9 41.8 15.8 36.5 100%
Female 5.8 48.5 14.2 31.5 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 1995–96 Beginning Postsecondary Students Longitudinal Study, Second Follow-up (BPS:96/01).

Computation by NCES QuickStats on 6/22/2009
1. Percent of graduate students who borrowed, by type of graduate program: 2003–04

Loans: total student loans, all sources (%>0)
Total 40.0
Graduate study: program
Business administration (MBA) 39.1
Education (any master's) 34.8
Other master of arts (MA) 41.3
Other master of science (MS) 31.8
Other master's degree 49.3
PhD except in education 19.9
Education (any doctorate) 27.1
Other doctoral degree 49.5
Medicine (MD) 77.3
Other health science degree 81.7
Law (LLB or JD) 81.0
Theology (MDiv, MHL, BD) 30.0
Post-baccalaureate certificate 30.1
Not in a degree program 28.0
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 National Postsecondary Student Aid Study (NPSAS:04).

NOTICE OF REVISIONS: The NPSAS:04 weights were revised in June 2009. The revised weights will produce 2003-04 estimates that differ somewhat from those in any tables and publications produced before June 2009. See the description for the total Stafford loan variable (STAFFAMT) for details.

Computation by NCES QuickStats on 8/25/2009
2. Percentage of graduate students with assistantships, by graduate field of study: 2003–04

Assistantships (%>0)
Total 15.3
Graduate study: major field
Humanities 20.8
Social/behavioral sciences 31.7
Life sciences 47.4
Math/Engineering/Computer science 37.9
Education 7.6
Business/management 7.9
Health 10.3
Law 5.8
Others 23.8
Undeclared or not in a degree program 5.4
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 National Postsecondary Student Aid Study (NPSAS:04).

NOTICE OF REVISIONS: The NPSAS:04 weights were revised in June 2009. The revised weights will produce 2003-04 estimates that differ somewhat from those in any tables and publications produced before June 2009. See the description for the total Stafford loan variable (STAFFAMT) for details.

Computation by NCES QuickStats on 8/25/2009
3. Percentage distribution of graduate students' student/employee role, by graduate field of study: 2003–04

Work: primarily student or employee | A student working to meet expenses (%) | An employee enrolled in school (%) | No job (%) | Total
Total 35.8 45.1 19.1 100%
Graduate study: major field
Humanities 44.9 35.9 19.2 100%
Social/behavioral sciences 58.9 24.6 16.5 100%
Life sciences 61.0 20.7 18.3 100%
Math/Engineering/Computer science 47.4 38.3 14.3 100%
Education 26.3 63.3 10.4 100%
Business/management 24.8 61.8 13.3 100%
Health 39.4 19.0 41.6 100%
Law 39.6 11.6 48.8 100%
Others 47.0 38.5 14.5 100%
Undeclared or not in a degree program 20.5 67.3 12.2 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 National Postsecondary Student Aid Study (NPSAS:04).

NOTICE OF REVISIONS: The NPSAS:04 weights were revised in June 2009. The revised weights will produce 2003-04 estimates that differ somewhat from those in any tables and publications produced before June 2009. See the description for the total Stafford loan variable (STAFFAMT) for details.

Computation by NCES QuickStats on 8/25/2009
4. Percentage of graduate students who have ever borrowed, by institution type: 2003–04

Total loan debt (cumulative) (%>0)
Total 65.2
Type of 4-year institution
Public 4-year nondoctorate 61.4
Public 4-year doctorate 60.6
Private not-for-profit 4-yr nondoctorate 61.6
Private not-for-profit 4-year doctorate 71.3
Private for-profit 4-year 85.9
Attended more than one institution 68.9
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 National Postsecondary Student Aid Study (NPSAS:04).

NOTICE OF REVISIONS: The NPSAS:04 weights were revised in June 2009. The revised weights will produce 2003-04 estimates that differ somewhat from those in any tables and publications produced before June 2009. See the description for the total Stafford loan variable (STAFFAMT) for details.

Computation by NCES QuickStats on 8/25/2009
5. Average loan amount for graduate students, by parents' education: 2003–04

Loans: total student loans, all sources (Mean[0])
Total 6,302.0
Parent's highest education
Do not know parent's education level 7,677.5
High school diploma or less 5,878.7
Some college 6,016.3
Bachelor's degree 5,794.3
Master's degree or higher 7,185.9
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 National Postsecondary Student Aid Study (NPSAS:04).

NOTICE OF REVISIONS: The NPSAS:04 weights were revised in June 2009. The revised weights will produce 2003-04 estimates that differ somewhat from those in any tables and publications produced before June 2009. See the description for the total Stafford loan variable (STAFFAMT) for details.

Computation by NCES QuickStats on 8/25/2009
1. Percentage of undergraduate students who applied for aid, by parents' income: 2003–04

Aid: applied for federal aid | Yes (%) | No (%) | Total
Total 58.3 41.7 100%
Income: dependent student household income
Less than $32,000 78.7 21.3 100%
$32,000-59,999 66.6 33.4 100%
$60,000-91,999 56.9 43.1 100%
$92,000 or more 47.1 52.9 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 National Postsecondary Student Aid Study (NPSAS:04).

NOTICE OF REVISIONS: The NPSAS:04 weights were revised in June 2009. The revised weights will produce 2003-04 estimates that differ somewhat from those in any tables and publications produced before June 2009. See the description for the total Stafford loan variable (STAFFAMT) for details.

Computation by NCES QuickStats on 8/25/2009
2. Percentage distribution of undergraduates' cumulative grade point average (GPA) categories, by major field of study: 2003–04

Cumulative grade point average (GPA) as of 2003–04 | Less than 2.75 (%) | 2.75–3.74 (%) | 3.75 or higher (%) | Total
Total 34.4 49.0 16.7 100%
College study: major
Humanities 35.9 50.4 13.6 100%
Social/behavioral sciences 35.0 52.1 12.8 100%
Life sciences 34.9 52.7 12.4 100%
Physical sciences 31.5 54.3 14.2 100%
Math 29.1 55.3 15.6 100%
Computer/information science 34.0 48.1 17.9 100%
Engineering 37.4 48.1 14.5 100%
Education 31.9 52.6 15.5 100%
Business/management 35.6 49.3 15.1 100%
Health 32.2 50.7 17.0 100%
Vocational/technical 33.3 47.1 19.6 100%
Other technical/professional 36.7 49.9 13.4 100%
Undeclared or not in a degree program 33.2 44.1 22.8 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 National Postsecondary Student Aid Study (NPSAS:04).

NOTICE OF REVISIONS: The NPSAS:04 weights were revised in June 2009. The revised weights will produce 2003-04 estimates that differ somewhat from those in any tables and publications produced before June 2009. See the description for the total Stafford loan variable (STAFFAMT) for details.

Computation by NCES QuickStats on 8/25/2009
3. Mean net price of attendance for undergraduate students, by type of institution: 2003–04

Net price after all aid (Mean[0])
Total 6,656.0
Institution: type
Public less-than-2-year 5,616.5
Public 2-year 4,716.3
Public 4-year nondoctorate 6,253.5
Public 4-year doctorate 7,564.1
Private not-for-profit less-than-4-year 7,382.3
Private not-for-profit 4-yr nondoctorate 9,208.7
Private not-for-profit 4-year doctorate 14,812.2
Private for-profit less-than-2-year 7,842.9
Private for-profit 2 years or more 6,737.6
Attended more than one institution ‡
‡ Reporting standards not met.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 National Postsecondary Student Aid Study (NPSAS:04).

NOTICE OF REVISIONS: The NPSAS:04 weights were revised in June 2009. The revised weights will produce 2003-04 estimates that differ somewhat from those in any tables and publications produced before June 2009. See the description for the total Stafford loan variable (STAFFAMT) for details.

Computation by NCES QuickStats on 8/25/2009
4. Percentage distribution of undergraduates' parents' highest level of education, by type of institution: 2003–04

Parent's highest education | High school or less (%) | Some college (%) | Bachelor's degree or higher (%) | Total
Total 37.1 21.6 41.3 100%
Institution: type
Public less-than-2-year 54.2 17.4 28.4 100%
Public 2-year 43.3 23.9 32.7 100%
Public 4-year nondoctorate 28.7 20.5 50.8 100%
Public 4-year doctorate 46.9 18.8 34.2 100%
Private not-for-profit less than 4-year 29.6 18.1 52.3 100%
Private not-for-profit 4-year nondoctorate 55.6 17.4 27.0 100%
Private not-for-profit 4-year doctorate 53.8 20.2 25.9 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 National Postsecondary Student Aid Study (NPSAS:04).

NOTICE OF REVISIONS: The NPSAS:04 weights were revised in June 2009. The revised weights will produce 2003-04 estimates that differ somewhat from those in any tables and publications produced before June 2009. See the description for the total Stafford loan variable (STAFFAMT) for details.

Computation by NCES QuickStats on 10/14/2009
5. Average amount of Pell grants received by undergraduates, by income and dependency status: 2003–04

Grants: Pell Grants (Avg>0)
Total 2,449.7
Income: categories by dependency status
Dependent: Less than $10,000 3,242.2
Dependent: $10,000-$19,999 3,176.1
Dependent: $20,000-$29,999 2,715.0
Dependent: $30,000-$39,999 1,958.3
Dependent: $40,000-$49,999 1,508.6
Dependent: $50,000-$59,999 1,309.0
Dependent: $60,000-$69,999 1,241.7
Dependent: $70,000-$79,999 1,404.4
Dependent: $80,000-$99,999 ‡
Dependent: $100,000 or more ‡
Independent: Less than $5,000 2,860.3
Independent: $5,000-$9,999 2,642.9
Independent: $10,000-$19,999 2,291.7
Independent: $20,000-$29,999 2,328.3
Independent: $30,000-$49,999 1,561.9
Independent: $50,000 or more 1,124.3
‡ Reporting standards not met.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 National Postsecondary Student Aid Study (NPSAS:04).

NOTICE OF REVISIONS: The NPSAS:04 weights were revised in June 2009. The revised weights will produce 2003-04 estimates that differ somewhat from those in any tables and publications produced before June 2009. See the description for the total Stafford loan variable (STAFFAMT) for details.

Computation by NCES QuickStats on 8/25/2009
1. Percentage distribution of instructional faculty and staff's employment status, by institution type: Fall 2003

Employment status at this job | Full time (%) | Part time (%) | Total
Total 56.3 43.7 100%
Institution: type and control
Public doctoral 77.8 22.2 100%
Private not-for-profit doctoral 68.7 31.3 100%
Public master's 63.3 36.7 100%
Private not-for-profit master's 45.0 55.0 100%
Private not-for-profit baccalaureate 63.2 36.8 100%
Public associates 33.3 66.7 100%
Other 49.2 50.8 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004 National Study of Postsecondary Faculty (NSOPF:04).

Computation by NCES QuickStats on 6/19/2009
2. Percentage distribution of full-time instructional faculty and staff, by race/ethnicity and institution type: Fall 2003

Race/ethnicity | White, non-Hispanic (%) | Black, non-Hispanic (%) | Asian/Pacific Islander (%) | Hispanic (%) | Other (%)
Total 80.3 5.9 8.6 3.4 1.2
Institution: type and control
Public doctoral 79.4 4.5 12.0 3.0 1.0
Private not-for-profit doctoral 79.1 5.3 11.9 2.9 0.8
Public master’s 78.3 8.9 7.6 3.6 1.6
Private not-for-profit master’s 85.4 5.1 5.7 2.5 1.3
Private not-for-profit baccalaureate 85.8 6.8 4.2 2.2 1.1
Public associates 81.2 7.2 4.4 5.5 1.7
Other 86.9 4.6 5.8 1.7 1.0
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004 National Study of Postsecondary Faculty (NSOPF:04).

Computation by NCES QuickStats on 6/19/2009
3. Percentage distribution of full-time instructional faculty and staff, by tenure status and institution type: Fall 2003

Tenure status | Tenured (%) | On tenure track but not tenured (%) | Not on tenure track (%) | Not tenured, no tenure system (%) | Total
Total 49.3 21.3 20.9 8.5 100%
Institution: type and control
Public doctoral 53.0 20.4 25.9 0.7 100%
Private not-for-profit doctoral 47.1 19.6 28.8 4.5 100%
Public master’s 53.7 28.3 16.9 1.0 100%
Private not-for-profit master’s 41.9 28.1 21.5 8.6 100%
Private not-for-profit baccalaureate 42.9 25.1 21.6 10.4 100%
Public associates 49.1 15.6 9.3 26.0 100%
Other 39.4 17.3 18.7 24.6 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004 National Study of Postsecondary Faculty (NSOPF:04).

Computation by NCES QuickStats on 6/19/2009
4. Percentage distribution of part-time instructional faculty and staff, by academic rank and institution type: Fall 2003

Academic rank | Professor (%) | Associate professor (%) | Assistant professor (%) | Instructor or lecturer (%) | Other ranks/not applicable (%)
Total 4.6 2.9 3.4 42.2 46.9
Institution: type and control
Public doctoral 6.3 4.5 8.1 45.0 36.0
Private not-for-profit doctoral 5.6 4.9 9.1 31.9 48.5
Public master’s 6.4 2.3 2.0 40.7 48.7
Private not-for-profit master’s 2.7 3.4 2.6 30.3 60.9
Private not-for-profit baccalaureate 4.6 4.2 5.4 32.5 53.3
Public associates 3.4 1.5 1.0 49.5 44.6
Other 7.1 4.9 5.2 33.3 49.4
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004 National Study of Postsecondary Faculty (NSOPF:04).

Computation by NCES QuickStats on 6/19/2009
5. Average hours worked per week among full-time instructional faculty and staff, by tenure status: Fall 2003

Hours worked per week (Mean>0)
Total 47.4
Tenure status
Tenured 53.3
On tenure track but not tenured 53.7
Not on tenure track 43.0
Not tenured-no tenure system 45.4
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004 National Study of Postsecondary Faculty (NSOPF:04).

Computation by NCES QuickStats on 6/19/2009
bkfakf3
1
Percentage of institutions with full- or part-time faculty represented by a union, by institution type: Fall 2003
Faculty represented by a union Not represented by a union (%) Represented by a union (%) Total
Estimates
Total 68.1 31.9 100%
Institution: type and control
Public doctoral 69.1 30.9 100%
Private not-for-profit doctoral 94.4 5.6 100%
Public master’s 58.1 41.9 100%
Private not-for-profit master’s 87.6 12.4 100%
Private not-for-profit baccalaureate 86.7 13.3 100%
Public associate’s 42.4 57.6 100%
Other 78.3 21.7 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004 National Study of Postsecondary Faculty (NSOPF:04).

Computation by NCES QuickStats on 6/19/2009
ckeak16
2
Among institutions with a tenure system, average percentage of undergraduate student credit hours assigned to full-time faculty and instructional staff, by institution type: Fall 2003
  Undergraduate instruction: percent full-time faculty
(Mean[0])
Estimates
Total 70.8
Institution: type and control
Public doctoral 68.6
Private not-for-profit doctoral 71.6
Public master’s 75.7
Private not-for-profit master’s 68.8
Private not-for-profit baccalaureate 76.1
Public associate’s 58.7
Other 82.8
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004 National Study of Postsecondary Faculty (NSOPF:04).

Computation by NCES QuickStats on 6/19/2009
ckeaka0
3
Percentage of institutions that have downsized tenured faculty, by institution type: Fall 2003
Downsized tenured faculty No (%) Yes (%) Total
Estimates
Total 85.7 14.3 100%
Institution: type and control
Public doctoral 83.4 16.6 100%
Private not-for-profit doctoral 93.9 6.1 100%
Public master’s 90.7 9.3 100%
Private not-for-profit master’s 99.6 0.4 100%
Private not-for-profit baccalaureate 88.1 11.9 100%
Public associate’s 87.7 12.3 100%
Other 68.0 32.0 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004 National Study of Postsecondary Faculty (NSOPF:04).

Computation by NCES QuickStats on 6/19/2009
ckeak5c
4
Percentage distribution of the maximum number of years full-time faculty and instructional staff can be on a tenure track without receiving tenure, by institution type: Fall 2003
Maximum years on tenure track No maximum (%) Less than 5 years (%) 5 years (%) 6 years (%) 7 years (%) More than 7 years (%) Total
Estimates
Total 17.5 17.4 8.5 27.0 26.0 3.6 100%
Institution: type and control
Public doctoral 7.5 0.0 1.1 37.3 45.9 8.2 100%
Private not-for-profit doctoral 11.4 0.0 2.8 32.0 34.4 19.4 100%
Public master’s 1.5 0.0 22.0 37.1 38.9 0.6 100%
Private not-for-profit master’s 16.8 0.0 7.1 40.5 27.4 8.2 100%
Private not-for-profit baccalaureate 9.9 0.7 0.0 53.5 32.2 3.7 100%
Public associate’s 15.6 44.6 16.9 8.2 13.7 1.1 100%
Other 41.9 27.1 1.9 10.3 18.5 0.2 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004 National Study of Postsecondary Faculty (NSOPF:04).

Computation by NCES QuickStats on 6/19/2009
ckeak13
5
Percentage of institutions in which over half of student instruction hours are assigned to part-time faculty, by institution type: Fall 2003
  Undergraduate instruction: percent part-time faculty
(%>50)
Estimates
Total 17.9
Institution: type and control
Public doctoral 0.6
Private not-for-profit doctoral 9.9
Public master’s 1.6
Private not-for-profit master’s 15.6
Private not-for-profit baccalaureate 11.1
Public associate’s 23.9
Other 26.0
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2004 National Study of Postsecondary Faculty (NSOPF:04).

Computation by NCES QuickStats on 6/19/2009
ckeakf7
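In the table above, the statistic label "(%>50)" denotes the share of institutions whose value on the listed variable exceeds 50, matching the title's "over half." A short sketch of that computation on hypothetical per-institution values (unweighted here; the published estimates are survey-weighted):

    # Hypothetical percent-part-time values for five institutions -- illustrative only.
    pct_part_time = [12.0, 55.5, 48.9, 61.0, 30.2]

    # "(%>50)": percentage of cases whose value exceeds 50.
    share = 100 * sum(v > 50 for v in pct_part_time) / len(pct_part_time)
    print(f"{share:.1f}")   # 40.0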
1
Percentage distribution of 1992–93 bachelor's degree recipients' time-to-degree in years, by major field of study: 2003
Time to bachelor’s degree Within 4 years (%) 4–5 years (%) 5–6 years (%) 6–10 years (%) More than 10 years (%) Total
Estimates
Total 35.5 27.4 11.4 11.7 14.0 100%
Undergraduate major
Business and management 32.6 26.9 8.7 13.3 18.6 100%
Education 32.9 30.4 10.7 11.0 15.0 100%
Engineering 25.3 37.4 15.9 11.4 10.0 100%
Health professions 22.0 27.3 13.5 14.2 23.1 100%
Public affairs/social services 28.3 29.7 11.9 13.2 17.0 100%
Biological sciences 53.5 21.7 10.9 8.4 5.5 100%
Mathematics & science 38.9 24.9 11.7 11.2 13.3 100%
Social science 47.5 25.3 11.4 10.2 5.6 100%
History 40.1 26.3 20.0 5.3 8.3 100%
Humanities 39.8 21.4 12.8 12.1 13.8 100%
Psychology 39.8 26.1 7.3 12.0 14.8 100%
Other 35.4 28.7 12.4 11.3 12.2 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 1993/03 Baccalaureate and Beyond Longitudinal Study (B&B:93/03).

Computation by QuickStats on 6/24/2009
cgeak2a
1
Percentage distribution of 1992–93 bachelor's degree recipients' time-to-degree in years, by major field of study: 2003
Time to bachelor’s degree Within 4 years (%) 4–5 years (%) 5–6 years (%) 6–10 years (%) More than 10 years (%) Total
Estimates
Total 46.3 24.3 9.0 8.0 12.4 100%
Undergraduate major
Business and management 43.1 22.1 9.3 8.0 17.6 100%
Education 38.3 29.9 8.7 9.6 13.5 100%
Engineering 38.7 35.9 12.3 7.2 5.9 100%
Health professions 29.2 29.3 10.2 11.6 19.7 100%
Public affairs/social services 32.8 26.2 8.6 11.3 21.2 100%
Biological sciences 62.0 21.2 8.7 4.4 3.8 100%
Mathematics & science 48.3 20.2 10.8 10.4 10.3 100%
Social science 63.2 18.8 7.6 4.2 6.2 100%
History 53.8 28.2 9.7 3.3 5.0 100%
Humanities 51.1 18.4 6.5 7.0 17.0 100%
Psychology 39.6 29.3 4.4 13.2 13.4 100%
Other 47.1 21.5 11.2 7.5 12.7 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 1993/03 Baccalaureate and Beyond Longitudinal Study (B&B:93/03).

Computation by QuickStats on 6/16/2009
ckeakfe
2
Percentage distribution of 1992–93 bachelor's degree recipients' highest graduate degree attainment, by age at which student received bachelor's degree: 2003
Highest degree completed as of 2003 Bachelor’s degree (%) Master’s degree (%) First-professional degree (%) Doctoral degree (%) Total
Estimates
Total 73.5 20.4 4.1 2.0 100%
Age when received bachelor's degree
22 or younger 65.3 24.9 6.8 3.1 100%
23–24 80.7 15.5 2.4 1.4 100%
25–29 84.2 14.2 0.8 0.8 100%
30 or older 78.2 19.8 1.3 0.7 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 1993/03 Baccalaureate and Beyond Longitudinal Study (B&B:93/03).

Computation by QuickStats on 6/16/2009
ckeaked
2
Percentage distribution of 1992–93 bachelor's degree recipients' highest graduate degree attainment, by age at which student received bachelor's degree: 2003
Highest degree completed as of 2003 Bachelor’s degree (%) Master’s degree (%) First-professional degree (%) Doctoral degree (%) Total
Estimates
Total 73.8 20.2 4.0 2.0 100%
Age when received bachelor's degree
22 or younger 65.5 24.6 6.7 3.1 100%
23–24 80.9 15.4 2.3 1.3 100%
25–29 85.0 13.7 0.6 0.7 100%
30 or older 78.5 19.4 1.3 0.8 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 1993/03 Baccalaureate and Beyond Longitudinal Study (B&B:93/03).

Computation by QuickStats on 6/16/2009
ckeak4e
3
Average annual salary among 1992–93 bachelor's degree recipients, by highest degree attained: 2003
  Job 2003: annual salary
(Mean[0])
Estimates
Total 55,407.6
Highest degree attained by 2003
Bachelor’s degree 53,547.5
Master’s degree 56,241.6
First-professional degree 83,798.6
Doctoral degree 63,214.4
SOURCE: U.S. Department of Education, National Center for Education Statistics, 1993/03 Baccalaureate and Beyond Longitudinal Study (B&B:93/03).

Computation by QuickStats on 6/16/2009
ckeak84
4
Percentage of 1992–93 bachelor's degree recipients who were still paying undergraduate education loans, by occupation: 2003
  Total loans: amount owed as of 2003
(%>0)
Estimates
Total 37.8
Job 2003: occupation
Educators 38.2
Business and management 30.9
Engineering/architecture 26.2
Computer science 28.7
Medical professionals 51.9
Editors/writers/performers 27.9
Human/protective service/legal professions 51.0
Research, scientists, technical 30.9
Administrative/clerical/legal support 55.0
Mechanics, laborers ‡
Service industries 28.5
Other, military 17.7
‡ Reporting standards not met.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 1993/03 Baccalaureate and Beyond Longitudinal Study (B&B:93/03).

Computation by QuickStats on 6/16/2009
bbfakd2
4
Percentage of 1992–93 bachelor's degree recipients who were still paying undergraduate education loans, by occupation: 2003
  Undergraduate loans: total owed as of 2003
(%>1)
Estimates
Total 16.6
Job 2003: occupation
Educators 22.0
Business and management 12.7
Engineering/architecture 12.6
Computer science 10.8
Medical professionals 20.7
Editors/writers/performers 15.1
Human/protective service/legal professions 24.0
Research, scientists, technical 14.1
Administrative/clerical/legal support 24.7
Mechanics, laborers 18.1
Service industries 13.0
Other, military 13.6
SOURCE: U.S. Department of Education, National Center for Education Statistics, 1993/03 Baccalaureate and Beyond Longitudinal Study (B&B:93/03).

Computation by QuickStats on 6/16/2009
bffak6c
5
Percentage distribution of 1992–93 bachelor's degree recipients' teaching status, by highest degree attained: 2003
Teaching status in 2003 Currently teaching (%) Left teaching (%) Never taught (%) Total
Estimates
Total 10.6 9.3 80.2 100%
Highest degree completed as of 2003
Bachelor's degree 8.2 8.2 83.6 100%
Master's degree 20.1 13.3 66.6 100%
First-professional degree 0.6 4.9 94.5 100%
Doctoral degree 1.0 9.8 89.2 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 1993/03 Baccalaureate and Beyond Longitudinal Study (B&B:93/03).

Computation by QuickStats on 6/16/2009
ckeake6
1
Disability as reported by teacher (parent if teacher data missing), by child's race
Disability as reported by teacher (parent if teacher data missing), Wave 1 Autism (%) Learning Disability (%) Mental Retardation (%) Speech or Language Impairment (%) Other impairment (%)
Estimates
Total 7.2 2.4 4.3 47.1 39.0
Child's race
Hispanic 10.3 3.5 7.1 41.7 37.5
Black or African American/Non-Hispanic 9.7 4.7 6.6 35.1 43.9
White/Non-Hispanic 5.7 1.6 2.9 51.4 38.4
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: Pre-Elementary Education Longitudinal Study (PEELS), Waves 1-5

Computation by NCES QuickStats on 8/23/2011
cchbbf1
2
Child's main education setting, by household income
Child's main education setting, Wave 1 Regular Education Classroom (%) Special Education Setting (%) Home (%) Other (specify) (%) Total
Estimates
Total 74.4 21.2 2.7 1.7 100%
Household income, Wave 1
$20,000 or less 72.2 18.3 8.2 1.3 100%
$20,001–$40,000 68.6 27.0 0.0 4.4 100%
> $40,000 80.3 19.0 0.6 0.0 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: Pre-Elementary Education Longitudinal Study (PEELS), Waves 1-5

Computation by NCES QuickStats on 8/23/2011
cchbbf2
3
Overall academic skills (kindergarten), by district poverty/wealth category
Overall academic skills (kindergarten), Wave 1 Far Below Average (%) Below Average (%) Average (%) Above Average (%) Far Above Average (%) Total
Estimates
Total 17.9 32.7 35.8 12.7 0.9 100%
District poverty/wealth category
High Wealth 22.1 30.1 41.6 6.2 0.0 100%
Medium Wealth 9.0 27.7 53.1 8.6 1.6 100%
Low Wealth 22.8 32.7 23.8 20.7 0.0 100%
Very Low Wealth 18.9 41.2 24.3 13.5 2.1 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: Pre-Elementary Education Longitudinal Study (PEELS), Waves 1-5

Computation by NCES QuickStats on 8/23/2011
cchbbf3
4
First professional license/certificate, by years teacher has worked with children with disabilities
First professional license/certificate, Wave 1 Child Development (%) Early Childhood Education (%) Early Childhood Special Education (%) Special Education (%) Other (%)
Estimates
Total 8.8 20.8 18.0 22.8 29.6
Years teacher working with children with disabilities, Wave 1
Less than 5 years 9.4 27.9 17.4 14.3 31.0
6–10 years 10.9 22.4 12.1 22.2 32.4
11–15 years 11.1 28.0 19.8 18.0 23.2
15 years or more 5.5 10.4 22.4 32.3 29.4
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: Pre-Elementary Education Longitudinal Study (PEELS), Waves 1-5

Computation by NCES QuickStats on 8/23/2011
cchbbf4
5
Description of child's school, by total hours/week child attends school
Description of child's school (kindergarten or higher), Wave 4 Regular School - Serves All Students (%) School Serves Only Disabled Students (%) Magnet School (%) Other (%)
Estimates
Total 94.0 2.9 1.1 2.0
Total hours/week child attends school (kindergarten), Wave 4
15 hours or less 96.1 3.3 0.0 0.6
16 to 25 hours 88.6 3.6 3.9 3.8
26 to 30 hours 83.0 10.5 3.3 3.2
More than 30 hours 96.9 0.0 1.9 1.2
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: Pre-Elementary Education Longitudinal Study (PEELS), Waves 1-5

Computation by NCES QuickStats on 8/23/2011
cchbbf5
1
Percentage distribution of beginning postsecondary students who took distance education courses, by student/employee role: 2003–04
Distance education courses in 2003-04 Yes (%) No (%) Total
Estimates
Total 9.3 90.7 100%
Job 2003-04: primarily student or employee
A student working to meet expenses 9.7 90.3 100%
An employee enrolled in school 11.4 88.6 100%
No job 7.6 92.7 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 Beginning Postsecondary Students Longitudinal Study, First Follow-up (BPS:04/06).

Computation by NCES QuickStats on 6/17/2009
cehak17
2
Percentage distribution of 2003–04 beginning postsecondary students' persistence at any institution through 2006, by gender
Persistence at any institution through 2006 Attained, still enrolled (%) Attained, not enrolled (%) No degree, still enrolled (%) No degree, not enrolled (%) Total
Estimates
Total 7.0 8.9 50.7 33.5 100%
Gender
Male 6.5 7.5 50.4 35.6 100%
Female 7.3 9.9 50.9 31.9 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 Beginning Postsecondary Students Longitudinal Study, First Follow-up (BPS:04/06).

Computation by NCES QuickStats on 6/22/2009
cgeak59
3
Percentage of 2003–04 beginning postsecondary students who received financial aid, by undergraduate degree attainment and enrollment status through 2006
  Aid: total student aid all sources in 2003-04
(%>0)
Estimates
Total 70.6
Persistence at any institution through 2006
Attained a degree or certificate 80.1
No degree, still enrolled 70.6
No degree, not enrolled 66.1
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 Beginning Postsecondary Students Longitudinal Study, First Follow-up (BPS:04/06).

Computation by NCES QuickStats on 6/17/2009
cgeak94
4
Percentage distribution of 2003–04 beginning postsecondary students' degree attainment and enrollment status through 2006, by grade point average (GPA)
Persistence at any institution through 2006 Attained, still enrolled (%) Attained, not enrolled (%) No degree, still enrolled (%) No degree, not enrolled (%) Total
Estimates
Total 7.0 8.9 50.7 33.5 100%
Cumulative Grade Point Average (GPA) as of 2003-04
Below 2.0 3.9 4.2 39.4 52.5 100%
2.1 to 2.50 5.1 5.4 50.7 38.8 100%
2.51 to 2.99 6.4 6.7 59.9 27.0 100%
3.0 and above 8.2 11.4 50.8 29.5 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 Beginning Postsecondary Students Longitudinal Study, First Follow-up (BPS:04/06).

Computation by NCES QuickStats on 6/17/2009
cgeak4f
5
Percentage distribution of 2003–04 beginning postsecondary students’ degree attainment and enrollment status through 2006, by highest degree expectations
Persistence anywhere through 2006 Attained, still enrolled (%) Attained, not enrolled (%) No degree, still enrolled (%) No degree, not enrolled (%) Total
Estimates
Total 7.0 8.9 50.7 33.5 100%
Highest degree expected, 2003-04
No degree or certificate 3.8 17.1 16.3 62.8 100%
Certificate 6.9 41.5 10.3 41.3 100%
Associate’s degree 8.7 17.3 25.3 48.8 100%
Bachelor’s degree 6.9 7.9 45.2 40.0 100%
Post-BA or post-master's certificate 5.1 13.4 42.9 38.6 100%
Master’s degree 7.1 4.8 60.6 27.4 100%
Doctoral degree 7.1 4.2 67.9 20.8 100%
First-professional degree 3.5 6.7 67.6 22.2 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2003–04 Beginning Postsecondary Students Longitudinal Study, First Follow-up (BPS:04/06).

Computation by NCES QuickStats on 6/22/2009
cgeak02
1
Percentage distribution of undergraduates' attendance intensity, by institution type: 2007–08
Attendance intensity Exclusively full-time (%) Exclusively part-time (%) Mixed full-time and part-time (%) Total
Estimates
Total 47.7 35.4 16.9 100%
Institution: type
Public less-than-2-year 64.5 31.5 4.0 100%
Public 2-year 26.3 58.8 14.9 100%
Public 4-year nondoctorate 54.5 27.9 17.6 100%
Public 4-year doctorate 65.0 15.4 19.6 100%
Private not-for-profit less-than-4-year 55.2 28.9 15.8 100%
Private not-for-profit 4-year nondoctorate 69.0 18.4 12.5 100%
Private not-for-profit 4-year doctorate 74.7 13.9 11.5 100%
Private for-profit less-than-2-year 75.0 15.8 9.1 100%
Private for-profit 2 years or more 67.0 18.7 14.4 100%
Attended more than one institution 40.8 26.2 33.0 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2007–08 National Postsecondary Student Aid Study (NPSAS:08)

Computation by NCES QuickStats on 6/22/2009
cgeakc9
2
Percentage of undergraduates who received Pell Grants, by income and dependency status: 2007–08
  Grants: Pell Grants
(%>0)
Estimates
Total 27.3
Income: categories by dependency status
Dependent: Less than $10,000 63.2
Dependent: $10,000-$19,999 72.7
Dependent: $20,000-$29,999 64.9
Dependent: $30,000-$39,999 53.5
Dependent: $40,000-$49,999 32.0
Dependent: $50,000-$59,999 15.4
Dependent: $60,000-$69,999 2.3
Dependent: $70,000-$79,999 0.0
Dependent: $80,000-$99,999 0.0
Dependent: $100,000 or more 0.0
Independent: Less than $5,000 53.3
Independent: $5,000-$9,999 65.5
Independent: $10,000-$19,999 52.3
Independent: $20,000-$29,999 34.8
Independent: $30,000-$49,999 28.2
Independent: $50,000 or more 0.2
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2007–08 National Postsecondary Student Aid Study (NPSAS:08)

Computation by NCES QuickStats on 6/26/2009
cgfak1b
3
Average net price of attendance after all financial aid for full-time undergraduate students, by type of institution: 2007–08
  Net price after all aid
(Avg>0)
Estimates
Total 11,658.9
Type of institution
Public less-than-2-year 9,667.4
Public 2-year 7,560.8
Public 4-year nondoctorate 8,922.5
Public 4-year doctorate 11,625.2
Private not-for-profit less-than-4-year 10,782.5
Private not-for-profit 4-year nondoctorate 14,462.2
Private not-for-profit 4-year doctorate 20,047.5
Private for-profit less-than-2-year 10,298.3
Private for-profit 2 years or more 14,406.9
Attended more than one institution ‡
‡ Reporting standards not met.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2007–08 National Postsecondary Student Aid Study (NPSAS:08)

Computation by NCES QuickStats on 6/22/2009
cgeakf7
4
Percentage distribution of dependent undergraduates’ parents’ income, by type of institution: 2007–08
Parents’ income Less than $36,000 (%) $36,000–$66,999 (%) $67,000–$104,999 (%) $105,000 or more (%) Total
Estimates
Total 24.8 25.5 25.0 24.7 100%
Institution: sector
Public 4-year 20.6 22.7 27.4 29.2 100%
Private not-for-profit 4-year 17.5 20.9 25.3 36.4 100%
Public 2-year 30.6 31.4 23.2 14.8 100%
Private for-profit 50.1 25.1 15.9 8.9 100%
NOTE: Rows may not add up to 100% due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, 2007–08 National Postsecondary Student Aid Study (NPSAS:08)

Computation by NCES QuickStats on 6/22/2009
cgeak3a
5
Mean estimated student need for undergraduate students, by type of degree program: 2007–08
  Aid: estimated student need
(Mean[0])
Estimates
Total 7,978.0
College study: degree program
Certificate 8,696.4
Associate’s degree 5,248.0
Bachelor’s degree 10,890.9
Not in a degree program or others 2,909.0
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2007–08 National Postsecondary Student Aid Study (NPSAS:08)

Computation by NCES QuickStats on 6/22/2009
cgeaka8
1
Percentage of graduate students who borrowed, by type of graduate program: 2007–08
  Graduate loan debt (cumulative)
(%>0)
Estimates
Total 53.2
Graduate degree: type
Master's degree 52.8
Doctoral degree 46.5
First-professional degree 82.1
Post-BA or post-master's certificate 51.6
Not in a degree program 34.8
SOURCE: U.S. Department of Education, National Center for Education Statistics, 2007–08 National Postsecondary Student Aid Study (NPSAS:08)

Computation by NCES QuickStats on 6/19/2009
ckeake3
2
Percentage of graduate students with assistantships, by attendance intensity: 2007–08
  Assistantships
(%>0)