
Guide to Sources

National Center for Education Statistics (NCES)

Common Core of Data

The Common Core of Data (CCD) is NCES's primary database on public elementary and secondary education in the United States. It is a comprehensive, annual, national statistical database of all public elementary and secondary schools and school districts, containing data designed to be comparable across all states. This database can be used to select samples for other NCES surveys and to provide basic information and descriptive statistics on public elementary and secondary schools and schooling in general.

The CCD collects statistical information annually from approximately 100,000 public elementary and secondary schools and approximately 18,000 public school districts (including supervisory unions and regional education service agencies) in the 50 states, the District of Columbia, Department of Defense (DoD) dependents schools, the Bureau of Indian Education, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the U.S. Virgin Islands. Three categories of information are collected in the CCD survey: general descriptive information on schools and school districts; data on students and staff; and fiscal data. The general descriptive information includes name, address, phone number, and type of locale; the data on students and staff include selected demographic characteristics; and the fiscal data pertain to revenues and current expenditures.

The EDFacts data collection system is the primary collection tool for the CCD. NCES works collaboratively with the Department of Education's Performance Information Management Service to develop the CCD collection procedures and data definitions. Coordinators from State Education Agencies (SEAs) submit the CCD data at different levels (school, agency, and state) to the EDFacts collection system. Prior to submitting CCD files to EDFacts, SEAs must collect and compile information from their respective Local Education Agencies (LEAs) through established administrative records systems within their state or jurisdiction.

Once SEAs have completed their submissions, the CCD survey staff analyzes and verifies the data for quality assurance. Even though the CCD is a universe collection and thus not subject to sampling errors, nonsampling errors can occur. The two potential sources of nonsampling errors are nonresponse and inaccurate reporting. NCES attempts to minimize nonsampling errors through the use of annual training of SEA coordinators, extensive quality reviews, and survey editing procedures. In addition, each year, SEAs are given the opportunity to revise their state-level aggregates from the previous survey cycle.

The CCD survey consists of six components: the Public Elementary/Secondary School Universe Survey, the Local Education Agency (School District) Universe Survey, the State Nonfiscal Survey of Public Elementary/Secondary Education, the National Public Education Financial Survey (NPEFS), the School District Finance Survey (F-33), and the Teacher Compensation Survey.

Public Elementary/Secondary School Universe Survey

The Public Elementary/Secondary School Universe Survey includes all public schools providing education services to prekindergarten, kindergarten, grades 1–12, and ungraded students. The CCD Public Elementary/Secondary School Universe Survey includes records for each public elementary and secondary school in the 50 states, the District of Columbia, Puerto Rico, American Samoa, Guam, the Commonwealth of the Northern Mariana Islands, the U.S. Virgin Islands, the Bureau of Indian Education, and the DoD dependents schools (overseas and domestic).

The Public Elementary/Secondary School Universe Survey includes data for the following variables: NCES school ID number, state school ID number, name of the school, name of the agency that operates the school, mailing address, physical location address, phone number, school type, operational status, locale code, latitude, longitude, county number, county name, full-time-equivalent (FTE) classroom teacher count, low/high grade span offered, congressional district code, school level, students eligible for free lunch, students eligible for reduced-price lunch, total students eligible for free and reduced-price lunch, and student totals and detail (by grade, by race/ethnicity, and by sex). The survey also contains flags indicating whether a school is Title I eligible, schoolwide Title I eligible, a magnet school, a charter school, a shared-time school, or a BIE school, as well as which grades are offered at the school.
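
Concretely, each row of the file is a flat school-level record mixing identifiers, counts, and status flags. As a rough illustration, one such record might be represented as below; the field names are illustrative, not the official CCD variable names.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SchoolUniverseRecord:
    """One school's row in the Public Elementary/Secondary School Universe
    Survey. Field names are illustrative, not official CCD variable names."""
    nces_school_id: str               # NCES-assigned school ID
    state_school_id: str              # state-assigned school ID
    school_name: str
    agency_name: str                  # agency (district) operating the school
    locale_code: str
    latitude: float
    longitude: float
    fte_teachers: Optional[float]     # full-time-equivalent classroom teachers
    low_grade: str                    # low end of grade span offered
    high_grade: str                   # high end of grade span offered
    free_lunch_eligible: Optional[int]
    reduced_price_lunch_eligible: Optional[int]
    title_i_eligible: bool            # Title I eligibility flag
    magnet: bool                      # magnet school flag
    charter: bool                     # charter school flag
```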

Local Education Agency (School District) Universe

The coverage of the Local Education Agency Universe Survey includes all school districts and administrative units providing education services to prekindergarten, kindergarten, grades 1–12, and ungraded students. The CCD Local Education Agency Universe Survey includes records for the 50 states, the District of Columbia, Puerto Rico, the Bureau of Indian Education, American Samoa, Guam, the Commonwealth of the Northern Mariana Islands, the U.S. Virgin Islands, and the DoD dependents schools (overseas and domestic).

The Local Education Agency Universe Survey includes the following variables: NCES agency ID number, state agency ID number, agency name, phone number, mailing address, physical location address, agency type code, supervisory union number, American National Standards Institute (ANSI) state and county code, county name, core based statistical area (CBSA) code, metropolitan/micropolitan code, metropolitan status code, district locale code, congressional district code, operational status code, BIE agency status, low/high grade span offered, agency charter status, number of schools, number of full-time-equivalent (FTE) teachers, number of ungraded students, number of PK–12 students, number of special education/Individualized Education Program (IEP) students, number of English language learner (ELL) students, instructional staff fields, support staff fields, and a flag indicating whether student counts by race/ethnicity were reported by five or seven racial/ethnic categories.

State Nonfiscal Survey of Public Elementary/Secondary Education

The State Nonfiscal Survey of Public Elementary/Secondary Education for the 2011–12 school year provides state-level, aggregate information about students and staff in public elementary and secondary education. It includes data from the 50 states, the District of Columbia, Puerto Rico, the U.S. Virgin Islands, the Commonwealth of the Northern Mariana Islands, and Guam. The DoD dependents schools (overseas and domestic), the Bureau of Indian Education, and American Samoa did not report data for the 2011–12 school year. This survey covers public school student membership by grade, race/ethnicity, and state or jurisdiction, as well as the number of staff in public schools by category and state or jurisdiction. Beginning with the 2006–07 school year, the numbers of diploma recipients and other high school completers are no longer included in the State Nonfiscal Survey of Public Elementary/Secondary Education file. These data are now published in the public-use Common Core of Data State Dropout and Completion Data File.

National Public Education Financial Survey

The purpose of the National Public Education Financial Survey (NPEFS) is to provide district, state, and federal policymakers, researchers, and other interested users with descriptive information about revenues and expenditures for public elementary and secondary education. The data collected are useful to (1) chief officers of state education agencies; (2) policymakers in the executive and legislative branches of federal and state governments; (3) education policy and public policy researchers; and (4) the public, journalists, and others.

Data for NPEFS are collected from SEAs in the 50 states, the District of Columbia, Puerto Rico, and four other jurisdictions (American Samoa, Guam, the Commonwealth of the Northern Mariana Islands, and the U.S. Virgin Islands). The data file is organized by state or jurisdiction and contains revenue data by funding source; expenditure data by function (the activity being supported by the expenditure) and object (the category of expenditure); average daily attendance data; and total student membership data from the CCD State Nonfiscal Survey of Public Elementary/Secondary Education.

School District Finance Survey

The purpose of the School District Finance Survey (F-33) is to provide finance data for all local education agencies (LEAs) that provide free public elementary and secondary education in the United States. National and state totals are not included (national- and state-level figures are presented, however, in the National Public Education Financial Survey [NPEFS]).

Both NCES and the Governments Division of the U.S. Census Bureau collect public school system finance data, and they collaborate in their efforts to gather these data. The Census Bureau acts as the primary collection agent and produces two data files: one for distribution and reporting by the Census Bureau and the other for distribution and reporting by NCES.

The FY 11 F-33 data file contains 18,297 records representing the public elementary and secondary education agencies in the 50 states and the District of Columbia. The file includes variables for revenues by source, expenditures by function and object, indebtedness, assets, student membership counts, as well as identification variables.

Teacher Compensation Survey

The Teacher Compensation Survey (TCS) is a research and development effort that explores the feasibility of using administrative records data on individual teachers maintained by SEAs. The TCS collects total compensation, teacher status, and demographic data about individual teachers from participating states.

Further information on the nonfiscal CCD data may be obtained from

Patrick Keaton
Administrative Data Division
Elementary and Secondary Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
patrick.keaton@ed.gov
http://nces.ed.gov/ccd

Further information on the fiscal CCD data may be obtained from

Stephen Cornman
Administrative Data Division
Elementary and Secondary Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
stephen.cornman@ed.gov
http://nces.ed.gov/ccd

Early Childhood Longitudinal Study, Kindergarten Class of 2010–11 (ECLS-K:2011)

The Early Childhood Longitudinal Study, Kindergarten Class of 2010–11 (ECLS-K:2011) is sponsored by the National Center for Education Statistics (NCES) in the Institute of Education Sciences of the U.S. Department of Education to provide detailed information on the school achievement and experiences of students throughout their elementary school years. The students participating in ECLS-K:2011 are being followed longitudinally from the kindergarten year (the 2010–11 school year) through the spring of 2016, when most of them are expected to be in fifth grade. This sample of students is designed to be nationally representative of all students who were enrolled in kindergarten or who were of kindergarten age and being educated in an ungraded classroom or school in the United States in the 2010–11 school year, including those in public and private schools, those who attended full-day and part-day programs, those who were in kindergarten for the first time, and those who were kindergarten repeaters. Students who attended early learning centers or institutions that offered education only through kindergarten are included in the study sample and represented in the cohort.

The ECLS-K:2011 places emphasis on measuring students' experiences within multiple contexts and development in multiple domains. The design of the study includes the collection of information from the students, their parents/guardians, their teachers, their schools, and their before- and after-school care providers.

A nationally representative sample of approximately 18,200 children enrolled in 970 schools during the 2010–11 school year participated in the base year of ECLS-K:2011. The sample includes children from different racial/ethnic and socioeconomic backgrounds. Asian/Pacific Islander students were oversampled to ensure that the sample included enough students of this race/ethnicity to make accurate estimates for these students as a group. Two data collections were conducted in the 2010–11 school year, one in the fall and one in the spring. A total of approximately 780 of the 1,320 originally sampled schools participated during the base year of the study. This translates into a weighted unit response rate (weighted by the base weight) of 63 percent for the base year.
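
Because the rate is weighted by the base weight, it is not the raw count ratio (780/1,320, or about 59 percent). Under the usual formulation of a weighted unit response rate (stated here as a general formula, not the study's exact estimator), each school contributes its base weight \(w_i\), the inverse of its selection probability:

```latex
\mathit{RR}_{\text{weighted}} =
  \frac{\sum_{i \,\in\, \text{participating schools}} w_i}
       {\sum_{i \,\in\, \text{eligible sampled schools}} w_i}
```

A weighted rate of 63 percent alongside an unweighted rate of about 59 percent indicates that schools carrying larger base weights participated at somewhat higher rates.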

Further information on the ECLS-K:2011 may be obtained from

Gail Mulligan
Sample Surveys Division
Longitudinal Surveys Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
ecls@ed.gov
http://nces.ed.gov/ecls/birth.asp

Integrated Postsecondary Education Data System

The Integrated Postsecondary Education Data System (IPEDS) surveys approximately 7,500 postsecondary institutions, including universities and colleges, as well as institutions offering technical and vocational education beyond the high school level. IPEDS, an annual universe collection that began in 1986, replaced the Higher Education General Information Survey (HEGIS). To present data in a timely manner, "provisional" IPEDS data are used in the tables shown. These data have been fully reviewed, edited, and imputed but do not incorporate data revisions submitted by institutions after the close of data collection.

IPEDS consists of interrelated survey components that provide information on postsecondary institutions, student enrollment, programs offered, degrees and certificates conferred, and both the human and financial resources involved in the provision of institutionally based postsecondary education. Prior to 2000, the IPEDS survey had the following subject-matter components: Graduation Rates; Fall Enrollment; Institutional Characteristics; Completions; Salaries, Tenure, and Fringe Benefits of Full-Time Faculty; Fall Staff; Finance; and Academic Libraries (in 2000, the Academic Libraries component became a survey separate from IPEDS).

Since 2000, IPEDS survey components occurring in a particular collection year have been organized into three seasonal collection periods: fall, winter, and spring. The Institutional Characteristics and Completions components first took place during the fall 2000 collection; the Employees by Assigned Position (EAP), Salaries, and Fall Staff components first took place during the winter 2001–02 collection; and the Enrollment, Student Financial Aid, Finance, and Graduation Rates components first took place during the spring 2001 collection.

In the winter 2005–06 data collection, the Employees by Assigned Position, Fall Staff, and Salaries components were merged into the Human Resources component. During the 2007–08 collection year, the Enrollment component was broken into two separate components: 12-Month Enrollment (taking place in the fall collection) and Fall Enrollment (taking place in the spring collection). In the 2011–12 IPEDS data collection year, the Student Financial Aid component was moved to the winter data collection to aid in the timing of the net price of attendance calculations displayed on the College Navigator (http://nces.ed.gov/collegenavigator). In the 2012–13 IPEDS data collection year, the Human Resources component was moved to the spring data collection.

Beginning in 2008–09, the first-professional degree category was combined with the post-master's certificate category. Some degrees formerly identified as first-professional that take more than two full-time-equivalent academic years to complete, such as those in Theology (M.Div, M.H.L./Rav), are included in the Master's degree category. Doctor's degrees were broken out into three distinct categories: research/scholarship, professional practice, and other doctor's degrees.

IPEDS race/ethnicity data collection also changed in 2008–09. The "Asian" race category is now separate from a "Native Hawaiian or Other Pacific Islander" category. Survey takers also have the option of identifying themselves as being of "Two or more races." To reflect the recognition that "Hispanic" refers to ethnicity, not race, the new Hispanic category reads "Hispanics of any race."

The degree-granting institutions portion of IPEDS is a census of colleges that award associate's or higher degrees and are eligible to participate in Title IV financial aid programs. Prior to 1993, data from technical and vocational institutions were collected through a sample survey. Beginning in 1993, data have been gathered in a census of all postsecondary institutions. The tabulations on "institutional characteristics" from 1993 forward are based on lists of all institutions and are not subject to sampling errors.

The classification of institutions offering college and university education changed as of 1996. Prior to 1996, institutions that had courses leading to an associate's or higher degree or that had courses accepted for credit toward those degrees were considered higher education institutions. Higher education institutions were accredited by an agency or association that was recognized by the U.S. Department of Education or were recognized directly by the Secretary of Education. Tables, or portions of tables, that use only this standard are noted as "higher education." The newer standard includes institutions that award associate's or higher degrees and that are eligible to participate in Title IV federal financial aid programs. Tables that contain any data according to this standard are titled "degree-granting" institutions. Time-series tables may contain data from both series, and they are noted accordingly. The impact of this change on data collected in 1996 was not large. For example, tables on faculty salaries and benefits were only affected to a very small extent. Also, degrees awarded at the bachelor's level or higher were not heavily affected. The largest impact was on private 2-year college enrollment. In contrast, most of the data on public 4-year colleges were affected to a minimal extent. The impact on enrollment in public 2-year colleges was noticeable in certain states, but was relatively small at the national level. Overall, total enrollment for all institutions was about one-half of a percent higher in 1996 for degree-granting institutions than for higher education institutions.

Prior to the establishment of IPEDS in 1986, HEGIS acquired and maintained statistical data on the characteristics and operations of institutions of higher education. Implemented in 1966, HEGIS was an annual universe survey of institutions accredited at the college level by an agency recognized by the Secretary of the U.S. Department of Education. These institutions were listed in NCES's Education Directory, Colleges and Universities.

HEGIS surveys collected information on institutional characteristics, faculty salaries, finances, enrollment, and degrees. Since these surveys, like IPEDS, were distributed to all higher education institutions, the data presented are not subject to sampling error. However, they are subject to nonsampling error, the sources of which varied with the survey instrument.

The NCES Taskforce for IPEDS Redesign recognized that there were issues related to the consistency of data definitions as well as the accuracy, reliability, and validity of other quality measures within and across surveys. The IPEDS redesign in 2000 provided institution-specific web-based data forms. While the new system shortened data processing time and provided better data consistency, it did not address the accuracy of the data provided by institutions.

Beginning in 2003–04 with the Prior Year Data Revision System, prior-year data have been available to institutions entering current data. This allows institutions to make changes to their prior-year entries either by adjusting the data or by providing missing data. These revisions allow the evaluation of the data's accuracy by looking at the changes made.

NCES conducted a study (NCES 2005-175) of the 2002–03 data that were revised in 2003–04 to determine the accuracy of the imputations, track the institutions that submitted revised data, and analyze the revised data they submitted. When institutions made changes to their data, it was assumed that the revised data were the "true" data. The data were analyzed for the number and type of institutions making changes, the type of changes, the magnitude of the changes, and the impact on published data.

Because NCES imputes missing data, imputation procedures were also addressed by the Redesign Taskforce. For the 2003–04 assessment, differences between revised values and values that were imputed in the original files were compared (i.e., revised value minus imputed value). These differences were then used to provide an assessment of the effectiveness of imputation procedures. The size of the differences also provides an indication of the accuracy of imputation procedures. To assess the overall impact of changes on aggregate IPEDS estimates, published tables for each component were reconstructed using the revised 2002–03 data. These reconstructed tables were then compared to the published tables to determine the magnitude of aggregate bias and the direction of this bias.
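
A minimal sketch of that assessment, assuming the originally imputed values and the later revisions are available as pairs (the function and its names are hypothetical, not NCES code):

```python
import statistics

def imputation_assessment(pairs):
    """Summarize revised-minus-imputed differences for items that were
    imputed in the original file and later revised by institutions.

    `pairs` is an iterable of (revised_value, imputed_value) tuples.
    The mean difference indicates the direction of aggregate bias; the
    mean absolute difference indicates the accuracy of the imputations.
    """
    diffs = [revised - imputed for revised, imputed in pairs]
    return {
        "n": len(diffs),
        "mean_diff": statistics.mean(diffs),
        "median_diff": statistics.median(diffs),
        "mean_abs_diff": statistics.mean(abs(d) for d in diffs),
    }

# Example with made-up values: revisions sit slightly above the imputations.
print(imputation_assessment([(105, 100), (98, 100), (120, 110)]))
```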

The fall 2011 and spring 2012 data collections were entirely web-based. Data were provided by "keyholders," institutional representatives appointed by campus chief executives, who were responsible for ensuring that survey data submitted by the institution were correct and complete. Because Title IV institutions are the primary focus of IPEDS and because these institutions are required to respond to the survey, response rates for Title IV institutions in the fall 2011 IPEDS collection were high. The Institutional Characteristics (IC) component response rate among all Title IV entities was 100.0 percent (all 7,479 Title IV entities responded). In addition, the response rates for the Completions and 12-Month Enrollment components were also 100.0 percent.

NCES statistical standards require that the potential for nonresponse bias for all institutions (including those in other U.S. jurisdictions) be analyzed for sectors for which the response rate is less than 85 percent. Because unit-level response rates were 100.0 percent for all three survey components, a nonresponse bias analysis was not necessary for the fall 2011 collection. However, data from four institutions that responded to the IC component contained item nonresponse. Price of attendance data collected during fall 2011 but covering prior academic years were imputed for these institutions.

Although IPEDS provides the most comprehensive data system for postsecondary education, there are 100 or more entities that collect their own information from postsecondary institutions. This raises the issue of how valid IPEDS data are when compared to education data collected by non-IPEDS sources. In the Data Quality Study, Thomson Peterson data were chosen to assess the validity of IPEDS data because Thomson Peterson is one of the largest and most comprehensive sources of postsecondary data available.

Not all IPEDS components could be compared to Thomson Peterson. Either Thomson Peterson did not collect data related to a particular IPEDS component, or the data items collected by Thomson Peterson were not comparable to the IPEDS items (i.e., the data items were defined differently). Comparisons were made for a selected number of data items in five areas—tuition and price, employees by assigned position, enrollment, student financial aid, and finance. More details on the accuracy and reliability of IPEDS data can be found in the Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175).

Further information on IPEDS may be obtained from

Richard Reeves
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
richard.reeves@ed.gov
http://nces.ed.gov/ipeds

Fall (12-Month Enrollment)

Data on 12-month enrollment are collected for award levels ranging from postsecondary certificates of less than 1 year to doctoral degrees. The 12-month period during which data are collected is selected by the institution and can be either July 1 through June 30 or September 1 through August 31. Data are collected by race/ethnicity and gender and include unduplicated headcounts and instructional activity (contact or credit hours). These data are also used to calculate a full-time-equivalent (FTE) enrollment based on instructional activity. FTE enrollment is useful for gauging the size of the educational enterprise at the institution. Prior to the 2007–08 IPEDS data collection, the data collected in the 12-Month Enrollment component were part of the Fall Enrollment component, which is conducted during the spring data collection period. However, to improve the timeliness of the data, a separate 12-Month Enrollment survey component was developed in 2007. These data are now collected in the fall for the previous academic year. Of the 7,407 Title IV entities expected to respond to the 12-Month Enrollment component of the fall 2012 data collection, 7,403 responded, for a response rate of approximately 100.0 percent.
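
As a rough sketch of how instructional activity converts to FTE enrollment, reported credit or contact hours are divided by the hours representing one full-time year. The divisors below (30 semester credit hours, 900 contact hours) are illustrative assumptions; IPEDS applies level- and calendar-system-specific factors.

```python
def fte_from_activity(credit_hours: float = 0.0,
                      contact_hours: float = 0.0,
                      credit_hours_per_fte: float = 30.0,
                      contact_hours_per_fte: float = 900.0) -> float:
    """Estimate full-time-equivalent (FTE) enrollment from instructional
    activity. Divisors are illustrative: roughly one full-time year of
    semester credit hours or clock/contact hours."""
    return (credit_hours / credit_hours_per_fte
            + contact_hours / contact_hours_per_fte)

# Example: 120,000 undergraduate credit hours -> about 4,000 FTE students.
print(fte_from_activity(credit_hours=120_000))
```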

Further information on the IPEDS 12-Month Enrollment component may be obtained from

IPEDS Staff
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
http://nces.ed.gov/ncestaff/SurvDetl.asp?surveyID=010

Fall (Completions)

This survey was part of the HEGIS series throughout its existence. However, the degree classification taxonomy was revised in 1970–71, 1982–83, 1991–92, and 2002–03. Collection of degree data has been maintained through IPEDS.

Degrees-conferred trend tables arranged by the 2002–03 classification are included to provide consistent data from 1970–71 through the most recent year. Data on associate's and other formal awards below the baccalaureate degree, by field of study, cannot be made comparable with figures from years prior to 1982–83. The nonresponse rate does not appear to be a significant source of nonsampling error for this survey. The response rate over the years has been high; for the fall 2012 Completions component, it was about 100.0 percent. Because of the high response rate, there was no need to conduct a nonresponse bias analysis. Imputation methods for the fall 2012 Completions component are discussed in Postsecondary Institutions and Cost of Attendance in 2012–13; Degrees and Other Awards Conferred, 2011–12; and 12-Month Enrollment, 2011–12 (NCES 2013-289rev).

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) indicated that most Title IV institutions supplying revised data on completions in 2003–04 were able to supply missing data for the prior year. The small differences between imputed data for the prior year and the revised actual data supplied by the institution indicated that the imputed values produced by NCES were acceptable.

Further information on the IPEDS Completions component may be obtained from

IPEDS Staff
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
http://nces.ed.gov/ncestaff/SurvDetl.asp?surveyID=010

Fall (Institutional Characteristics)

This survey collects the basic information necessary to classify institutions, including control, level, and types of programs offered, as well as information on tuition, fees, and room and board charges. Beginning in 2000, the survey collected institutional pricing data from institutions with first-time, full-time, degree/certificate-seeking undergraduate students. Unduplicated full-year enrollment counts and instructional activity are now collected in the Fall Enrollment survey. Beginning in 2008–09, the student financial aid data collected include greater detail. The overall unweighted response rate was 100.0 percent for Title IV degree-granting institutions for 2009 data.

In the fall 2012 data collection, the response rate for the 7,476 Title IV entities expected to respond to the Institutional Characteristics component was about 100.0 percent (7,474 Title IV entities responded). In addition, data from 10 institutions that responded to the Institutional Characteristics component contained item nonresponse, and these missing items were imputed. Imputation methods for the fall 2012 Institutional Characteristics component are discussed in the 2012–13 Integrated Postsecondary Education Data System (IPEDS) Methodology Report (NCES 2013-293).

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) looked at tuition and price in Title IV institutions. Only 8 percent of institutions in 2002–03 and 2003–04 reported the same data to IPEDS and Thomson Peterson consistently across all selected data items. Differences in the wording of survey items may account for some of these inconsistencies.

Further information on the IPEDS Institutional Characteristics component may be obtained from

Tara Lawley
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
tara.lawley@ed.gov
http://nces.ed.gov/ipeds

Winter (Student Financial Aid)

This component was part of the spring data collection from IPEDS data collection years 2000–01 to 2010–11, but it moved to the winter data collection starting with the 2011–12 IPEDS data collection year. This move will aid in the timing of the net price of attendance calculations displayed on College Navigator (http://nces.ed.gov/collegenavigator).

Financial aid data are collected for undergraduate students. Data are collected regarding federal grants, state and local government grants, institutional grants, and loans. The collected data include the number of students receiving each type of financial assistance and the average amount of aid received by type of aid. Beginning in 2008–09, the student financial aid data collected include greater detail on the types of aid offered.

In the winter 2012–13 data collection, the Student Financial Aid component presented data on the number of first-time, full-time degree- and certificate-seeking undergraduate financial aid recipients for the 2011–12 academic year. Of the 7,064 Title IV institutions expected to respond to the Student Financial Aid component, 7,058 Title IV institutions responded, resulting in a response rate of about 99.9 percent.

Further information on the IPEDS Student Financial Aid component may be obtained from

Tara Lawley
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
tara.lawley@ed.gov
http://nces.ed.gov/ipeds

Spring (Fall Enrollment)

This survey has been part of the HEGIS and IPEDS series since 1966. Response rates for this survey have been relatively high, generally exceeding 85 percent. Beginning in 2000, with web-based data collection, higher response rates were attained. In the spring 2013 data collection, in which the Fall Enrollment component covered fall 2012, the response rate was 99.9 percent. Data collection procedures for the Fall Enrollment component of the spring 2013 data collection are presented in Enrollment in Postsecondary Institutions, Fall 2012; Financial Statistics, Fiscal Year 2012; Graduation Rates, Selected Cohorts, 2004–09; and Employees in Postsecondary Institutions, Fall 2012 (NCES 2013-183).

Beginning with the fall 1986 survey and the introduction of IPEDS (see above), the survey was redesigned. The survey allows (in alternating years) for the collection of age and residence data. Beginning in 2000, the survey collected instructional activity and unduplicated headcount data, which are needed to compute a standardized, full-time-equivalent (FTE) enrollment statistic for the entire academic year.

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) showed that public institutions made the majority of changes to enrollment data during the 2004 revision period. The majority of changes were made to unduplicated headcount data, with the net differences between the original data and the revised data at about 1 percent. Part-time students in general and enrollment in private not-for-profit institutions were often underestimated. The fewest changes by institutions were to Classification of Instructional Programs (CIP) code data. (The CIP is a taxonomic coding scheme that contains titles and descriptions of primarily postsecondary instructional programs.) More institutions provided enrollment data to IPEDS than to Thomson Peterson. A fairly high percentage of institutions that provided data to both provided the same data, and among those that did not, the difference in magnitude was less than 10 percent.

Further information on the IPEDS Fall Enrollment component may be obtained from

IPEDS Staff
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
http://nces.ed.gov/ncestaff/SurvDetl.asp?surveyID=010

Spring (Finance)

This survey was part of the HEGIS series and has been continued under IPEDS. Substantial changes were made in the financial survey instruments in fiscal year (FY) 1976, FY 82, FY 87, FY 97, and FY 02. While these changes were significant, considerable effort has been made to present only comparable information on trends and to note inconsistencies.

The FY 76 survey instrument contained numerous revisions to earlier survey forms, which made direct comparisons of line items very difficult. Beginning in FY 82, Pell Grant data were collected in the categories of federal restricted grant and contract revenues and restricted scholarship and fellowship expenditures. Finance tables including data prior to 2000 have been adjusted by subtracting the largely duplicative Pell Grant amounts from the later data to maintain comparability with pre-FY 82 data.

The introduction of IPEDS in the FY 87 survey included several important changes to the survey instrument and data processing procedures. Beginning in FY 97, data for private institutions were collected using new financial concepts consistent with Financial Accounting Standards Board (FASB) reporting standards, which provide a more comprehensive view of college finance activities. The data for public institutions continued to be collected using the older survey form. The data for public and private institutions were no longer comparable and, as a result, no longer presented together in analysis tables. In FY 01, public institutions had the option of either continuing to report using Governmental Accounting Standards Board (GASB) standards or using the new FASB reporting standards. Beginning in FY 02, public institutions had three options: the original GASB standards, the FASB standards, or the new GASB Statement 35 standards (GASB35). Because of the complexity of the multiple forms used by public institutions, finance data for public institutions for some recent years are not available.

Possible sources of nonsampling error in the financial statistics include nonresponse, imputation, and misclassification. The unweighted response rate has been about 85 to 90 percent for most of the historic years; however, in more recent years, response rates have been much higher because Title IV institutions are required to respond. The 2002 IPEDS data collection was a full-scale web-based collection, which offered features that improved the quality and timeliness of the data. The ability of IPEDS to tailor online data entry forms for each institution based on characteristics such as institutional control, level of institution, and calendar system, and the institutions' ability to submit their data online, were two such features that improved response.

The response rate for the FY 2012 Finance survey component was 99.8 percent. Data collection procedures for the FY 2012 survey are discussed in Enrollment in Postsecondary Institutions, Fall 2012; Financial Statistics, Fiscal Year 2012; Graduation Rates, Selected Cohorts, 2004-09; and Employees in Postsecondary Institutions, Fall 2012: First Look (Provisional Data) (NCES 2013-183).

Two general methods of imputation were used in HEGIS. If prior-year data were available for a nonresponding institution, they were inflated using the Higher Education Price Index and adjusted according to changes in enrollments. If prior-year data were not available, current data were used from peer institutions selected for location (state or region), control, level, and enrollment size of institution. In most cases, estimates for nonreporting institutions in HEGIS were made using data from peer institutions.
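
Stated concretely, the HEGIS fallback logic might look like the following sketch (the function and its inputs are hypothetical, not the actual procedures, which involved detailed peer-matching rules):

```python
def impute_finance(prior_value, hepi_ratio, enrollment_ratio, peer_values):
    """Impute a missing finance value for a nonresponding institution.

    prior_value:      the institution's value from the prior year, or None
    hepi_ratio:       current-year / prior-year Higher Education Price Index
    enrollment_ratio: current-year / prior-year enrollment at the institution
    peer_values:      current-year values from peer institutions matched on
                      location, control, level, and enrollment size
    """
    if prior_value is not None:
        # Inflate prior-year data by HEPI and adjust for enrollment change.
        return prior_value * hepi_ratio * enrollment_ratio
    # Otherwise, fall back on current data from peer institutions.
    return sum(peer_values) / len(peer_values)
```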

Beginning with FY 87, IPEDS included all postsecondary institutions, but maintained comparability with earlier surveys by allowing 2- and 4-year institutions to be tabulated separately. For FY 87 through FY 91, in order to maintain comparability with the historical time series of HEGIS institutions, data were combined from two of the three different survey forms that make up IPEDS. The vast majority of the data were tabulated from form 1, which was used to collect information from public and private not-for-profit 2- and 4-year colleges. Form 2, a condensed form, was used to gather data for 2-year for-profit institutions. Because of the differences in the data requested on the two forms, several assumptions were made about the form 2 reports so that their figures could be included in the degree-granting institution totals.

In IPEDS, the form 2 institutions were not asked to separate appropriations from grants and contracts, nor were they asked to separate state from local sources of funding. For the form 2 institutions, all federal revenues were assumed to be federal grants and contracts, and all state and local revenues were assumed to be restricted state grants and contracts. All other form 2 sources of revenue, except for tuition and fees and sales and services of educational activities, were included under "other." Similar adjustments were made to the expenditure accounts. The form 2 institutions reported instruction and scholarship and fellowship expenditures only. All other educational and general expenditures were allocated to academic support.
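
The revenue assumptions can be read as a simple mapping from the condensed form 2 fields into the form 1 categories. A sketch with hypothetical field names (the actual survey line items differ):

```python
def reallocate_form2_revenues(form2):
    """Map condensed form 2 revenue fields onto form 1 revenue categories.

    `form2` is a dict of revenue amounts; keys here are hypothetical.
    """
    return {
        "tuition_and_fees": form2.get("tuition_and_fees", 0),
        "sales_educational_activities": form2.get("sales_educational_activities", 0),
        # All federal revenue assumed to be federal grants and contracts.
        "federal_grants_and_contracts": form2.get("federal_revenue", 0),
        # All state and local revenue assumed to be restricted state
        # grants and contracts.
        "restricted_state_grants_and_contracts": (
            form2.get("state_revenue", 0) + form2.get("local_revenue", 0)
        ),
        # Everything else goes to "other".
        "other": form2.get("all_other_revenue", 0),
    }
```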

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) found that only a small percentage (2.9 percent, or 168) of postsecondary institutions either revised 2002–03 data or submitted data for items they previously left unreported. Though relatively few institutions made changes, the changes made were relatively large—greater than 10 percent of the original data. With a few exceptions, these changes, large as they were, did not greatly affect the aggregate totals.

Again, institutions were more likely to report data to IPEDS than to Thomson Peterson, and there was a higher percentage reporting different values among those reporting to both. The magnitude of the difference was generally greater for research expenditures. It is likely that the large differences are a function of the way institutions report these data to both entities.

Further information on the IPEDS Finance component may be obtained from

IPEDS Staff
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
http://nces.ed.gov/ncestaff/SurvDetl.asp?surveyID=010

Spring (Graduation Rates and Graduation Rates 200 Percent)

Graduation rates data are collected for full-time, first-time degree- and certificate-seeking undergraduate students. Data included are the number of students entering the institution as full-time, first-time degree- or certificate-seeking students in a particular year (cohort), by race/ethnicity and gender; the number of students completing their program within 150 percent of the normal program completion time; and the number of students who transferred to other institutions.

In the spring 2013 data collection, the Graduation Rates component collected counts of full-time, first-time degree- and certificate-seeking undergraduate students entering an institution in the cohort year (4-year institutions used the cohort year 2006; less-than-4-year institutions used the cohort year 2009), and their completion status as of August 31, 2012 (150 percent of normal program completion time) at the institution initially entered. The response rate for this component was 99.9 percent.

The 200 Percent Graduation Rates component collected counts of full-time, first-time degree- and certificate-seeking undergraduate students beginning their postsecondary education in the reference period and their completion status as of August 31, 2012 (200 percent of normal program completion time) at the same institution where the students started. Four-year institutions report on bachelor's or equivalent degree-seeking students and use cohort year 2004 as the reference period, while less-than-4-year institutions report on all students in the cohort and use cohort year 2008 as the reference period. The response rate for this component was 99.9 percent.
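
Both components apply the same cohort calculation, differing only in the cohort year and hence the share of normal completion time that the August 31, 2012, cutoff represents. A toy sketch (the record layout is hypothetical; the actual component also handles allowable cohort exclusions and transfer-out counts):

```python
from datetime import date

def graduation_rate(cohort, cutoff):
    """Percentage of a full-time, first-time degree/certificate-seeking
    cohort that completed at the initial institution by the cutoff date."""
    completers = sum(
        1 for student in cohort
        if student["completed_on"] is not None
        and student["completed_on"] <= cutoff
    )
    return 100.0 * completers / len(cohort)

# Toy cohort entering a 4-year institution in fall 2006.
cohort_2006 = [
    {"completed_on": date(2010, 5, 15)},  # finished in 4 years
    {"completed_on": date(2012, 5, 20)},  # finished within 150 percent of time
    {"completed_on": None},               # no award at this institution
]
# The 150 percent window for the 2006 cohort ends August 31, 2012.
print(round(graduation_rate(cohort_2006, date(2012, 8, 31)), 1))  # 66.7
```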

Further information on the IPEDS Graduation component may be obtained from

IPEDS Staff
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
http://nces.ed.gov/ncestaff/SurvDetl.asp?surveyID=010

Spring (Human Resources)

The IPEDS Human Resources (HR) component was part of the winter data collection from IPEDS data collection years 2000–01 to 2011–12. For the 2012–13 data collection year, the HR component was moved to the spring 2013 data collection in order to give institutions more time to prepare their survey responses (the spring and winter collections begin on the same date, but the reporting deadline for the spring collection is several weeks later than the reporting deadline for the winter collection).

Human Resources, 2012–13 Collection Year

In 2012–13, new occupational categories replaced the primary function/occupational activity categories previously used in the IPEDS HR component. This change was required in order to align the IPEDS HR categories with the 2010 Standard Occupational Classification (SOC) system. In tandem with this change, the sections making up the IPEDS HR component (previously Employees by Assigned Position [EAP], Fall Staff, and Salaries) were changed to Full-Time Instructional Staff, Full-Time Noninstructional Staff, Salaries, Part-Time Staff, and New Hires.

The webpage "Changes to the 2012–13 IPEDS Data Collection and Changes to Occupational Categories for the 2012–13 Human Resources Data Collection" (http://nces.ed.gov/ipeds/surveys/datacollection2012-13.asp)  provides information on the redesigned IPEDS component, and the webpage "Resources for Implementing Changes to the IPEDS Human Resources (HR) Survey Component Due to Updated 2010 Standard Occupational Classification (SOC) System" (http://nces.ed.gov/ipeds/resource/soc.asp) contains further information, including notes comparing the new classifications with the old ("Comparison of New IPEDS Occupational Categories with Previous Categories"), a crosswalk from the new IPEDS occupational categories to the 2010 SOC occupational categories ("New IPEDS Occupational Categories and 2010 SOC"), answers to frequently asked questions, and a link to current IPEDS HR survey screens.

Human Resources, Collection Years Prior to 2012–13

In IPEDS collection years prior to 2012–13, the Human Resources component was composed of three sections: Employees by Assigned Position (EAP), Fall Staff, and Salaries.

The Employees by Assigned Position (EAP) section categorizes all employees by full- or part-time status, faculty status, and primary function/occupational activity. Institutions with M.D. or D.O. programs are required to report their medical school employees separately. A response to the EAP was required of all 6,858 Title IV institutions and administrative offices in the United States and other jurisdictions for winter 2008–09, and 6,845, or 99.8 percent unweighted, responded. Of the 6,970 Title IV institutions and administrative offices required to respond to the winter 2009–10 EAP, 6,964, or 99.9 percent, responded. Of the 7,256 Title IV institutions and administrative offices required to respond to the EAP for winter 2010–11, 7,252, or 99.9 percent, responded.

The EAP section uses the following primary function/occupational activity categories: primarily instruction; instruction combined with research and/or public service; primarily research; primarily public service; executive/administrative/managerial; other professionals (support/service); graduate assistants; technical and paraprofessionals; clerical and secretarial; skilled crafts; and service/maintenance.

All full-time instructional faculty classified in the full-time, non-medical-school portion of the EAP as either (1) primarily instruction or (2) instruction combined with research and/or public service are included in the Salaries section, unless they are exempt.

The Fall Staff section categorizes all staff on the institution's payroll as of November 1 of the collection year, by employment status (full time or part time), primary function/occupational activity, gender, and race/ethnicity. These data elements are collected from degree-granting and non-degree-granting institutions; however, additional data elements are collected from degree-granting institutions and related administrative offices with 15 or more full-time staff. These elements include faculty status, contract length/teaching period, academic rank, salary class intervals, and newly hired full-time permanent staff.

The Fall Staff section, which is required only in odd-numbered reporting years, was not required during the 2008–09 HR data collection. However, of the 6,858 Title IV institutions and administrative offices in the United States and other jurisdictions, 3,295, or 48.0 percent unweighted, did provide data in the Fall Staff section that year. During the 2009–10 HR data collection, when all 6,970 Title IV institutions and administrative offices were required to respond to the Fall Staff section, 6,964, or 99.9 percent, did so. A response to the Fall Staff section of the 2010–11 HR collection was optional, and 3,364 Title IV institutions and administrative offices responded that year (a response rate of 46.3 percent).

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) found that for 2003–04 employee data items, changes were made by 1.2 percent (77) of the institutions that responded. All of these changes resulted in different employee counts. For both institutional and aggregate differences, the changes had little impact on the original employee count submissions. A large number of institutions reported different staff data to IPEDS and Thomson Peterson; however, the magnitude of the differences was small—usually no more than 17 faculty members for any faculty variable.

The Salaries section collects data for full-time instructional faculty on the institution's payroll as of November 1 of the collection year (except those in medical schools of the EAP section, as described above), by contract length/teaching period, gender, and academic rank. The reporting of data by faculty status in the Salaries section is required from 4-year degree-granting institutions and above only. Salary outlays and fringe benefits are also collected for full-time instructional staff on 9/10- and 11/12-month contracts/teaching periods. This section is applicable to degree-granting institutions unless exempt.

This institutional survey was conducted for most years from 1966–67 to 1987–88; it has been conducted annually since 1989–90, except for 2000–01. Although the survey form has changed a number of times during these years, only comparable data are presented.

Between 1966–67 and 1985–86, this survey differed from other HEGIS surveys in that imputations were not made for nonrespondents. Thus, there is some possibility that the salary averages presented in this report may differ from the results of a complete enumeration of all colleges and universities. Beginning with the surveys for 1987–88, the IPEDS data tabulation procedures included imputations for survey nonrespondents. The unweighted response rate for the 2008–09 Salaries survey section was 99.9 percent. The response rate for the 2009–10 Salaries section was 100.0 percent (4,453 of the 4,455 required institutions responded), and the response rate for 2010–11 was 99.9 percent (4,561 of the 4,565 required institutions responded). Imputation methods for the 2010–11 Salaries survey section are discussed in Employees in Postsecondary Institutions, Fall 2010, and Salaries of Full-Time Instructional Staff, 2010–11 (NCES 2012-276).

Although data from this survey are not subject to sampling error, sources of nonsampling error may include computational errors and misclassification in reporting and processing. The electronic reporting system does allow corrections to prior-year reported or missing data, and this should help with these problems. Also, NCES reviews individual institutions' data for internal and longitudinal consistency and contacts institutions to check inconsistent data.

The Integrated Postsecondary Education Data System Data Quality Study (NCES 2005-175) found that only 1.3 percent of the responding Title IV institutions in 2003–04 made changes to their salaries data. The differences between the imputed data and the revised data were small and found to have little impact on the published data.

Further information on the Human Resources component may be obtained from

IPEDS Staff
Administrative Data Division
Postsecondary Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
http://nces.ed.gov/ncestaff/SurvDetl.asp?surveyID=010

National Assessment of Educational Progress

The National Assessment of Educational Progress (NAEP) is a series of cross-sectional studies initially implemented in 1969 to assess the educational achievement of U.S. students and monitor changes in those achievements. In the main national NAEP, a nationally representative sample of students is assessed at grades 4, 8, and 12 in various academic subjects.

The assessments are based on frameworks developed by the National Assessment Governing Board (NAGB). Assessment items include both multiple-choice and constructed-response (requiring written answers) items. Results are reported in two ways: by average score and by achievement level. Average scores are reported for the nation, for participating states and jurisdictions, and for subgroups of the population. Percentages of students performing at or above three achievement levels (Basic, Proficient, and Advanced) are also reported for these groups.

From 1990 until 2001, main NAEP was conducted for states and other jurisdictions that chose to participate. In 2002, under the provisions of the No Child Left Behind Act of 2001, all states began to participate in main NAEP and an aggregate of all state samples replaced the separate national sample.

Mathematics assessments were administered in 2000, 2003, 2005, 2007, 2009, 2011, and 2013. In 2005, NAGB called for the development of a new mathematics framework. The revisions made to the mathematics framework for the 2005 assessment were intended to reflect recent curricular emphases and better assess the specific objectives for students at each grade level.

The revised mathematics framework focuses on two dimensions: mathematical content and cognitive demand. By considering these two dimensions for each item in the assessment, the framework ensures that NAEP assesses an appropriate balance of content, as well as a variety of ways of knowing and doing mathematics.

For grades 4 and 8, comparisons over time can be made among the assessments prior to and after the implementation of the 2005 framework. The changes to the grade 12 assessment were too drastic to allow the results to be directly compared with previous years. The changes to the grade 12 assessment included adding more questions on algebra, data analysis, and probability to reflect changes in high school mathematics standards and coursework, as well as the merging of the measurement and geometry content areas. The reporting scale for grade 12 mathematics was changed from 0–500 to 0–300. For more information regarding the 2005 mathematics framework revisions, see http://nces.ed.gov/nationsreportcard/mathematics/frameworkcomparison.asp.

Reading assessments were administered in 2000, 2002, 2003, 2005, 2007, 2009, 2011, and 2013. In 2009, a new framework was developed for the 4th-, 8th-, and 12th-grade NAEP reading assessments.

Both a content alignment study and a reading trend or bridge study were conducted to determine if the "new" assessment was comparable to the "old" assessment. Overall, the results of the special analyses suggested that the old and new assessments were similar in terms of their item and scale characteristics and the results they produced for important demographic groups of students. Thus, it was determined that the results of the 2009 reading assessment could still be compared to those from earlier assessment years, thereby maintaining the trend lines first established in 1992. For more information regarding the 2009 reading framework revisions, see http://nces.ed.gov/nationsreportcard/reading/whatmeasure.asp.

In spring 2013, NAEP released results from the NAEP 2012 economics assessment in The Nation's Report Card: Economics 2012 (NCES 2013-453). First administered in 2006, the NAEP economics assessment measures 12th-graders' understanding of a wide range of topics in three main content areas: market economy, national economy, and international economy. The 2012 assessment is based on a nationally representative sample of nearly 11,000 12th-graders. Comparing results from 2012 with results from 2006 helps show whether the nation's high school seniors are becoming more literate in economics.

In the report The Nation's Report Card: A First Look—2013 Mathematics and Reading (NCES 2014-451), NAEP released the results of the 2013 mathematics and reading assessments. Results can also be accessed using the interactive graphics and downloadable data available at the new online Nation's Report Card website (http://nationsreportcard.gov/reading_math_2013/#/).

In addition to conducting the main assessments, NAEP also conducts the long-term trend assessments and trial urban district assessments. Long-term trend assessments provide an opportunity to observe educational progress in reading and mathematics of 9-, 13-, and 17-year-olds since the early 1970s. The long-term trend reading assessment measures students' reading comprehension skills using an array of passages that vary by text types and length. The assessment was designed to measure students' ability to locate specific information in the text provided; make inferences across a passage to provide an explanation; and identify the main idea in the text.

The NAEP long-term trend assessment in mathematics measures knowledge of mathematical facts; ability to carry out computations using paper and pencil; knowledge of basic formulas, such as those applied in geometric settings; and ability to apply mathematics to skills of daily life, such as those involving time and money.

The Nation's Report Card: Trends in Academic Progress 2012 (NCES 2013-456) provides the results of 12 long-term trend reading assessments dating back to 1971 and 11 long-term trend mathematics assessments dating back to 1973.

The NAEP Trial Urban District Assessment (TUDA) focuses attention on urban education and measures educational progress within participating large urban districts. TUDA mathematics and reading assessments are based on the same mathematics and reading assessments used to report national and state results. TUDA reading results were first reported for 6 urban districts in 2002, and TUDA mathematics results were first reported for 10 urban districts in 2003.

The Nation's Report Card: A First Look—2013 Mathematics and Reading Trial Urban District Assessment (NCES 2014-466) provides the results of the 2013 mathematics and reading TUDA, which measured the reading and mathematics progress of 4th- and 8th-graders from 21 urban school districts. Results from the 2013 mathematics and reading TUDA can also be accessed using the interactive graphics and downloadable data available at the online TUDA website (http://nationsreportcard.gov/reading_math_tuda_2013/#/).

Further information on NAEP may be obtained from

Arnold Goldstein
Assessments Division
Reporting and Dissemination Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
arnold.goldstein@ed.gov
http://nces.ed.gov/nationsreportcard

Private School Universe Survey

The purposes of the Private School Universe Survey (PSS) data collection activities are (1) to build an accurate and complete list of private schools to serve as a sampling frame for NCES sample surveys of private schools and (2) to report data on the total number of private schools, teachers, and students in the survey universe. Begun in 1989 under the U.S. Census Bureau, the PSS has been conducted every 2 years, and data for the 1989–90, 1991–92, 1993–94, 1995–96, 1997–98, 1999–2000, 2001–02, 2003–04, 2005–06, 2007–08, and 2009–10 school years have been released. A First Look report on the 2011–12 PSS data, Characteristics of Private Schools in the United States: Results From the 2011–12 Private School Universe Survey (NCES 2013-316), was published in July 2013.

The PSS produces data similar to those of the CCD for public schools, and the two can be used for public-private comparisons. The data are useful for a variety of policy- and research-relevant issues, such as the growth of religiously affiliated schools, the number of private high school graduates, the length of the school year for various private schools, and the number of private school students and teachers.

The target population for this universe survey is all private schools in the United States that meet the PSS criteria of a private school (i.e., the private school is an institution that provides instruction for any of grades K through 12, has one or more teachers to give instruction, is not administered by a public agency, and is not operated in a private home). The survey universe is composed of schools identified from a variety of sources. The main source is a list frame initially developed for the 1989–90 PSS. The list is updated regularly by matching it with lists provided by nationwide private school associations, state departments of education, and other national guides and sources that list private schools. The other source is an area frame search in approximately 124 geographic areas, conducted by the U.S. Census Bureau.

Of the 40,302 schools included in the 2009–10 sample, 10,229 were found ineligible for the survey. Those not responding numbered 1,856, and those responding numbered 28,217. The unweighted response rate for the 2009–10 PSS survey was 93.8 percent.

Of the 39,325 schools included in the 2011–12 sample, 10,030 cases were considered out of scope (not eligible for the PSS). A total of 26,983 private schools completed a PSS interview (15.8 percent completed online), while 2,312 schools refused to participate, resulting in an unweighted response rate of 92.1 percent.

Further information on the PSS may be obtained from

Steve Broughman
Sample Surveys Division
Cross-Sectional Surveys Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
stephen.broughman@ed.gov
http://nces.ed.gov/surveys/pss

Projections of Education Statistics

Since 1964, NCES has published projections of key statistics for elementary and secondary schools and institutions of higher education. The latest report is titled Projections of Education Statistics to 2022 (NCES 2014-051). These projections include statistics for enrollments, instructional staff, graduates, earned degrees, and expenditures. These reports include several alternative projection series and a methodology section describing the techniques and assumptions used to prepare them.

Differences between the reported and projected values are, of course, almost inevitable. An evaluation of past projections revealed that, at the elementary and secondary level, projections of enrollments have been quite accurate: mean absolute percentage differences for enrollment ranged from 0.3 to 1.3 percent for projections from 1 to 5 years in the future, while those for teachers were less than 3 percent. At the higher education level, projections of enrollment have been fairly accurate: mean absolute percentage differences were 5 percent or less for projections from 1 to 5 years into the future.

Further information on Projections of Education Statistics may be obtained from

William Hussar
Annual Reports and Information
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
william.hussar@ed.gov
http://nces.ed.gov/annuals

Other Department of Education Agencies

Office of Special Education Programs

Annual Report to Congress on the Implementation of the Individuals with Disabilities Education Act

The Individuals with Disabilities Education Act (IDEA) is a law ensuring services to children with disabilities throughout the nation. IDEA governs how states and public agencies provide early intervention, special education, and related services to more than 6.5 million eligible infants, toddlers, children, and youth with disabilities.

IDEA, formerly the Education of the Handicapped Act (EHA), requires the Secretary of Education to transmit to Congress annually a report describing the progress made in serving the nation's children with disabilities. This annual report contains information on children served by public schools under the provisions of Part B of the IDEA and on children served in state-operated programs for the disabled under Chapter I of the Elementary and Secondary Education Act.

Statistics on children receiving special education and related services in various settings and school personnel providing such services are reported in an annual submission of data to the Office of Special Education Programs (OSEP) by the 50 states, the District of Columbia, and the outlying areas. The child count information is based on the number of children with disabilities receiving special education and related services on December 1 of each year. Count information is available from http://www.ideadata.org.

Since each participant in programs for the disabled is reported to OSEP, the data are not subject to sampling error. However, nonsampling error can arise from a variety of sources. Some states follow a noncategorical approach to the delivery of special education services, but produce counts of students by disabling condition because Part B of the EHA requires it. In those states that do categorize their disabled students, definitions and labeling practices vary.

Further information on this annual report to Congress may be obtained from

Office of Special Education Programs
Office of Special Education and Rehabilitative Services
U.S. Department of Education
400 Maryland Avenue SW
Washington, DC 20202-7100
http://www.ed.gov/about/reports/annual/osep/index.html
http://idea.ed.gov/
http://www.ideadata.org

Other Governmental Agencies

Bureau of Justice Statistics

National Crime Victimization Survey (NCVS)

The National Crime Victimization Survey (NCVS), administered for the U.S. Bureau of Justice Statistics by the U.S. Census Bureau, is the nation's primary source of information on crime and the victims of crime. Initiated in 1972 and redesigned in 1992, the NCVS collects detailed information on the frequency and nature of the crimes of rape, sexual assault, robbery, aggravated and simple assault, theft, household burglary, and motor vehicle theft experienced by Americans and their households each year. The survey measures both crimes reported to police and crimes not reported to the police.

NCVS estimates presented may differ from those in previously published reports because a small number of victimizations, referred to as series victimizations, are now included using a new counting strategy. Series victimizations (high-frequency repeat victimizations) are six or more similar but separate victimizations that occur with such frequency that the victim is unable to recall each individual event or describe each event in detail. As part of ongoing research efforts associated with the redesign of the NCVS, BJS investigated ways to include series victimizations in estimates of criminal victimization, since including them yields a more accurate estimate of victimization. BJS decided to include series victimizations using the victim's estimate of the number of times the victimizations occurred over the past 6 months, capping the number of victimizations within each series at a maximum of 10. This counting strategy balances the desire to estimate national rates and account for the experiences of persons with repeat victimizations against the estimation errors in the reported number of occurrences. Including series victimizations in national rates results in rather large increases in the level of violent victimization; however, trends in violence are generally similar regardless of whether series victimizations are included. For more information on the new counting strategy and supporting research, see Methods for Counting High-Frequency Repeat Victimizations in the National Crime Victimization Survey at http://bjs.ojp.usdoj.gov/content/pub/pdf/mchfrv.pdf.
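
To make the counting rule concrete, the following sketch (in Python, with invented records; not BJS production code) applies the capping strategy described above: an ordinary victimization counts once, while a series victimization contributes the victim's estimated number of occurrences, capped at 10.

    # Illustrative only: cap series victimizations at 10, as described above.
    SERIES_CAP = 10

    def counted_victimizations(reports):
        """reports: list of (is_series, times_in_past_6_months) tuples."""
        total = 0
        for is_series, times in reports:
            total += min(times, SERIES_CAP) if is_series else 1
        return total

    # Two ordinary victimizations plus one series the victim estimates
    # occurred 14 times contribute 2 + min(14, 10) = 12 victimizations.
    print(counted_victimizations([(False, 1), (False, 1), (True, 14)]))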

Readers should note that in 2003, in accordance with changes to the Office of Management and Budget's standards for the classification of federal data on race and ethnicity, the NCVS item on race/ethnicity was modified. A question on Hispanic origin is followed by a question on race. The new question about race allows the respondent to choose more than one race and delineates Asian as a separate category from Native Hawaiian or Other Pacific Islander. Analysis conducted by the Demographic Surveys Division at the U.S. Census Bureau showed that the new question had very little impact on the aggregate racial distribution of the NCVS respondents, with one exception: There was a 1.6 percentage point decrease in the percentage of respondents who reported themselves as White. Due to changes in race/ethnicity categories, comparisons of race/ethnicity across years should be made with caution.

There were changes in the sample design and survey methodology in the 2006 NCVS that may have affected survey estimates. Caution should be used when comparing the 2006 estimates to those of other years. Data from 2007 onward are comparable to earlier years. Analyses of the 2007 estimates indicate that the program changes made in 2006 had relatively small effects on NCVS estimates. For more information on the 2006 NCVS data, see Criminal Victimization, 2006, at http://bjs.ojp.usdoj.gov/content/pub/pdf/cv06.pdf, the technical notes at http://bjs.ojp.usdoj.gov/content/pub/pdf/cv06tn.pdf, and Criminal Victimization, 2007, at http://bjs.ojp.usdoj.gov/content/pub/pdf/cv07.pdf.

The number of NCVS-eligible households in the sample in 2011 was about 89,000. They were selected using a stratified, multistage cluster design. In the first stage, the primary sampling units (PSUs), consisting of counties or groups of counties, were selected. In the second stage, smaller areas, called Enumeration Districts (EDs), were selected from each sampled PSU. Finally, from selected EDs, clusters of four households, called segments, were selected for interview. At each stage, the selection was done proportionate to population size in order to create a self-weighting sample. The final sample was augmented to account for households constructed after the decennial census. Within each sampled household, U.S. Census Bureau personnel attempt to interview all household members age 12 and older to determine whether they have been victimized by the measured crimes during the 6 months preceding the interview.
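
The stage-by-stage selection probabilities multiply, so selecting proportionate to size at every stage keeps the overall inclusion probability, and hence the base weight, roughly constant across households. A minimal numeric sketch, with invented probabilities:

    # Hypothetical stage probabilities for one household's path through
    # the design; actual values vary by PSU, ED, and segment.
    p_psu = 0.05       # P(household's PSU is selected)
    p_ed = 0.02        # P(ED selected, given PSU selected)
    p_segment = 0.016  # P(household's segment selected, given ED selected)

    p_overall = p_psu * p_ed * p_segment
    base_weight = 1 / p_overall   # households represented by this one
    print(round(base_weight))     # 62500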

The first NCVS interview with a housing unit is conducted in person. Subsequent interviews are conducted by telephone, if possible. About 72,000 persons age 12 and older are interviewed every 6 months. Households remain in the sample for 3 years and are interviewed seven times at 6-month intervals. From the survey's inception until 2006, the initial interview at each sample unit was used only to bound future interviews, establishing a time frame that avoids duplicate counting of crimes in subsequent interviews. Beginning in 2006, data from the initial interview have been adjusted to account for the effects of bounding and included in the survey estimates. After their seventh interview, households are replaced by new sample households. The NCVS has consistently obtained a response rate of over 90 percent at the household level. The completion rate for persons within households in 2011 was about 88 percent. Weights were developed to permit estimates for the total U.S. population 12 years and older.

Further information on the NCVS may be obtained from

Jennifer Truman
Victimization Statistics Branch
Bureau of Justice Statistics
810 Seventh Street NW
Washington, DC 20531
jennifer.truman@usdoj.gov

Bureau of Labor Statistics

Consumer Price Indexes

The Consumer Price Index (CPI) represents changes in prices of all goods and services purchased for consumption by urban households. Indexes are available for two population groups: a CPI for All Urban Consumers (CPI-U) and a CPI for Urban Wage Earners and Clerical Workers (CPI-W). Unless otherwise specified, data are adjusted for inflation using the CPI-U. These values are frequently adjusted to a school-year basis by averaging the July through June figures. Price indexes are available for the United States, the four Census regions, size of city, cross-classifications of regions and size classes, and 26 local areas. The CPI is used chiefly as an economic indicator, as a deflator of other economic series, and as a means of adjusting income.
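
As a sketch of the two adjustments mentioned above, the following snippet (with invented index values) averages twelve July through June CPI figures onto a school-year basis and uses the ratio of two index values to restate a dollar amount in another period's dollars.

    # Hypothetical CPI-U values for July through June of one school year.
    monthly = [229.1, 230.4, 231.4, 231.3, 230.2, 229.6,
               230.3, 232.2, 232.8, 232.5, 232.9, 233.5]
    school_year_cpi = sum(monthly) / len(monthly)

    def to_constant_dollars(amount, cpi_source, cpi_target):
        """Restate an amount using the ratio of two CPI values."""
        return amount * cpi_target / cpi_source

    # Restate $10,000 from the school year above into a later period
    # whose (again hypothetical) CPI-U is 280.0.
    print(round(to_constant_dollars(10_000, school_year_cpi, 280.0)))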

Also available is the Consumer Price Index research series using current methods (CPI-U-RS), which presents an estimate of the CPI-U from 1978 to the present that incorporates, across the entire series, most of the improvements the Bureau of Labor Statistics has made over that time span. The historical CPI-U series does not reflect these changes, though they make the present and future CPI more accurate. The CPI-U-RS has limitations: considerable uncertainty surrounds the magnitude of its adjustments, and several CPI improvements have not been incorporated into it for various reasons. Nonetheless, the CPI-U-RS can serve as a valuable proxy for researchers needing a historical estimate of inflation based on current methods.

Further information on consumer price indexes may be obtained from

Bureau of Labor Statistics
U.S. Department of Labor
2 Massachusetts Avenue NE
Washington, DC 20212
http://www.bls.gov/cpi

Employment and Unemployment Surveys

Statistics on the employment and unemployment status of the population and related data are compiled by the Bureau of Labor Statistics (BLS) using data from the Current Population Survey (CPS) (see below) and other surveys. The Current Population Survey, a monthly household survey conducted by the U.S. Census Bureau for the Bureau of Labor Statistics, provides a comprehensive body of information on the employment and unemployment experience of the nation's population, classified by age, sex, race, and various other characteristics.

Further information on unemployment surveys may be obtained from

Bureau of Labor Statistics
U.S. Department of Labor
2 Massachusetts Avenue NE
Washington, DC 20212
cpsinfo@bls.gov
http://www.bls.gov/bls/employment.htm

Census Bureau

American Community Survey (ACS)

The Census Bureau introduced the American Community Survey (ACS) in 1996. Fully implemented in 2005, it provides a large monthly sample of demographic, socioeconomic, and housing data comparable in content to the long form of the decennial census, which was last used in 2000. Aggregated over time, these data serve as a replacement for the decennial census long form. The survey includes questions mandated by federal law, federal regulations, and court decisions.

Since 2011, the survey has been mailed to approximately 295,000 addresses in the United States and Puerto Rico each month, or about 3.5 million addresses annually. A larger proportion of addresses in small governmental units (e.g., American Indian reservations, small counties, and towns) also receive the survey. The monthly sample size is designed to approximate the ratio used in the 2000 Census, which requires more intensive distribution in these areas. The ACS covers the U.S. resident population, which includes the entire civilian, noninstitutionalized population; incarcerated persons; institutionalized persons; and the active duty military who are in the United States. In 2006, the ACS began interviewing residents in group quarter facilities. Institutionalized group quarters include adult and juvenile correctional facilities, nursing facilities, and other health care facilities. Noninstitutionalized group quarters include college and university housing, military barracks, and other noninstitutional facilities such as workers and religious group quarters and temporary shelters for the homeless.

National-level data from the ACS are available from 2000 onward. The ACS produces 1-year estimates for populations of 65,000 or more, 3-year estimates for populations of 20,000 or more, and 5-year estimates for populations of almost any size. To illustrate, 2012 ACS 1-year estimates represented data collected between January 1, 2012, and December 31, 2012; 2010–12 ACS 3-year estimates represented data collected between January 1, 2010, and December 31, 2012; and 2008–12 ACS 5-year estimates represented data collected between January 1, 2008, and December 31, 2012.

Further information about the ACS is available at http://www.census.gov/acs/www/.

Current Population Survey

The Current Population Survey (CPS) is a monthly survey of about 60,000 households conducted by the U.S. Census Bureau for the Bureau of Labor Statistics. The CPS is the primary source of labor force statistics for the U.S. civilian noninstitutionalized population (it excludes, for example, military personnel and their families living on bases and inmates of correctional institutions). In addition, supplemental questionnaires are used to provide further information about the U.S. population. Specifically, in October, detailed questions regarding school enrollment and school characteristics are asked. In March, detailed questions regarding income are asked.

The current sample design, introduced in July 2001, includes about 72,000 households. Each month about 58,900 of the 72,000 households are eligible for interview, and of those, 7 to 10 percent are not interviewed because of temporary absence or unavailability. Information is obtained each month from those in the household who are 15 years of age and older, and demographic data are collected for children 0–14 years of age. In addition, supplemental questions regarding school enrollment are asked about eligible household members ages 3 and older. Prior to July 2001, data were collected in the CPS from about 50,000 dwelling units. The samples are initially selected based on the decennial census files and are periodically updated to reflect new housing construction.

A major redesign of the CPS was implemented in January 1994 to improve the quality of the data collected. Survey questions were revised, new questions were added, and computer-assisted interviewing methods were used for the survey data collection. Further information about the redesign is available in Current Population Survey, October 1995: (School Enrollment Supplement) Technical Documentation at http://www.census.gov/prod/techdoc/cps/cpsoct95.pdf.

Caution should be used when comparing data from 1994 through 2001 with data from 1993 and earlier. Data from 1994 through 2001 reflect 1990 census-based population controls, while data from 1993 and earlier reflect 1980 or earlier census-based population controls. Also use caution when comparing data from 1994 through 2001 with data from 2002 onward, as data from 2002 onward reflect 2000 census-based controls. Changes in population controls generally have relatively little impact on summary measures such as means, medians, and percentage distributions, but they can have a significant impact on population counts. For example, use of the 1990 census-based population controls resulted in about a 1 percent increase in the civilian noninstitutional population and in the number of families and households. Thus, estimates of levels for data collected in 1994 and later years will differ from those for earlier years by more than what could be attributed to actual changes in the population. These differences could be disproportionately greater for certain subpopulation groups than for the total population.

Beginning in 2003, race/ethnicity questions expanded to include information on people of two or more races. Native Hawaiian/Pacific Islander data are collected separately from Asian data. The questions have also been worded to make it clear that self-reported data on race/ethnicity should reflect the race/ethnicity with which the responder identifies, rather than what may be written in official documentation.

The estimation procedure employed for monthly CPS data involves inflating weighted sample results to independent estimates of characteristics of the civilian noninstitutional population in the United States by age, sex, and race. These independent estimates are based on statistics from decennial censuses; statistics on births, deaths, immigration, and emigration; and statistics on the population in the armed services. Generalized standard error tables are provided in the Current Population Reports; methods for deriving standard errors can be found within the CPS technical documentation at http://www.census.gov/cps/methodology/techdocs.html. The CPS data are subject to both nonsampling and sampling errors.

Prior to 2009, standard errors were estimated using the generalized variance function. The generalized variance function is a simple model that expresses the variance as a function of the expected value of a survey estimate. Beginning with March 2009 CPS data, standard errors were estimated using replicate weight methodology. Those interested in using CPS household-level supplement replicate weights to calculate variances may refer to Estimating Current Population Survey (CPS) Household-Level Supplement Variances Using Replicate Weights at http://thedataweb.rm.census.gov/pub/cps/supps/HH-level_Use_of_the_Public_Use_Replicate_Weight_File.doc.
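
The contrast between the two approaches can be sketched as follows. One common generalized variance function models the variance of an estimated total x as a*x**2 + b*x, with parameters published per characteristic, while the replicate-weight approach re-computes the statistic under each set of replicate weights and pools the squared deviations. The a and b values, the 160-replicate count, and the 4/160 factor below are assumptions drawn from typical CPS documentation and should be verified against the technical documentation cited above.

    import math

    def gvf_se(x, a=-0.000005, b=3000.0):
        """GVF standard error of an estimated total x (a, b invented)."""
        return math.sqrt(a * x * x + b * x)

    def replicate_se(full_estimate, replicate_estimates, factor=4 / 160):
        """Replicate-weight standard error (the factor is an assumption)."""
        var = factor * sum((t - full_estimate) ** 2
                           for t in replicate_estimates)
        return math.sqrt(var)

    print(round(gvf_se(1_000_000)))  # about 54,700 with these made-up a, b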

Further information on CPS may be obtained from

Education and Social Stratification Branch
Population Division
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
http://www.census.gov/cps

Dropouts

Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population ages 3 years and over as part of the monthly basic survey on labor force participation. In addition to gathering the information on school enrollment, with the limitations on accuracy as noted below under "School Enrollment," the survey data permit calculations of dropout rates. Both status and event dropout rates are tabulated from the October CPS. Event rates describe the proportion of students who leave school each year without completing a high school program. Status rates provide cumulative data on dropouts among all young adults within a specified age range. Status rates are higher than event rates because they include all dropouts ages 16 through 24, regardless of when they last attended school.
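
A stylized sketch (with invented records and deliberately simplified definitions) illustrates why status rates run higher than event rates: the status measure counts everyone in the age range who has ever dropped out, not just those who left school in the past year.

    # Hypothetical microdata; fields are simplified stand-ins for CPS items.
    people = [
        {"age": 17, "enrolled_last_year": True,  "enrolled_now": True,  "hs_credential": False},
        {"age": 18, "enrolled_last_year": True,  "enrolled_now": False, "hs_credential": True},
        {"age": 19, "enrolled_last_year": True,  "enrolled_now": False, "hs_credential": False},
        {"age": 22, "enrolled_last_year": False, "enrolled_now": False, "hs_credential": False},
        {"age": 24, "enrolled_last_year": False, "enrolled_now": False, "hs_credential": True},
    ]

    def event_rate(pop):
        """Share of last year's students who left without completing."""
        at_risk = [p for p in pop if p["enrolled_last_year"]]
        left = [p for p in at_risk
                if not p["enrolled_now"] and not p["hs_credential"]]
        return len(left) / len(at_risk)

    def status_rate(pop, lo=16, hi=24):
        """Share of all 16- to 24-year-olds who are currently dropouts."""
        cohort = [p for p in pop if lo <= p["age"] <= hi]
        dropouts = [p for p in cohort
                    if not p["enrolled_now"] and not p["hs_credential"]]
        return len(dropouts) / len(cohort)

    print(event_rate(people), status_rate(people))  # 0.33 vs. 0.4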

In addition to other survey limitations, dropout rates may be affected by survey coverage and exclusion of the institutionalized population. The incarcerated population has grown more rapidly and has a higher dropout rate than the general population. Dropout rates for the total population might be higher than those for the noninstitutionalized population if the prison and jail populations were included in the dropout rate calculations. On the other hand, if military personnel, who tend to be high school graduates, were included, it might offset some or all of the impact from the theoretical inclusion of the jail and prison population.

Another area of concern with tabulations involving young people in household surveys is their relatively low coverage ratio compared with that of older age groups. CPS undercoverage results from missed housing units and missed people within sample households. Overall CPS undercoverage for October 2012 is estimated to be about 14 percent. CPS coverage varies with age, sex, and race. Generally, coverage is higher for females than for males and higher for non-Blacks than for Blacks. For example, in October 2012 the coverage ratio for Black 20- to 24-year-old males was 63 percent. The CPS weighting procedure partially corrects for the bias due to undercoverage. Further information on CPS methodology may be obtained from http://www.census.gov/cps.
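
The coverage ratio itself is a simple quotient, sketched below with invented counts consistent with the 63 percent figure cited above.

    # Weighted survey count of a group divided by its independent
    # census-based population control; the values here are hypothetical.
    survey_estimate = 107_100
    population_control = 170_000
    print(round(survey_estimate / population_control, 2))  # 0.63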

Further information on the calculation of dropouts and dropout rates may be obtained from Trends in High School Dropout and Completion Rates in the United States: 1972–2009 (NCES 2012-006) at http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2012006 or by contacting

Chris Chapman
Sample Surveys Division
Cross-Sectional Surveys Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
chris.chapman@ed.gov

Educational Attainment

Reports documenting educational attainment are produced by the Census Bureau using March CPS supplement (Annual Social and Economic Supplement [ASEC]) results. The sample size for the 2013 ASEC supplement (including basic CPS) was about 99,000 households. The latest release is Educational Attainment in the United States: 2013; the tables may be downloaded at http://www.census.gov/hhes/socdemo/education/data/cps/2013/tables.html.

In addition to the general constraints of the CPS, some data indicate that respondents tend to overestimate the educational level of members of their household. Some inaccuracy is due to respondents not knowing the exact educational attainment of each household member and to hesitancy to report anything less than a high school education. Another cause of nonsampling variability is the change in the size of the armed services over the years.

Further information on CPS's educational attainment data may be obtained from

Education and Social Stratification Branch
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
http://www.census.gov/hhes/socdemo/education

School Enrollment

Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population ages 3 years and over. Prior to 2001, the October supplement consisted of approximately 47,000 interviewed households. Beginning with the October 2001 supplement, the sample was expanded by 9,000 to a total of approximately 56,000 interviewed households. The main sources of nonsampling variability in the responses to the supplement are those inherent in the survey instrument. The question of current enrollment may not be answered accurately for various reasons. Some respondents may not know current grade information for every student in the household, a problem especially prevalent for households with members in college or in nursery school. Confusion over college credits or hours taken by a student may make it difficult to determine the year in which the student is enrolled. Problems may occur with the definition of nursery school (a group or class organized to provide educational experiences for children) where respondents' interpretations of "educational experiences" vary.

For the October 2012 basic CPS, the household-level nonresponse rate was 9.6 percent. The person-level nonresponse rate for the school enrollment supplement was an additional 9.2 percent. Since the basic CPS nonresponse rate is a household-level rate and the school enrollment supplement nonresponse rate is a person-level rate, these rates cannot be combined to derive an overall nonresponse rate. Nonresponding households may have fewer persons than interviewed ones, so combining these rates may lead to an overestimate of the true overall nonresponse rate for persons for the school enrollment supplement.
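
Naively compounding the two rates treats both as if they were person-level, as in the arithmetic below; as noted above, the resulting figure (about 17.9 percent) likely overstates the true overall person-level nonresponse, because nonresponding households tend to contain fewer eligible persons.

    # Naive compounding of the household-level (9.6 percent) and
    # person-level (9.2 percent) nonresponse rates; an upper-bound-style
    # figure, not a valid overall nonresponse rate.
    naive = 1 - (1 - 0.096) * (1 - 0.092)
    print(round(naive, 3))  # 0.179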

Further information on CPS methodology may be obtained from http://www.census.gov/cps.

Further information on the CPS School Enrollment Supplement may be obtained from

Education and Social Stratification Branch
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
http://www.census.gov/hhes/school/index.html

Decennial Census, Population Estimates, and Population Projections

The decennial census is a universe survey mandated by the U.S. Constitution. It is a questionnaire sent to every household in the country, and it is composed of seven questions about the household and its members (name, sex, age, relationship, Hispanic origin, race, and whether the housing unit is owned or rented). The Census Bureau also produces annual estimates of the resident population by demographic characteristics (age, sex, race, and Hispanic origin) for the nation, states, and counties, as well as national and state projections for the resident population. The reference date for population estimates is July 1 of the given year. With each new issue of July 1 estimates, the Census Bureau revises estimates for each year back to the last census. Previously published estimates are superseded and archived.

Census respondents self-report race and ethnicity. The race questions on the 1990 and 2000 censuses differed in some significant ways. In 1990, the respondent was instructed to select the one race "that the respondent considers himself/herself to be," whereas in 2000, the respondent could select one or more races that the person considered himself or herself to be. American Indian, Eskimo, and Aleut were three separate race categories in 1990; in 2000, the American Indian and Alaska Native categories were combined, with an option to write in a tribal affiliation. This write-in option was provided only for the American Indian category in 1990. There was a combined Asian and Pacific Islander race category in 1990, but the groups were separated into two categories in 2000.

The census question on ethnicity asks whether the respondent is of Hispanic origin, regardless of the race option(s) selected; thus, persons of Hispanic origin may be of any race. In the 2000 census, respondents were first asked, "Is this person Spanish/Hispanic/Latino?" and then given the following options: No, not Spanish/Hispanic/Latino; Yes, Puerto Rican; Yes, Mexican, Mexican American, Chicano; Yes, Cuban; and Yes, other Spanish/Hispanic/Latino (with space to print the specific group). In the 2010 census, respondents were asked "Is this person of Hispanic, Latino, or Spanish origin?" The options given were No, not of Hispanic, Latino, or Spanish origin; Yes, Mexican, Mexican Am., Chicano; Yes, Puerto Rican; Yes, Cuban; and Yes, another Hispanic, Latino, or Spanish origin—along with instructions to print "Argentinean, Colombian, Dominican, Nicaraguan, Salvadoran, Spaniard, and so on" in a specific box.

The 2000 and 2010 censuses each asked the respondent "What is this person's race?" and allowed the respondent to select one or more options. The options provided were largely the same in both the 2000 and 2010 censuses: White; Black, African American, or Negro; American Indian or Alaska Native (with space to print the name of enrolled or principal tribe); Asian Indian; Japanese; Native Hawaiian; Chinese; Korean; Guamanian or Chamorro; Filipino; Vietnamese; Samoan; Other Asian; Other Pacific Islander; and Some other race. The last three options included space to print the specific race. Two significant differences between the 2000 and 2010 census questions on race were that no race examples were provided for the "Other Asian" and "Other Pacific Islander" responses in 2000, whereas the race examples of "Hmong, Laotian, Thai, Pakistani, Cambodian, and so on" and "Fijian, Tongan, and so on," were provided for the "Other Asian" and "Other Pacific Islander" responses, respectively, in 2010.

The census population estimates program modified the enumerated population from the 2010 census to produce the population estimates base for 2010 and onward. As part of the modification, the Census Bureau recoded the "Some other race" responses from the 2010 census to one or more of the five OMB race categories used in the estimates program (for more information, see http://www.census.gov/popest/methodology/2012-nat-st-co-meth.pdf).

Further information on the decennial census may be obtained from http://www.census.gov.

Survey of Income and Program Participation

The main objective of the Survey of Income and Program Participation (SIPP) is to provide accurate and comprehensive information about the income and program participation of individuals and households in the United States and about the principal determinants of income and program participation. SIPP offers detailed information on cash and noncash income on a subannual basis. The survey also collects data on taxes, assets, liabilities, and participation in government transfer programs. SIPP data allow the government to evaluate the effectiveness of federal, state, and local programs.

The survey design is a continuous series of national panels, with sample size ranging from approximately 14,000 to 36,700 interviewed households. The duration of each panel ranges from 2½ to 4 years. The SIPP sample is a multistage-stratified sample of the U.S. civilian noninstitutionalized population. For the 1984–93 panels, a new panel of households was introduced each year in February. A 4-year panel was introduced in April 1996. A 2000 panel was introduced in February 2000 for two waves, but was cancelled after 8 months. A 2½-year panel was introduced in February 2004 and is the first SIPP panel to use the 2000 decennial-based redesign of the sample. All household members ages 15 years and over are interviewed by self-response, if possible. Proxy response is permitted when household members are not available for interviewing. The latest panel was selected in September 2008.

The SIPP content is built around a "core" of labor force, program participation, and income questions designed to measure the economic situation of people in the United States. These questions expand the data currently available on the distribution of cash and noncash income and are repeated at each interviewing wave. The survey uses a 4-month recall period, with approximately the same number of interviews being conducted in each month of the 4-month period for each wave. Interviews are conducted by personal visit and by decentralized telephone.

The survey has been designed to also provide a broader context for analysis by adding questions on a variety of topics not covered in the core section. These questions are labeled "topical modules" and are assigned to particular interviewing waves of the survey. Topics covered by the modules include personal history, child care, wealth, program eligibility, child support, utilization and cost of healthcare, disability, school enrollment, taxes, and annual income.

Further information on the SIPP may be obtained from

Economics and Statistics Administration
Census Bureau
U.S. Department of Commerce
4600 Silver Hill Road
Washington, DC 20233
http://www.census.gov/sipp/intro.html

Other Organization Sources

International Association for the Evaluation of Educational Achievement

The International Association for the Evaluation of Educational Achievement (IEA) is composed of governmental research centers and national research institutions around the world whose aim is to investigate education problems common among countries. Since its inception in 1958, the IEA has conducted more than 30 research studies of cross-national achievement. The regular cycle of studies encompasses learning in basic school subjects. Examples are the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS). IEA projects also include studies of particular interest to IEA members, such as the TIMSS 1999 Video Study of Mathematics and Science Teaching, the Civic Education Study, and studies on information technology in education.

The international bodies that coordinate international assessments vary in the labels they apply to participating education systems, most of which are countries. IEA differentiates between IEA members, which IEA refers to as "countries" in all cases, and "benchmarking participants." IEA members include countries such as the United States and Ireland, as well as subnational entities such as England and Scotland (which are both part of the United Kingdom), the Flemish community of Belgium, and Hong Kong-CHN (which is a Special Administrative Region of China). IEA benchmarking participants are all subnational entities and include Canadian provinces, U.S. states, and Dubai in the United Arab Emirates (among others). Benchmarking participants, like the participating countries, are given the opportunity to assess the comparative international standing of their students' achievement and to view their curriculum and instruction in an international context. Subnational entities that participated as benchmarking participants are excluded from the analyses presented here.

Some IEA studies, such as TIMSS and PIRLS, include an assessment portion as well as contextual questionnaires to collect information about students' home and school experiences. The TIMSS and PIRLS scales, including the scale averages and standard deviations, are designed to remain constant from assessment to assessment so that education systems (including countries and subnational education systems) can compare their scores over time, as well as compare their scores directly with the scores of other education systems. Although each scale was created to have a mean of 500 and a standard deviation of 100, the subject matter and the level of difficulty of items necessarily differ by grade, subject, and domain/dimension. Therefore, direct comparisons between scores across grades, subjects, and different domain/dimension types should not be made.
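
A minimal sketch of how reporting on a fixed scale works: estimates on an arbitrary underlying metric are linearly transformed so that the baseline administration has a mean of 500 and a standard deviation of 100, and later administrations reuse the same transformation so that scores stay comparable over time. The numbers below are invented, and the actual TIMSS and PIRLS scaling rests on item response theory models well beyond this sketch.

    import statistics

    baseline = [-0.6, -0.2, 0.0, 0.3, 0.9]  # hypothetical proficiency values
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)

    def to_reporting_scale(theta):
        """Fix the baseline mean at 500 and standard deviation at 100."""
        return 500 + 100 * (theta - mu) / sigma

    print([round(to_reporting_scale(t)) for t in baseline])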

Further information on the International Association for the Evaluation of Educational Achievement may be obtained from http://www.iea.nl.

Trends in International Mathematics and Science Study

The Trends in International Mathematics and Science Study (TIMSS, formerly known as the Third International Mathematics and Science Study) provides reliable and timely data on the mathematics and science achievement of U.S. fourth- and eighth-graders compared with that of their peers in other countries. TIMSS is on a 4-year cycle, with data collection occurring in 1995, 1999 (eighth grade only), 2003, 2007, and 2011. In 2011, a total of 77 education systems, including 63 IEA members and 14 benchmarking participants, participated in TIMSS. The next TIMSS data collection is scheduled for 2015. TIMSS collects information through mathematics and science assessments and questionnaires. The questionnaires request information to help provide a context for student performance. Topics include students' attitudes and beliefs about learning mathematics and science, what students do as part of their mathematics and science lessons, students' completion of homework, and their lives both in and outside of school. They also cover teachers' perceptions of their preparedness for teaching mathematics and science topics, teaching assignments, class size and organization, instructional content and practices, and participation in professional development activities, as well as principals' viewpoints on policy and budget responsibilities, curriculum and instruction issues, and student behavior, along with descriptions of the organization of schools and courses. The assessments and questionnaires are designed to specifications in a guiding framework. The TIMSS framework describes the mathematics and science content to be assessed and provides grade-specific objectives, an overview of the assessment design, and guidelines for item development.

Progress in International Reading Literacy Study

The Progress in International Reading Literacy Study (PIRLS) provides reliable and timely data on the reading literacy of U.S. fourth-graders compared with that of their peers in other countries. PIRLS is on a 5-year cycle, with data having been collected in 2001, 2006, and 2011. In 2011, a total of 57 education systems, including 48 IEA members and 9 benchmarking participants, participated in PIRLS. The next PIRLS data collection is scheduled for 2016. PIRLS collects information through a reading literacy assessment and questionnaires that help to provide a context for student performance. Questionnaires are administered to collect information about students' home and school experiences in learning to read. A student questionnaire addresses students' attitudes towards reading and their reading habits. In addition, questionnaires are given to students' teachers and school principals to gather information about students' school experiences in developing reading literacy. In countries other than the United States, a parent questionnaire is also administered. The assessments and questionnaires are designed to specifications in a guiding framework. The PIRLS framework describes the reading content to be assessed and provides objectives specific to fourth grade, an overview of the assessment design, and guidelines for item development.

TIMSS and PIRLS Sampling and Response Rates

It is not feasible to assess every fourth- or eighth-grade student in the United States. As is done in all participating countries and other education systems, representative samples of students are selected. The sample design employed by TIMSS and PIRLS in 2011 is generally referred to as a two-stage stratified cluster sample. In the first stage of sampling, individual schools were selected with a probability proportionate to size (PPS) approach, which means that the probability is proportional to the estimated number of students enrolled in the target grade. In the second stage of sampling, intact classrooms were selected within sampled schools.
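
The first-stage selection can be sketched with systematic PPS sampling, one standard way to implement selection "with probability proportionate to size." The operational TIMSS and PIRLS design adds stratification, substitute assignment, and other refinements omitted from this illustration.

    import random

    def pps_systematic(schools, n):
        """Select n schools with probability proportional to enrollment.

        schools: list of (name, enrollment) pairs. Very large schools can
        legitimately be selected more than once under PPS.
        """
        total = sum(size for _, size in schools)
        step = total / n
        threshold = random.uniform(0, step)
        picks, cumulative = [], 0.0
        for name, size in schools:
            cumulative += size
            while cumulative > threshold and len(picks) < n:
                picks.append(name)
                threshold += step
        return picks

    schools = [("A", 900), ("B", 300), ("C", 600), ("D", 1200), ("E", 400)]
    print(pps_systematic(schools, 2))  # e.g., ['C', 'D']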

TIMSS and PIRLS guidelines call for a minimum of 150 schools to be sampled, with a minimum of 4,000 students assessed. The basic sample design of one classroom per school was designed to yield a total sample of approximately 4,500 students per population.

About 23,000 students in almost 900 schools across the United States participated in the 2011 TIMSS, joining 600,000 other student participants around the world. Because the Progress in International Reading Literacy Study (PIRLS) was also administered at grade 4 in spring 2011, TIMSS and PIRLS in the United States were administered in the same schools to the extent feasible. Students took either TIMSS or PIRLS on the day of the assessments. About 13,000 U.S. students participated in PIRLS in 2011, joining 300,000 other student participants around the world. Accommodations were not provided for students with disabilities or students who were unable to read or speak the language of the test. These students were excluded from the sample. The IEA requirement is that the overall exclusion rate, which is composed of exclusions of schools and students, should not exceed more than 5 percent of the national desired target population.

In order to minimize the potential for response biases, the IEA developed participation or response rate standards that apply to all participating education systems and govern whether or not an education system's data are included in the TIMSS or PIRLS international datasets and the way in which its statistics are presented in the international reports. These standards were set using composites of response rates at the school, classroom, and student and teacher levels. Response rates were calculated with and without the inclusion of substitute schools that were selected to replace schools refusing to participate. In TIMSS 2011 at grade 4 in the United States, the weighted school participation rate was 79 percent before the use of substitute schools and 84 percent after; the weighted student response rate was 95 percent. In TIMSS 2011 at grade 8 in the United States, the weighted school participation rate was 87 percent both before and after the use of substitute schools; the weighted student response rate was 94 percent. In the 2011 PIRLS administered in the United States, the weighted school participation rate was 80 percent before the use of substitute schools and 85 percent after; the weighted student response rate was 96 percent.
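
The before- and after-substitution rates can be sketched as weighted quotients over the originally sampled schools; the weights and outcomes below are invented for illustration.

    sampled = [  # (school weight, outcome) for each originally sampled school
        (120, "participated"),
        (80,  "replaced_by_substitute"),  # refusal covered by a substitute
        (100, "participated"),
        (110, "refused"),                 # refusal with no substitute
    ]

    def participation_rate(schools, with_substitutes):
        ok = {"participated"}
        if with_substitutes:
            ok.add("replaced_by_substitute")
        return (sum(w for w, o in schools if o in ok)
                / sum(w for w, o in schools))

    print(round(participation_rate(sampled, False), 2))  # 0.54 before
    print(round(participation_rate(sampled, True), 2))   # 0.73 after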

Further information on the TIMSS study may be obtained from

Stephen Provasnik
Assessments Division
International Assessment Branch
National Center for Education Statistics
1990 K Street NW, Room 9034
Washington, DC 20006
(202) 502-7480
stephen.provasnik@ed.gov
http://nces.ed.gov/timss
http://www.iea.nl/timss_2011.html

Further information on the PIRLS study may be obtained from

Sheila Thompson
Assessments Division
International Assessment Branch
National Center for Education Statistics
1990 K Street NW, Room 9031
Washington, DC 20006
(202) 502-7425
sheila.thompson@ed.gov
http://nces.ed.gov/surveys/pirls/
http://www.iea.nl/pirls_2011.html

Organization for Economic Cooperation and Development

The Organization for Economic Cooperation and Development (OECD) publishes analyses of national policies and survey data in education, training, and economics in OECD and partner countries. Newer studies include student survey data on financial literacy and on digital literacy.

Education at a Glance (EAG)

To highlight current education issues and create a set of comparative education indicators that represent key features of education systems, OECD initiated the Indicators of Education Systems (INES) project and charged the Centre for Educational Research and Innovation (CERI) with developing the cross-national indicators for it. The development of these indicators involved representatives of the OECD countries and the OECD Secretariat. Improvements in data quality and comparability among OECD countries have resulted from the country-to-country interaction sponsored through the INES project. The most recent publication in this series is Education at a Glance 2013: OECD Indicators.

The 2013 EAG featured the 34 OECD countries: Australia, Austria, Belgium, Canada, Chile, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, the Republic of Korea, Luxembourg, Mexico, the Netherlands, New Zealand, Norway, Poland, Portugal, the Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Turkey, the United Kingdom, and the United States. In addition to the 34 OECD countries, two non-OECD countries that participated in OECD's Indicators of Education Systems (INES) program, Brazil and the Russian Federation, were often included, along with six other G20 countries that did not participate in INES (Argentina, China, India, Indonesia, Saudi Arabia, and South Africa).

The OECD Handbook for Internationally Comparative Education Statistics: Concepts, Standards, Definitions, and Classifications provides countries with specific guidance on how to prepare information for OECD education surveys; facilitates countries' understanding of OECD indicators and their use in policy analysis; and provides a reference for collecting and assimilating educational data. Chapter 7 of the OECD Handbook for Internationally Comparative Education Statistics contains a discussion of data quality issues. Users should examine footnotes carefully to recognize some of the data limitations.

Further information on international education statistics may be obtained from

Andreas Schleicher
Deputy Director for Education and Skills and
Special Advisor on Education Policy to the OECD's Secretary General
OECD Directorate for Education and Skills
2, rue André Pascal
75775 Paris CEDEX 16
France
andreas.schleicher@oecd.org
http://www.oecd.org

Program for International Student Assessment

The Program for International Student Assessment (PISA) is a system of international assessments that focuses on 15-year-olds' capabilities in reading literacy, mathematics literacy, and science literacy. PISA also includes measures of general, or cross-curricular, competencies such as learning strategies. PISA emphasizes functional skills that students have acquired as they near the end of mandatory schooling. PISA is organized by the Organization for Economic Cooperation and Development (OECD), an intergovernmental organization of industrialized countries, and was administered for the first time in 2000, when 43 education systems participated. In 2003, 41 education systems participated in the assessment; in 2006, 57 education systems (30 OECD member countries and 27 nonmember countries or education systems) participated; and in 2009, 65 education systems (34 OECD member countries and 31 nonmember countries or education systems) participated. (An additional nine education systems administered PISA 2009 in 2010.) In PISA 2012, the most recent administration of PISA, 65 education systems (34 OECD member countries and 31 nonmember countries or education systems), as well as the U.S. states of Connecticut, Florida, and Massachusetts, participated.

PISA is a 2-hour paper-and-pencil exam. Assessment items include a combination of multiple-choice questions and open-ended questions that require students to develop their own response. PISA scores are reported on a scale that ranges from 0 to 1,000, with the OECD mean set at 500 and the standard deviation at 100. In 2012, mathematics, science, and reading literacy were assessed primarily through a paper-and-pencil exam, and problem solving was administered using a computer-based exam. Education systems could also participate in optional paper-and-pencil financial literacy assessments and computer-based mathematics and reading assessments.

PISA is implemented on a 3-year cycle that began in 2000. Each PISA assessment cycle focuses on one subject in particular, although all three subjects are assessed every 3 years. In the first cycle, PISA 2000, reading literacy was the major focus, occupying roughly two-thirds of assessment time. For 2003, PISA focused on mathematics literacy as well as the ability of students to solve problems in real-life settings. In 2006, PISA focused on science literacy. In 2009, PISA focused on reading literacy again. In 2012, PISA focused on mathematics literacy.

The intent of PISA reporting is to provide an overall description of performance in reading literacy, mathematics literacy, and science literacy every 3 years, and to provide a more detailed look at each domain in the years when it is the major focus. These cycles will allow education systems to compare changes in trends for each of the three subject areas over time.

To implement PISA, each of the participating education systems scientifically draws a nationally representative sample of 15-year-olds, regardless of grade level. In the United States, about 6,100 students from 161 public and private schools took the PISA 2012 assessment. In the U.S. state education systems, about 1,700 students at 50 schools in Connecticut, about 1,900 students at 54 schools in Florida, and about 1,700 students at 49 schools in Massachusetts took the 2012 assessment. PISA 2012 was only administered at public schools in the U.S. state education systems.

In each education system, the assessment is translated into the primary language of instruction; in the United States, all materials are written in English.

Further information on PISA may be obtained from

Holly Xie
Dana Kelly
Assessments Division
International Assessment Branch
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
holly.xie@ed.gov
dana.kelly@ed.gov
http://nces.ed.gov/surveys/pisa
