Projections of Education Statistics to 2014, published September 2005.

Appendix C: Data Sources

Sources and Comparability of Data

The information in this report was obtained from many sources, including federal and state agencies, private research organizations, and professional associations. The data were collected by many methods, including surveys of a universe (such as all colleges) or of a sample, and compilations of administrative records. Care should be used when comparing data from different sources. Differences in procedures, such as timing, phrasing of questions, and interviewer training, mean that the results from the different sources are not strictly comparable. More extensive documentation of one survey's procedures than of another's does not imply more problems with the data, only that more information is available on the survey.

Accuracy of Data

The accuracy of any statistic is determined by the joint effects of “sampling” and “nonsampling” errors. Estimates based on a sample will differ from the figures that would have been obtained if a complete census had been taken using the same survey instruments, instructions, and procedures. Besides sampling errors, both universe and sample surveys are subject to errors of design, reporting, and processing, and to errors due to nonresponse. To the extent possible, these nonsampling errors are kept to a minimum by methods built into the survey procedures. In general, however, the effects of nonsampling errors are more difficult to gauge than those produced by sampling variability.

Sampling Errors

The standard error is the primary measure of sampling variability. It provides a specific range—with a stated confidence—within which a given estimate would lie if a complete census had been conducted. The chances that a complete census would differ from the sample by less than the standard error are about 68 out of 100. The chances that the difference would be less than 1.65 times the standard error are about 90 out of 100. The chances that the difference would be less than 1.96 times the standard error are about 95 out of 100. The chances that it would be less than 2.58 times as large are about 99 out of 100.
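To make these multipliers concrete, the sketch below turns a hypothetical estimate and standard error into intervals at roughly the 68, 95, and 99 percent confidence levels; the numbers are illustrative, not drawn from the report.

```python
# A minimal sketch (not from the report) of how these multipliers are used:
# turning an estimate and its standard error into confidence intervals.
# The estimate and standard error are hypothetical.

def confidence_interval(estimate, standard_error, multiplier=1.96):
    """Return the (lower, upper) bounds implied by a standard error."""
    half_width = multiplier * standard_error
    return estimate - half_width, estimate + half_width

estimate, se = 52.3, 0.8                         # hypothetical percentage and its se
print(confidence_interval(estimate, se, 1.00))   # about 68 percent confidence
print(confidence_interval(estimate, se, 1.96))   # about 95 percent confidence
print(confidence_interval(estimate, se, 2.58))   # about 99 percent confidence
```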

The standard error can help assess how valid a comparison between two estimates might be. The standard error of a difference between two sample estimates that are uncorrelated is approximately equal to the square root of the sum of the squared standard errors of the estimates. The standard error (se) of the difference between sample estimate “a” and sample estimate “b” is

se_(a-b) = (se_a^2 + se_b^2)^(1/2)
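As a rough illustration of this formula (with hypothetical estimates and standard errors, not figures from the report), the sketch below computes the standard error of a difference and compares the difference to 1.96 times that standard error, the approximate 95 percent criterion noted above.

```python
# Illustrative sketch of the formula above: the standard error of the
# difference between two uncorrelated sample estimates. All numbers are
# hypothetical.

def se_of_difference(se_a, se_b):
    """se(a - b) = sqrt(se_a**2 + se_b**2) for uncorrelated estimates."""
    return (se_a ** 2 + se_b ** 2) ** 0.5

a, se_a = 74.1, 1.2
b, se_b = 69.4, 1.5
se_diff = se_of_difference(se_a, se_b)

# A difference of more than about 1.96 standard errors would be taken as
# significant at roughly the 95 percent confidence level.
print(se_diff, abs(a - b) > 1.96 * se_diff)
```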

Note that most of the standard errors in subsequent sections and in the original documents are approximations. That is, to derive estimates of standard errors that would be applicable to a wide variety of items and could be prepared at a moderate cost, a number of approximations were required. As a result, most of the standard errors presented provide a general order of magnitude rather than the exact standard error for any specific item.

Nonsampling Errors

Both universe and sample surveys are subject to nonsampling errors. Nonsampling errors are of two kinds—random and nonrandom. Random nonsampling errors may arise when respondents or interviewers interpret questions differently, when respondents must estimate values, or when coders, keyers, and other processors handle answers differently. Nonrandom nonsampling errors result from total nonresponse (no usable data obtained for a sampled unit), partial or item nonresponse (only a portion of a response may be usable), inability or unwillingness on the part of respondents to provide information, difficulty interpreting questions, mistakes in recording or keying data, errors of collection or processing, and overcoverage or undercoverage of the target universe. Random nonsampling errors usually, but not always, result in an understatement of sampling errors and thus an overstatement of the precision of survey estimates. Because estimating the magnitude of nonsampling errors would require special experiments or access to independent data, these magnitudes are seldom available.

To compensate for suspected nonrandom errors, adjustments of the sample estimates are often made. For example, adjustments are frequently made for nonresponse, both total and partial. Imputations are usually made separately within various groups of sample members that have similar survey characteristics. Imputation for item nonresponse is usually made by substituting for a missing item the response to that item from a respondent with characteristics similar to those of the nonrespondent.
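As a rough sketch of this kind of donor-based item imputation (the field names, grouping rule, and random donor choice below are illustrative assumptions, not the procedure of any particular NCES survey):

```python
# Hedged sketch of donor-based item imputation: within groups of sample
# members sharing similar characteristics, a missing item is filled with
# the reported value of another respondent in the same group. The field
# names, grouping rule, and random donor choice are illustrative
# assumptions, not any particular NCES survey's procedure.
import random
from collections import defaultdict

def impute_item(records, item, group_keys, seed=0):
    rng = random.Random(seed)
    donors = defaultdict(list)
    for r in records:
        if r.get(item) is not None:
            donors[tuple(r[k] for k in group_keys)].append(r[item])
    for r in records:
        if r.get(item) is None:
            pool = donors.get(tuple(r[k] for k in group_keys))
            if pool:                 # leave the item missing if no donor exists
                r[item] = rng.choice(pool)
    return records

schools = [
    {"level": "elementary", "control": "public", "enrollment": 310},
    {"level": "elementary", "control": "public", "enrollment": None},
    {"level": "secondary", "control": "private", "enrollment": 540},
]
print(impute_item(schools, "enrollment", ["level", "control"]))
```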

Although the magnitude of nonsampling errors in the data used in Projections of Education Statistics is frequently unknown, idiosyncrasies that have been identified are noted on the appropriate tables.


Federal Agency Sources

National Center for Education Statistics (NCES)

Common Core of Data

NCES uses the Common Core of Data (CCD) survey to acquire and maintain statistical data from each of the 50 states, the District of Columbia, the Bureau of Indian Affairs, Department of Defense Dependents' Schools (overseas), and the outlying areas. Information about staff and students is collected annually at the school, local education agency or school district (LEA), and state levels. Information about revenues and expenditures is also collected at the state and LEA levels.

Data are collected for a particular school year (July 1 through June 30) via survey instruments sent to the state education agencies during the school year. States have 1 year in which to modify the data originally submitted.

Since the CCD is a universe survey, the CCD information presented in this edition of the Projections of Education Statistics is not subject to sampling errors. However, nonsampling errors could come from two sources—nonreturn and inaccurate reporting. Almost all of the states submit the six CCD survey instruments each year, but submissions are sometimes incomplete or too late for publication.

Understandably, when 58 education agencies compile and submit data for approximately 95,000 public schools and 17,000 local school districts, misreporting can occur. Typically, this results from varying interpretations of NCES definitions and differing recordkeeping systems. NCES attempts to minimize these errors by working closely with the state education agencies through the National Forum on Education Statistics.

The state education agencies report data to NCES from data collected and edited in their regular reporting cycles. NCES encourages the agencies to incorporate into their own survey systems the NCES items they do not already collect so that those items will also be available for the subsequent CCD survey. Over time, this has meant fewer missing data cells in each state's response, reducing the need to impute data.

NCES subjects data from the education agencies to a comprehensive edit. Where data are determined to be inconsistent, missing, or out of range, NCES contacts the education agencies for verification. NCES-prepared state summary forms are returned to the state education agencies for verification. States are also given an opportunity to revise their state-level aggregates from the previous survey cycle.
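A minimal sketch of the kind of range and consistency edit described above follows; the field names, plausible ranges, and pupil/teacher rule are hypothetical, not the actual CCD edit specifications.

```python
# Hypothetical sketch of a range-and-consistency edit: flag values that
# are missing, outside a plausible range, or inconsistent with a related
# item, so they can be referred back to the reporting agency. The fields,
# ranges, and pupil/teacher rule are illustrative assumptions.
def edit_check(record):
    flags = []
    students = record.get("students")
    teachers = record.get("teachers")
    if students is None:
        flags.append("students: missing")
    elif not 0 <= students <= 1_000_000:
        flags.append("students: out of range")
    if students is not None and teachers:
        if students / teachers > 100:     # implausibly high pupil/teacher ratio
            flags.append("students vs. teachers: inconsistent")
    return flags

print(edit_check({"students": 2_500_000, "teachers": 10}))
```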

Further information on the CCD may be obtained from

John Sietsema
Elementary/Secondary Cooperative System and Institutional Studies Division (ESCSISD)
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
http://nces.ed.gov/ccd/

Private School Universe Survey

The purposes of the Private School Universe Survey (PSS) data collection activities are to build an accurate and complete list of private schools to serve as a sampling frame for NCES sample surveys of private schools, and to report data on the total number of private schools, teachers, and students in the survey universe. The PSS is conducted every 2 years, with collections in the 1989–90, 1991–92, 1993–94, 1995–96, 1997–98, 1999–2000, 2001–02, and 2003–04 school years.

The PSS produces data similar to that of the CCD for public schools and can be used for public-private comparisons. The data are useful for a variety of policy and research-relevant issues, such as the growth of religiously affiliated schools, the number of private high school graduates, the length of the school year for various private schools, and the number of private school students and teachers.

The target population for the universe survey consists of all private schools in the United States that meet the NCES criteria of a school (i.e., a private school is an institution that provides instruction for any of grades K through 12, has one or more teachers to give instruction, is not administered by a public agency, and is not operated in a private home). The survey universe is composed of schools identified from a variety of sources. The main source is a list frame, initially developed for the 1989–90 PSS. The list is updated regularly by matching it against lists provided by nationwide private school associations, state departments of education, and other national guides and sources that list private schools. The other source is an area frame search in approximately 120 geographic areas, conducted by the Bureau of the Census.

Further information on the PSS may be obtained from

Steve Broughman
Elementary/Secondary Sample Survey Studies program (ESLSD)
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
Stephen.Broughman@ed.gov
http://nces.ed.gov/surveys/pss/


Integrated Postsecondary Education Data System

The Integrated Postsecondary Education Data System (IPEDS) surveys approximately 10,000 postsecondary institutions, including universities and colleges, as well as institutions offering technical and vocational education beyond the high school level. This survey, which began in 1986, replaced the Higher Education General Information Survey (HEGIS).

IPEDS consists of several integrated components that obtain information on who provides postsecondary education (institutions), who participates in it and completes it (students), what programs are offered and what programs are completed, and both the human and financial resources involved in the provision of institutionally based postsecondary education. Specifically, these components include Institutional Characteristics, including instructional activity; Fall Enrollment, including age and residence; Completions; Finance; Staff; Salaries of Full-Time Instructional Faculty; Student Financial Aid; and Graduation Rate.

The degree-granting institutions portion of this survey is a census of colleges that award associate or higher degrees and are eligible to participate in Title IV federal financial aid programs. Prior to 1993, data from the technical and vocational institutions were collected through a sample survey. Beginning in 1993, all data were gathered in a census of all postsecondary institutions. The IPEDS tabulations developed for this edition of Projections of Education Statistics are based on lists of all institutions and are not subject to sampling errors.

The definition of institutions generally thought of as offering college and university education has changed in recent years. The old standard for higher education institutions included institutions that offered courses leading to an associate or higher degree, or courses accepted for credit toward those degrees. These higher education institutions were accredited by an agency or association recognized by the U.S. Department of Education or recognized directly by the Secretary of Education. The current degree-granting category includes institutions that award associate or higher degrees and are eligible to participate in Title IV federal financial aid programs. The impact of this change has generally not been large. For example, tables on faculty salaries and benefits were affected only to a very small extent, and degrees awarded at the bachelor's level or higher were not heavily affected. Most of the data on public 4-year colleges have been affected only to a minimal extent. The impact on enrollment in public 2-year colleges was noticeable in certain states, but relatively small at the national level. The largest impact has been on private 2-year college enrollment. Overall, enrollment for degree-granting institutions was about one-half of a percent higher than the total for higher education institutions.

Prior to the establishment of IPEDS in 1986, HEGIS acquired and maintained statistical data on the characteristics and operations of institutions of higher education. Implemented in 1966, HEGIS was an annual universe survey of institutions accredited at the college level by an agency recognized by the Secretary of the U.S. Department of Education. These institutions were listed in the NCES publication Education Directory, Colleges and Universities.

HEGIS surveys solicited information concerning institutional characteristics, faculty salaries, finances, enrollment, and degrees. Since these surveys were distributed to all higher education institutions, the data presented are not subject to sampling error. However, they are subject to nonsampling error, the sources of which varied with the survey instrument. Information concerning the nonsampling error of the enrollment and degrees surveys draws extensively on the HEGIS Post Survey Validation Study conducted in 1979.

Further information on IPEDS may be obtained from

Susan Broyles
Postsecondary Institutional Studies Program (PSD)
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
Susan.Broyles@ed.gov
http://nces.ed.gov/ipeds/

Institutional Characteristics   This survey provides the basis for the universe of institutions presented in the Directory of Postsecondary Institutions. The survey collects basic information necessary to classify the institutions, including control, level, and kinds of programs, and information on tuition, fees, and room and board charges. Beginning in 2000, the survey collected institutional pricing data from institutions with first-time, full-time, degree/certificate-seeking undergraduate students. Unduplicated full-year enrollment counts and instructional activity are now collected on the Fall Enrollment survey. The overall response rate was 99.2 percent for Title IV degree-granting institutions in 2002.

Further information may be obtained from

Patricia Brown
Postsecondary Institutional Studies Program (PSD)
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
Patricia.Brown@ed.gov
http://nces.ed.gov/ipeds/

Fall Enrollment   This survey has been part of the HEGIS and IPEDS series since 1966. The enrollment survey response rate is relatively high. The 2002 overall response rate was 99.6 percent for degree-granting institutions. Beginning in 2000, the data collection method was web-based, replacing the paper survey forms that had been used in past years. Imputation methods and response bias analysis for the 2001–02 survey are discussed in Enrollment in Postsecondary Institutions, Fall 2002 and Financial Statistics, Fiscal Year 2002 (NCES 2005–168). Major sources of nonsampling error for this survey, as identified in the 1979 report, were classification problems, the unavailability of needed data, interpretation of definitions, the survey due date, and operational errors. Of these, the classification of students appears to have been the main source of error. Institutions had problems in correctly classifying first-time freshmen and other first-time students for both full-time and part-time categories. These problems occurred most often at 2-year institutions (private and public) and private 4-year institutions. In the 1977–78 HEGIS validation studies, the classification problem led to an estimated overcount of 11,000 full-time students and an undercount of 19,000 part-time students. Although the ratio of error to the grand total was quite small (less than 1 percent), the percentage of errors was as high as 5 percent for detailed student levels and even higher at certain aggregation levels.

Beginning with fall 1986, the survey system was redesigned with the introduction of IPEDS (see above). The survey allows (in alternating years) for the collection of age and residence data. In 2000, the Fall Enrollment survey collected the instructional activity and unduplicated headcount data, which are needed to compute a standardized, full-time-equivalent (FTE) enrollment statistic for the entire academic year. Starting in 2001, unduplicated headcounts by level of student, and by race/ethnicity and gender of student were also requested, as well as the total number of students in the entering class.

Further information may be obtained from

Cathy Statham
Postsecondary Institutional Studies Program (PSD)
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
Cathy.Statham@ed.gov
http://nces.ed.gov/ipeds/


Completions   This survey was part of the HEGIS series throughout its existence. However, the degree classification taxonomy was revised in 1970–71, 1982–83, and 1991–92. Collection of degree data has been maintained through the IPEDS system.

Though information from survey years 1970–71 through 1981–82 is directly comparable, care must be taken if information before or after that period is included in any comparison. The nonresponse rate did not appear to be a significant source of nonsampling error for this survey. The return rate over the years has been high, with the degree-granting institutions response rate for the 2001–02 survey at 98.9 percent. The overall response rate for the non-degree-granting institutions was 93.2 percent in 2001–02. Because of the high return rate for the degree-granting institutions, nonsampling error caused by imputation was also minimal. Imputation methods and response bias analysis for the 2001–02 survey are discussed in Postsecondary Institutions in the United States: Fall 2002 and Degrees and Other Awards Conferred: 2001–02 (NCES 2004–154).

The major sources of nonsampling error for this survey were differences between the NCES program taxonomy and taxonomies used by the colleges, classification of double majors, operational problems, and survey timing. In the 1979 HEGIS validation study, these sources of nonsampling error contributed to an error rate of 0.3 percent overreporting of bachelor's degrees and 1.3 percent overreporting of master's degrees. The differences, however, varied greatly among fields. Over 50 percent of the fields selected for the validation study had no errors identified. Categories of fields that had large differences were business and management, education, engineering, letters, and psychology. It was also shown that differences in proportion to the published figures were less than 1 percent for most of the selected fields that had some errors. Exceptions to these were master's and Ph.D. programs in labor and industrial relations (20 percent and 8 percent); bachelor's and master's programs in art education (3 percent and 4 percent); bachelor's and Ph.D. programs in business and commerce, and in distributive education (5 percent and 9 percent); master's programs in philosophy (8 percent); and Ph.D. programs in psychology (11 percent).

Further information on IPEDS Completions surveys may be obtained from

Andrew Mary
Postsecondary Institutional Studies Program (PSD)
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
Andrew.Mary@ed.gov
http://nces.ed.gov/ipeds/

Financial Statistics   This survey was part of the HEGIS series and has been continued under the IPEDS system. Changes were made in the financial survey instruments in fiscal years (FY) 1976, 1982, and 1987. The FY 76 survey instrument contained numerous revisions to earlier survey forms and made direct comparisons of line items very difficult. Beginning in FY 82, Pell Grant data were collected in the categories of federal restricted grants and contracts revenues, and restricted scholarships and fellowships expenditures. The introduction of IPEDS in the FY 87 survey included several important changes to the survey instrument and data processing procedures. While these changes were significant, considerable effort has been made to present only comparable information on trends in this report and to note inconsistencies. Finance tables for this publication have been adjusted by subtracting the largely duplicative Pell Grant amounts from the later data to maintain comparability with pre-FY 82 data.

Possible sources of nonsampling error in the financial statistics include nonresponse, imputation, and misclassification. The response rate has been about 85 to 90 percent for most years. The response rate for the FY 2002 survey was 98.7 percent for degree-granting institutions. Because of the higher response rate for public colleges (99.7 percent for public 4-year and 98.5 percent for public 2-year, compared to 98.7 percent for not-for-profit 4-year and 98.4 percent for not-for-profit 2-year), it is probable that the public colleges' data are more accurate than the data for private colleges. Imputation methods and response bias analysis for the 2001–02 survey are discussed in Enrollment in Postsecondary Institutions, Fall 2002 and Financial Statistics, Fiscal Year 2002 (NCES 2005–168).

Two general methods of imputation were used in HEGIS. If the prior year's data were available for a nonresponding institution, these data were inflated using the Higher Education Price Index and adjusted according to changes in enrollments. If no previous year's data were available, current data were used from peer institutions selected for location (state or region), control, level, and enrollment size. In most cases, estimates for nonreporting institutions in IPEDS were made using data from peer institutions.
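The two rules can be sketched roughly as follows; the price-index ratio, enrollment adjustment, and simple peer average are simplified assumptions rather than the exact HEGIS or IPEDS imputation procedures.

```python
# Rough sketch of the two imputation rules described above. Rule 1:
# inflate a nonrespondent's prior-year figure by a price index and adjust
# for its enrollment change. Rule 2: with no prior-year figure, fall back
# on current data from peer institutions (similar location, control,
# level, and enrollment size). All numbers here are illustrative.

def impute_from_prior_year(prior_value, price_index_ratio, enrollment_ratio):
    return prior_value * price_index_ratio * enrollment_ratio

def impute_from_peers(peer_values):
    return sum(peer_values) / len(peer_values)

# Prior-year expenditures inflated by a hypothetical 4 percent price
# increase and a 2 percent enrollment increase.
print(impute_from_prior_year(12_500_000, 1.04, 1.02))

# No prior-year data: average of comparable peer institutions.
print(impute_from_peers([9_800_000, 10_400_000, 11_100_000]))
```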

Beginning with FY 87, the IPEDS survey system included all postsecondary institutions, but maintained comparability with earlier surveys by allowing 2- and 4-year institutions to be tabulated separately. For FY 87 through FY 91, in order to maintain comparability with the historical time series of HEGIS institutions, data were combined from two of the three different survey forms that make up the IPEDS survey system. The vast majority of the data were tabulated from form 1, which was used to collect information from public and private not-for-profit 2- and 4-year colleges. Form 2, a condensed form, was used to gather data from 2-year for-profit institutions. Because of the differences in the data requested on the two forms, several assumptions were made about the form 2 reports so that their figures could be included in the degree-granting institutions totals.

In IPEDS, the form 2 institutions were not asked to separate appropriations from grants and contracts, nor state from local sources of funding. For the form 2 institutions, all the federal revenues were assumed to be federal grants and contracts, and all of the state and local revenues were assumed to be restricted state grants and contracts. All other form 2 sources of revenue, except for tuition and fees, and sales and services of educational activities, were included under "other." Similar adjustments were made to the expenditure accounts. The form 2 institutions reported instruction and scholarship and fellowship expenditures only. All other educational and general expenditures were allocated to academic support.
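The revenue reallocation described above can be sketched as a simple mapping; the category labels below are simplified stand-ins, not IPEDS field codes, and the actual adjustments involved more detail than this.

```python
# Hedged sketch of the form 2 revenue reallocation described above; the
# category labels are simplified stand-ins, not IPEDS field codes.
FORM2_REVENUE_MAP = {
    "tuition_and_fees": "tuition_and_fees",
    "sales_of_educational_activities": "sales_of_educational_activities",
    "federal": "federal_grants_and_contracts",
    "state_and_local": "restricted_state_grants_and_contracts",
}

def reallocate_form2_revenues(revenues):
    """Map condensed form 2 revenue lines onto form 1-style categories."""
    out = {}
    for source, amount in revenues.items():
        target = FORM2_REVENUE_MAP.get(source, "other")   # everything else -> "other"
        out[target] = out.get(target, 0) + amount
    return out

print(reallocate_form2_revenues(
    {"tuition_and_fees": 4_200_000, "federal": 300_000, "gifts": 50_000}
))
```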

To reduce reporting error, NCES uses national standards for reporting finance statistics. These standards are contained in College and University Business Administration: Administrative Services (1974 Edition), and the Financial Accounting and Reporting Manual for Higher Education (1990 Edition), published by the National Association of College and University Business Officers; Audits of Colleges and Universities (as amended August 31, 1974), by the American Institute of Certified Public Accountants; and HEGIS Financial Reporting Guide (1980), by NCES. Wherever possible, definitions and formats in the survey form are consistent with those in these four accounting texts.

Further information on IPEDS Financial Statistics surveys may be obtained from

Cathy Statham
Postsecondary Institutional Studies Program (PSD)
National Center for Education Statistics
1990 K Street NW
Washington, DC 20006
Cathy.Statham@ed.gov
http://nces.ed.gov/ipeds/


Bureau of the Census

Current Population Survey

Prior to July 2001, estimates of school enrollment rates, as well as social and economic characteristics of students, were based on data collected in the Census Bureau's monthly household survey of about 50,000 dwelling units. Beginning in July 2001, this sample was expanded to 60,000 dwelling units. The monthly Current Population Survey (CPS) sample consists of 754 sample areas comprising 2,007 counties, independent cities, and minor civil divisions throughout the 50 states and the District of Columbia. The samples are initially selected based on the decennial census files and are periodically updated to reflect new housing construction.

The monthly CPS deals primarily with labor force data for the civilian noninstitutional population (i.e., excluding military personnel and their families living on post and inmates of institutions). In addition, in October of each year, supplemental questions are asked about highest grade completed, level and grade of current enrollment, attendance status, number and type of courses, degree or certificate objective, and type of organization offering instruction for each member of the household. In March of each year, supplemental questions on income are asked. The responses to these questions are combined with answers to two questions on educational attainment: highest grade of school ever attended, and whether that grade was completed.

The estimation procedure employed for monthly CPS data involves inflating weighted sample results to independent estimates of characteristics of the civilian noninstitutional population in the United States by age, sex, and race. These independent estimates are based on statistics from decennial censuses; statistics on births, deaths, immigration, and emigration; and statistics on the population in the armed services. Generalized standard error tables are provided in the Current Population Reports. The data are subject to both nonsampling and sampling errors.
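Inflating weighted sample results to independent population controls amounts to a ratio adjustment within demographic cells, sketched below with hypothetical cells, weights, and control totals.

```python
# Minimal sketch of ratio adjustment to independent population controls:
# within each demographic cell, sample weights are scaled so the weighted
# sample total matches the independent population estimate for that cell.
# Cell labels, weights, and control totals are hypothetical.

def adjust_weights(weights_by_cell, control_totals):
    adjusted = {}
    for cell, weights in weights_by_cell.items():
        factor = control_totals[cell] / sum(weights)
        adjusted[cell] = [w * factor for w in weights]
    return adjusted

weights = {"men 25-34": [1450.0, 1520.0, 1480.0], "women 25-34": [1500.0, 1490.0]}
controls = {"men 25-34": 4600.0, "women 25-34": 3100.0}
print(adjust_weights(weights, controls))
```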

Caution should also be used when comparing newer data, which reflect 1990 census-based population controls, with data from March 1993 and earlier years, which reflect 1980 or earlier census-based population controls. This change in population controls had relatively little impact on summary measures, such as means, medians, and percentage distributions. It does, however, have a significant impact on levels. For example, use of 1990-based population controls results in about a 1 percent increase in the civilian noninstitutional population and in the number of families and households. Thus, estimates of levels for data collected in 1994 and later years will differ from those for earlier years by more than what could be attributed to actual changes in the population. These differences could be disproportionately greater for certain subpopulation groups than for the total population.

Further information on CPS may be obtained from

Education and Social Stratification Branch
Population Division
Bureau of the Census
U.S. Department of Commerce
Washington, DC 20233
http://www.bls.census.gov/cps/cpsmain.htm

School Enrollment   Each October, the Current Population Survey (CPS) includes supplemental questions on the enrollment status of the population 3 years old and over, in addition to the monthly basic survey on labor force participation. Prior to 2001, the October supplement consisted of approximately 47,000 interviewed households. Beginning with the October 2001 supplement, the sample was expanded by 9,000 to a total of approximately 56,000 interviewed households. The main sources of nonsampling variability in the responses to the supplement are those inherent in the survey instrument. The question of current enrollment may not be answered accurately for various reasons. Some respondents may not know current grade information for every student in the household, a problem especially prevalent for households with members in college or in nursery school. Confusion over college credits or hours taken by a student may make it difficult to determine the year in which the student is enrolled. Problems may occur with the definition of nursery school (a group or class organized to provide educational experiences for children), where respondents' interpretations of "educational experiences" vary.

For the October 2001 basic CPS, the nonresponse rate was 6.7 percent, and for the school enrollment supplement, the nonresponse rate was an additional 3.6 percent, for a total supplement nonresponse rate of 10.1 percent.
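As an interpretive note (not a statement from the report), the 10.1 percent total is consistent with compounding the two stages' response rates rather than simply adding the nonresponse rates:

```python
# Quick check: compounding the two response rates reproduces the quoted
# total of about 10.1 percent, whereas simple addition would give 10.3.
basic_nonresponse = 0.067
supplement_nonresponse = 0.036
total = 1 - (1 - basic_nonresponse) * (1 - supplement_nonresponse)
print(round(total * 100, 1))   # 10.1
```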

Further information on CPS methodology may be obtained from

http://www.bls.census.gov/cps/cpsmain.htm

Further information on CPS "School Enrollment" may be obtained from

Education and Social Stratification Branch
Bureau of the Census
U.S. Department of Commerce
Washington, DC 20233
http://www.census.gov/population/www/socdemo/school.html

State Population Projections   These state population projections were prepared using a cohort component method by which each component of population change—births, deaths, state-to-state migration flows, international in-migration, and international out-migration—was projected separately for each birth cohort by sex, race, and Hispanic origin. The basic framework was the same as in past Census Bureau projections.

Detailed components necessary to create the projections were obtained from vital statistics, administrative records, census data, and national projections.

The cohort component method is based on the traditional demographic accounting system:

P1 = P0 + B - D + DIM - DOM + IIM - IOM

where:

P1 = population at the end of the period
P0 = population at the beginning of the period
B = births during the period
D = deaths during the period
DIM = domestic in-migration during the period
DOM = domestic out-migration during the period
IIM = international in-migration during the period
IOM = international out-migration during the period

To generate population projections with this model, the Census Bureau created separate datasets for each of these components. In general, the assumptions concerning the future levels of fertility, mortality, and international migration are consistent with the assumptions developed for the national population projections of the Census Bureau.

Once the data for each component were developed, it was a relatively straightforward process to apply the cohort component method and produce the projections. For each projection year, the base population for each state was disaggregated into eight race and Hispanic-origin categories (non-Hispanic White; non-Hispanic Black; non-Hispanic American Indian, Eskimo, and Aleut; non-Hispanic Asian and Pacific Islander; Hispanic White; Hispanic Black; Hispanic American Indian, Eskimo, and Aleut; and Hispanic Asian and Pacific Islander), by sex, and by single year of age (ages 0 to 85+). The next step was to survive each age-sex-race/ethnicity group forward 1 year using the pertinent survival rate. The internal redistribution of the population was accomplished by applying the appropriate state-to-state migration rates to the survived population in each state. The projected out-migrants were subtracted from the state of origin and added to the state of destination (as in-migrants). Next, the appropriate number of immigrants from abroad was added to each group. The population under age 1 was created by applying the appropriate age- and race/ethnicity-specific birth rates to females of childbearing age. The births, by sex and race/ethnicity, were survived forward and exposed to the appropriate migration rates to yield the population under age 1. The final results of the projection process were adjusted to be consistent with the national population projections by single years of age, sex, race, and Hispanic origin. The entire process was then repeated for each year of the projection.
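A highly simplified sketch of one projection step under this accounting identity, for a single cohort in a single state, is shown below; real projections iterate over every age-sex-race/ethnicity group and state, and the rates and counts used here are purely illustrative.

```python
# Highly simplified sketch of one projection step for a single cohort in
# one state, following the identity P1 = P0 + B - D + DIM - DOM + IIM - IOM.
# Deaths enter through a survival rate, and births are handled separately
# for the population under age 1. All rates and counts are illustrative.

def project_one_year(p0, survival_rate, dom_out_rate, dom_in, intl_in, intl_out):
    survived = p0 * survival_rate            # apply deaths via the survival rate
    dom_out = survived * dom_out_rate        # state-to-state out-migrants
    return survived - dom_out + dom_in + intl_in - intl_out

p0 = 52_000                                  # hypothetical cohort size
p1 = project_one_year(p0, survival_rate=0.9991, dom_out_rate=0.030,
                      dom_in=1_400, intl_in=350, intl_out=120)
print(round(p1))
```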

More information is available in the Census Bureau Population Paper Listing 47 (PPL-47) and Current Population Report P25-1131. These reports may be obtained from

Statistical Information Staff
Bureau of the Census
U.S. Department of Commerce
Washington, DC 20233
(301) 763-3030
http://www.census.gov


Other Sources

National Education Association

Estimates of School Statistics

The National Education Association (NEA) reports enrollment, teacher, revenue, and expenditure data in its annual publication Estimates of School Statistics. Each year, NEA prepares regression-based estimates of financial and other education statistics and submits them to the states for verification. Generally, about 30 states adjust these estimates based on their own data. These preliminary data are published by NEA along with revised data from previous years. States are asked to revise previously submitted data as final figures become available. The most recent publication contains all changes reported to the NEA.

Additional information is available from

National Education Association—Research
1201 16th Street NW
Washington, DC 20036
http://www.nea.org

Global Insight, Inc.

Global Insight, Inc. provides an information system that includes databases of economic and financial information; simulation and planning models; regular publications and special studies; data retrieval and management systems; and access to experts on economic, financial, industrial, and market activities. One service is the Global Insight Model of the U.S. Economy, which contains annual projections of U.S. economic and financial conditions, including forecasts for the federal government, incomes, population, prices and wages, and state and local governments, over a long-term (10- to 25-year) forecast period.

Additional information is available from

Global Insight, Inc.
1000 Winter Street Suite 4300N
Waltham, MA 02451-124
http://www.globalinsight.com/


