Chapter 9—Data: Knowing What You Have, Identifying What You Need

Once the decisionmakers and stakeholders have considered the types of questions an LDS can answer, the organization should assess what data it has and what it will need to answer the questions deemed most useful. Most agencies already have a large amount of data, especially if they are collecting and maintaining data on individual students. However, more is not necessarily better, and the data currently maintained may not be sufficient to answer the questions stakeholders have determined are important. This section identifies some of the data an education agency may need to achieve its goals.

Your data may be stored in a central data warehouse, or in many separate data stores. Likely source systems include the

  • student information system;
  • curriculum management system;
  • program systems (e.g., Title I);
  • student transportation system;
  • food services system;
  • assessment/accountability system;
  • human resources and teacher certification system;
  • financial system;
  • instructional management system;
  • student health system; and
  • library/media system.

After taking an inventory of what data it has, cataloguing all current and planned data collections, and identifying where data items are housed (and which system is the authoritative source for each data element), an agency should determine whether it needs to collect any additional data.
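The inventory step described above can be sketched as a simple catalog that records, for each data element, every system that houses it and the single system designated as the authoritative source. All element and system names below are hypothetical, not drawn from any particular agency:

```python
# A minimal sketch of a data inventory catalog. Each entry lists every system
# that houses an element and flags one system as the authoritative source.
from dataclasses import dataclass, field


@dataclass
class ElementEntry:
    element: str
    systems: list = field(default_factory=list)  # all systems housing this element
    authoritative: str = ""                      # the single system of record

inventory = [
    ElementEntry("student_address",
                 systems=["student information system", "transportation system"],
                 authoritative="student information system"),
    ElementEntry("teacher_certification",
                 systems=["human resources system"],
                 authoritative="human resources system"),
]


def find_duplicated_elements(entries):
    """Flag elements housed in more than one system -- candidates for
    reconciliation against the authoritative source."""
    return [e.element for e in entries if len(e.systems) > 1]

print(find_duplicated_elements(inventory))  # -> ['student_address']
```

A catalog like this makes visible which elements are stored redundantly, so conflicting values can always be resolved against the declared system of record.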

While the education community often focuses on student traits and learning outcomes, truly informative education research also requires context—the students' learning opportunities and learning climate. In addition to outcomes, therefore, data users should look at information on the inputs and processes that contribute directly and indirectly to student learning. For instance, in which programs does the student participate? Who are the student's teachers? What classroom strategies are used? Are there differences in student learning opportunities by race, sex, and/or socioeconomic status (e.g., representation in special education and non-college-preparatory tracks, teacher experience levels, resources, expectations)? What are the local financial and hiring practices?

Keep in mind that while individual data items about students and staff are extremely valuable in efforts to monitor and understand student experiences, deeper analysis will view certain data elements in concert (as "derived" data elements, indexes, or indicators) and track them longitudinally. Doing so allows data users to examine the relationships among various aspects of students' education and illuminates trends over time, showing which educational inputs contribute to which kinds of results for which students. For instance, when evaluating the success of a particular program, researchers may look at more than participating students' performance on assessments. The relative effectiveness of a particular instructional strategy, or strategies, may also depend on context and input variables such as the background and preparation of the teachers implementing the program, how faithfully the program is implemented, and the characteristics of the students receiving the instruction. And a fair evaluation of the program will look at a host of discrete measures of success in addition to year-end summative test scores. Such holistic methodologies that combine a range of relevant data can generate powerful guidance and help educators more effectively meet individual student needs.
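As a minimal illustration of a derived data element tracked longitudinally, the sketch below combines two raw attendance counts into a single indicator (an attendance rate) and follows it across years. The field names and values are hypothetical:

```python
# Hypothetical student-year records: two raw elements per record.
records = [
    {"student_id": "S1", "year": 2007, "days_enrolled": 180, "days_attended": 171},
    {"student_id": "S1", "year": 2008, "days_enrolled": 178, "days_attended": 160},
]


def attendance_rate(rec):
    """Derived element: two raw counts combined into one indicator."""
    return rec["days_attended"] / rec["days_enrolled"]

# Viewing the derived element longitudinally reveals a trend that neither
# raw count shows on its own.
trend = {rec["year"]: round(attendance_rate(rec), 3) for rec in records}
print(trend)  # -> {2007: 0.95, 2008: 0.899}
```

The same pattern scales up: richer indicators simply combine more elements (program participation, teacher characteristics, subgroup membership) before the longitudinal comparison is made.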

Using the right data architect for your LDS

The usefulness of your LDS will be greatly affected by its data architecture. A good data architect can create a flexible data model from the outset that will help avoid extensive (and expensive) changes later on. In addition to helping the agency identify the right data elements, the data architect can help create a fully integrated system by defining the relationships among those elements at different levels: conceptual (relationships among major concepts), logical (in terms of a data manipulation technology such as a relational database or XML), and physical (in terms of a particular product and means of storage such as a server or disk drives). (ANSI 1975)
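To make the distinction between levels concrete, the sketch below takes a conceptual relationship ("an enrollment links one student to one course section") and expresses it at the logical level as relational tables. The table and column names are hypothetical, and the physical level (file layout and storage) is left entirely to the database engine:

```python
# Logical-level sketch: the conceptual relationship "an enrollment links one
# student to one course section" rendered as relational tables with keys.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE student (student_id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE section (section_id TEXT PRIMARY KEY, course TEXT);
    CREATE TABLE enrollment (
        student_id TEXT REFERENCES student(student_id),
        section_id TEXT REFERENCES section(section_id),
        PRIMARY KEY (student_id, section_id)
    );
""")
con.execute("INSERT INTO student VALUES ('S1', 'Pat')")
con.execute("INSERT INTO section VALUES ('ALG1-01', 'Algebra I')")
con.execute("INSERT INTO enrollment VALUES ('S1', 'ALG1-01')")

# Because the relationship is defined once in the model, queries can traverse
# it without any per-query reconciliation of identifiers.
row = con.execute("""
    SELECT s.name, sec.course
    FROM enrollment e
    JOIN student s ON s.student_id = e.student_id
    JOIN section sec ON sec.section_id = e.section_id
""").fetchone()
print(row)  # -> ('Pat', 'Algebra I')
```

Defining relationships at the logical level in this way is what lets the physical storage change later (new server, new product) without rewriting the model itself.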

Of course, an agency may opt for an off-the-shelf, one-size-fits-all data model rather than building a custom solution with an in-house data architect. This approach may reduce risk and time to implementation, but it is likely to work better for districts than for states because of the wider variety of district-level products and the fact that districts' research needs are usually less extensive. While a data architect can be invaluable in designing a data model specific to the needs of a state, existing models can be useful if two conditions exist:

  • Someone on staff has sufficient expertise in data architecture and data modeling to make a valid judgment that the commercial product meets the needs of the organization.

  • Someone on staff or on the LDS design team is capable of making the modifications that will almost inevitably be needed to meet the state's specific needs.

Education indicators

Many resources are available to help determine what data are needed to answer education questions. For more information on developing and using education indicators to measure status and outcomes, see

  • Forum Guide to Education Indicators (Forum 2005)

    This document is designed to help readers properly create, use, and interpret education indicators. It also identifies standard definitions and calculations, and warns of common misuses of education indicators.

  • From Information to Insight: The Point of Indicators (ESP Solutions 2007)

    This document discusses various types of education indicators as well as education "indexes," which are combinations of related indicators that offer more thorough views of educational values and trends than single indicators can provide. It also discusses the selection of data elements required, and the establishment of thresholds, to indicate the need for action.

  • Comparative Indicators of Education in the United States and Other G–8 Countries: 2006 (NCES 2007)

    This report presents 20 indicators used to compare the education system of the United States to those of other G–8 countries. Indicators focus on population and school enrollment, academic performance, context for learning, expenditure for education, and education returns (educational attainment and income).

  • Buried Treasure: Developing a Management Guide From Mountains of School Data (Wallace Foundation 2005)

    Geared toward district-level management, this report presents seven key types of school-level education indicators. The authors suggest that less may be more: rather than developing an indicator for every need, they encourage parsimony.

  • Study of Leading Indicators of Educational Improvement (Annenberg Institute for School Reform 2008)

    This study looks at leading indicators used to identify early signs of academic progress before test scores come in. These indicators may help agencies think about what questions they want to explore and the data they will need to answer them.

When identifying new data for collection, an agency should use its overarching goals as a framework for selecting new elements, and it should collect and store only those data that will benefit the enterprise. Elements that stakeholders think would be nice to have, but which do not lend themselves to achieving stated goals, should be avoided. In addition, the data collected should capture the appropriate level of detail. For instance, when collecting data on attendance, should you collect by day, by period, or by some other unit of time? If attendance by day is sufficient, an agency may not want to burden staff further (National Forum on Education Statistics 2009). Also, widely accepted standards and definitions should be followed so that the records are consistent and comparable with other agencies' data.
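The attendance-granularity question can be made concrete: period-level records can always be rolled up to daily attendance, but daily records cannot be broken back down into periods, so the collection decision constrains all later analyses. The sketch below (record layout and the 50 percent presence threshold are assumptions for illustration) derives daily attendance from period-level data:

```python
# Rolling period-level attendance up to daily attendance. The reverse
# derivation (periods from days) is impossible, which is why granularity
# must be chosen at collection time.
from collections import defaultdict

# Hypothetical records: (student_id, date, period, present)
period_records = [
    ("S1", "2009-03-02", 1, False),
    ("S1", "2009-03-02", 2, False),
    ("S1", "2009-03-03", 1, True),
    ("S1", "2009-03-03", 2, True),
]


def daily_attendance(records, threshold=0.5):
    """Mark a student-day 'present' if at least `threshold` of that day's
    periods were attended (the threshold is a local policy choice)."""
    by_day = defaultdict(list)
    for student, date, _period, present in records:
        by_day[(student, date)].append(present)
    return {key: sum(flags) / len(flags) >= threshold
            for key, flags in by_day.items()}

print(daily_attendance(period_records))
```

If only daily flags had been collected, questions such as "which class periods show the highest absence rates?" could never be answered, no matter how the data were later processed.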

Table 4 presents many of the key types of data that may be contained in a P–12 LDS and used for longitudinal analyses, but it is not exhaustive. Appendix C offers sources of more detailed and comprehensive lists of data elements. Ultimately, agencies should collect all data required for state and federal reports, as well as any other key data necessary to answer their stakeholders' questions.