This book, Effectively Managing LDS Data, is the third in a four-part series about longitudinal data systems (LDS). The first book in the series, What is an LDS?, focused on the fundamental questions of what an LDS is (and what it is not), what steps should be taken to achieve a sound system, what components make up an ideal system, and why such a system is of value in education. The second book, Planning and Developing an LDS, focused on the early stages of LDS development to help state and local education agencies determine what they want their LDS to accomplish and what they need to achieve these goals. The present installment discusses organizational issues aimed at moving the project forward and at ensuring that the data are of high quality so that users may leverage them with confidence for data-driven decisionmaking. It looks at the establishment of governance structures and processes, getting the right people in place, and creating committees and working groups with diverse expertise to oversee and inform the process. This process is ultimately aimed at improving data quality and increasing the use of those data to improve education. This book also explores ways to ensure data quality through staff training, validation procedures, and the establishment of, and adherence to, data standards. Finally, the document discusses the challenges of securing the system to protect individual privacy and the confidentiality of education records.
Figure 1 lays out the major issues discussed in each of the four books in this series. For more information on the purpose, format, and intended audience groups of this series, see Book One: What is an LDS?
The ungoverned education agency
The history and consequences
The state education agency’s data collection and management practices had come about over time, driven mainly by compliance and funding. Various program areas were created to focus on specific federal surveys, and staff collected the data needed to do their jobs. Program area staff administered the surveys, followed their own quality assurance processes, and maintained and secured the data in their own silo systems. And, of course, data were reported as required by the federal government. Individual managers took their own approach to directing staff and organizing work, and coordination across program areas was limited.
Over time, the country became more interested in education data. Education stakeholders wanted more information for accountability purposes and to better understand which programs and instructional strategies worked. They wanted data to inform decisionmaking at all levels and to improve administration, instruction, and student performance. The bottom line was that they wanted data from across the agency, and they wanted them fast. This changing environment posed many problems for the agency. Requested analyses required linking across silos, or data integration in a central data store. Before this could happen, duplicate, inconsistent data had to be reconciled. However, once the integration work began, more inconsistencies were discovered than anyone had imagined. Data quality had to become a higher priority; security had to be improved; and the data elements collected had to serve business and stakeholder needs, not just meet federal requirements. Better methods of sharing data had to be devised if the agency was to meet the growing demand for a “P–20” system. And better, more consistent protocols were needed to make data sharing more efficient and prevent improper dissemination. The chief information officer decided something had to be done. Having seen a presentation at a national conference, he was convinced a process called “data governance” could help address the agency’s problems.
Throughout this series, important terms and topics will be highlighted in sidebars. Notable subject matter will be easily identified by the following icons:
- Bright ideas
- Tips
- District difference