Chapter 1: Technology Planning and Policies
Chapter 2: Finance
Chapter 3: Equipment and Infrastructure
Chapter 4: Technology Applications
Chapter 5: Maintenance and Support
Chapter 6: Professional Development
Chapter 7: Technology Integration
This document was developed through the National Cooperative Education Statistics System and funded by the National Center for Education Statistics (NCES) of the U.S. Department of Education.
The Technology in Schools Task Force, whose members represent many levels of the educational system, wishes to acknowledge the efforts of many individuals who contributed to the compilation and development of this document. In particular, John Clement of the Education Statistics Services Institute, and Lee Hoffman and Carl Schmitt of the National Center for Education Statistics, provided guidance and insight from inception to final form.
The task force also wishes to thank the following external reviewers, who examined the draft and made many valuable suggestions, some of which were adopted:
Barbara Clements, Evaluation Software Publishing, Inc.;
Sara Fitzgerald, Consortium for School Networking;
Ann Flynn, Technology Leadership Network, ITTE-National School Boards Association;
Laurence Goldberg, Director, Technology and Telecommunications, Abington School District, Abington, Pennsylvania;
Melinda George, Executive Director, State Educational Technology Directors Association (SETDA);
Julian Katz, Supervisor, Data Analysis, Howard County Public Schools, Ellicott City, Maryland;
Keith Krueger, Executive Director, Consortium for School Networking;
Lawrence Lanahan, Education Statistics Services Institute;
Tim Magner, Director, Schools Interoperability Framework (SIF), Software and Information Industry Association;
Catherine Mozer Connor, Office of the Inspector General, U.S. Department of Education;
Jeffery Rodamar, Planning and Evaluation Service, U.S. Department of Education;
Craig Stanton, Office of the Undersecretary, Budget Service, U.S. Department of Education; and
Geannie Wells, Director, Center for Accountability Solutions, American Association of School Administrators (AASA).
Last but by no means least, the task force acknowledges with gratitude the efforts of those who edited the handbook and prepared it for publication: Deborah Durham-Vichr, editorial consultant; and Martin Hahn (editor), Cecelia Marsh (proofing), and Mariel Escudero (design and layout) of the Education Statistics Services Institute.
This guide began with discussions within the National Forum on Education Statistics (the Forum) about the number and diversity of technology-related surveys that schools, school districts, and state departments of education are asked to complete. Consensus developed that agreement on the important questions, and an understanding of how answers to these questions might be assessed, would serve an important public policy purpose. A Forum Technology in Schools Task Force was created and began its work in January 1999.
This document is the result of the task force's work over more than 3 years and represents a joint effort of state and local education agency representatives who are involved with issues related to technology in schools.
The Forum is a representative body sponsored by the National Center for Education Statistics (NCES) of the U.S. Department of Education in collaboration with the states. The purpose of the Forum is to help improve the quality, comparability, and timeliness of data used for education policy at all levels of government. The mission of the Forum is to develop and recommend strategies for building an education data system that will support local, state, and national efforts to improve public and private education throughout the United States.
The results of the task force's labors provide a resource for educators and policy makers who are responsible for assessing the need for, and the effects of, technology in schools. The strategy chosen by the authors has been to identify key questions on the use of technology in educational management and instruction, and to specify how such questions might be answered. Throughout, the task force's intent has been to suggest, not prescribe. This guide provides a wide range of options and suggestions for technology administrators to adapt assessment to their school's situation and needs. The indicators and data elements listed in the handbook are a larger collection than any school district may want to establish.
Feedback and More Information
Please note that this guide is also available on the Forum's web site. Since technology and schools evolve, this document will require continual revision. We urge readers and users to provide comments, examples of data collection efforts, and materials. Feedback can be provided to the Technology, Dissemination and Communication (TD&C) Committee of the National Forum on Education Statistics, through the web site at http://nces.ed.gov/forum, where information about the Forum, its membership, meetings, and working groups can also be found.
A number of references in the Handbook point to web sites on the Internet. These references were current at the time of publication; however, the authors cannot guarantee that they will continue to work. The online version of the Handbook will have its references periodically updated and its web sites checked. Readers finding expired web site references are asked to look elsewhere on the same web site, to contact the web site owners, or to contact the handbook authors through the Forum web site at http://nces.ed.gov/forum.
Users may also be interested in related Forum products, available on the web site above, such as:
This handbook is intended to facilitate the assessment of technology used to support elementary and secondary education in the United States. It is designed to help decision makers and technology users prepare, collect, and assess information about whether and how technology is being used in their school systems. Making assessments that will be the basis for good decisions about the distribution and use of computers in the educational environment requires well-focused data.
Since computer-based communications technologies are continually evolving, and since their distribution throughout the education system is continually changing, responding to the demand for technology data requires ongoing information gathering. Deciding what levels and types of technology are required and/or deployed to accomplish instructional or management goals requires information and insight into the roles that technology plays in the education system.
Since education groups of all kinds (policymakers at various levels, commercial interests, professional associations, and education managers and planners) repeatedly ask nearly the same questions, coming to agreement on standard questions and answers can help reduce redundancy and improve comparability in the questions asked and the answers provided. More timely and accurate data collection might in turn lead to reduced frequency of collection; it should certainly lead to more consistent reporting.
Ad hoc technology surveys are expensive and time-consuming for all participants and rarely produce information that can be compared across states or districts or over time. Much of the information needed about the status and use of technology resources in schools can be provided by existing information systems or obtained from available records that schools or school districts may keep about their computer and software purchases, use, and maintenance. But some information may be more appropriately gathered by way of specially designed and administered surveys using questionnaires focused on those specific issues. Building the capacity to answer key policy questions into management systems, whether for property, staff, or instructional support, can lead to better data with less effort.
Because the role and impact of technology in the education system are extremely pervasive and the need to know correspondingly broad, this guide deals with the integration of a wide range of electronic technologies into support of school management and instruction. Topics include not only the availability of equipment and software, but also function: the ways of using computers and networks, and other equipment, to support all aspects of the school enterprise.
Key audiences for this handbook are the people who collect, store, publish, or use information about technology and its applications in schools and districts. These include educators and educational administrators (teachers, principals, and technology coordinators), as well as hardware and software vendors and information collectors and users. Other important audiences are program managers and planners at the federal, state, and district levels.
The guide is organized around key questions that the Technology in Schools Task Force authors have determined to be central, pertaining to the type, availability, and use of technology in education systems. The task force was composed of state education agency managers and school district technology coordinators, practitioners, and leaders; they discussed among themselves and polled their colleagues to identify the most commonly asked, and most important, questions about technology in schools.
The key questions are grouped into seven primary topics, each with a chapter:
For each topic, the authors identified key questions and how they could best be answered. A measure whose result answers a key question is called an indicator, and more than one indicator can be provided for a given key question. Much of the panel's discussion dealt with which indicators were the proper measures for key questions: measures that were both measurable and meaningful as responses, and that ideally would retain their meaning across time and technological evolution. Indicators are based on single items of information called data elements. Data elements may be combined in various ways to produce indicators.
After listing key questions for the topic and an overview, each chapter defines the topic precisely in order to delimit the area of assessment and then discusses the indicators that provide answers to key questions. Technology administrators will have a range of suggestions and options to adapt to their own assessment needs. Indeed, making it possible to adapt suggestions for assessment to a school district's requirements is a major purpose of this guide.
The indicators and data elements that make up answers to key questions include a range of information that may extend beyond the requirements of a given school or school district. The document is deliberately broad in scope in order to meet a diverse range of needs. On the other hand, the information included may not reflect all the needs of every school setting. It should be possible, however, to gain sufficient insight from the items provided to construct what is required to evaluate the status of technology in a given school environment.
In recent years, schools have invested heavily in putting technology (especially computers and their associated infrastructure) in the hands of students, teachers, and administrators. Many people involved in education, from legislators to teachers to parents, as well as the general public, want to know what technology exists in schools and how that technology is being used. These are a few of the questions that are typically asked:
This guide has been developed to help answer those questions listed on the previous page and many others related to them. It is meant to fulfill several purposes:
This document was prepared for the people who must request, collect, assemble, or assess information on technology in schools. The main intended audiences include:
Others within the educational environment who may directly benefit from this handbook include teachers who are looking for information on technology proficiency standards, survey developers who want to compare ideas for their own questions, and software vendors who create information management systems for schools.
Since education groups of all kinds (policy makers at various levels, commercial interests, professional associations, and education managers and planners) repeatedly ask nearly the same questions, agreement on standard questions and answers can help reduce redundancy and improve comparability in the questions asked and the answers provided. More timely and accurate data collection might in turn lead to reduced frequency of collection and more consistent reporting.
The term technology in schools can have many different meanings in different contexts and times. As used in this guide, technology pertains to the full range of computer and computer-related equipment and associated operating systems, networking, and tool software that provide the infrastructure over which instructional and school management applications of various kinds operate. In order to assess the effects of technology, this document goes beyond equipment and infrastructure: it includes how, how well, and by whom technology is used, as well as the resources required for user support. Technology extends to all parts of the educational enterprise, including libraries and information services; security needs, both for the protection of facilities and equipment and for the safety of students and staff; and the integration of technology into areas such as facility design, professional development, and training.
For the purposes of this handbook, equipment includes both hardware and software, such as:
It is also important to consider the institutional knowledge base of schools and districts as a factor in technology in schools; it serves as a foundation for an effective system and can be observed in patterns of institutional behaviors that provide continuity to the educational system.
Key Questions → Indicators → Data Elements → Unit Records
This guide is organized around key questions that are asked about the distribution and implementation of technology in the educational environment. They reflect the primary concerns about technology of decision makers and stakeholders in the educational enterprise. They may be asked by anyone inside or outside the educational environment, but are usually asked by decision makers who have an impact on the distribution of resources. Key questions often pertain to the type, availability, distribution, and use of computer technology and peripherals, as well as related software and numerous other related factors.
Key questions often turn out to be complex and multifaceted when scrutinized with a view to gathering information that would provide a useful response. Take a simple-sounding key question, such as "How many computers are there in this school district?" On the surface, it would seem that the person asking the question knows what information is available and what is to be done with the answer. However, the person doing the work of getting the information finds all sorts of dilemmas. First, what is really meant by computer? Does an old computer stored in a closet still count? What if a computer doesn't work any longer?
A second dilemma is where to get the information. Are there records about computers that were purchased or does someone have to count how many computers there are? In this fashion, apparently simple key questions may require considerable elaboration in order to clarify what information is to be collected and make sure that it is measurable.
In any case, the person asked to gather this information needs to develop some measures that will help arrive at a satisfactory response. Those measures that provide answers to the question are called indicators.
A data element is a single item of information or measurement in a database (or other collection of information) that is the basis of an indicator. For the sake of brevity and narrative clarity, the data elements for all chapters are indexed by key question and indicator in Appendix A. Appendix B then offers examples of rules used to combine data elements into indicators.
Some indicators are simply based on a single data element, while others may require more complex combinations of data elements. For example, the number of computers is a simple indicator. A more complex indicator would be the percentage or ratio of the number of "functioning" computers to the number of students. In some cases, more than one indicator may be required to provide a meaningful response to a key question.
A collection of data elements for a single unit (which could be a single computer, a single technology user such as a teacher, student, or administrator, or a single classroom) is called a unit record. The individual item about which a series of data elements is collected is a unit. In effect, information (data elements) is collected about a computer (the unit on which a record is kept), and hence a unit record is created. For example, the year a computer was purchased is a data element. Information about its repair condition or its location is also a data element. The computer is the unit. The string of information about that computer becomes the unit record.
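The relationship among units, data elements, indicators, and unit records described above can be sketched in code. The record layout and names below are hypothetical, not drawn from the handbook; they simply illustrate how data elements collected per unit (here, one record per computer) might be combined into an indicator such as the ratio of functioning computers to enrolled students.

```python
from dataclasses import dataclass

# A hypothetical unit record: each instance holds the data elements
# collected about a single unit (here, one computer).
@dataclass
class ComputerRecord:
    year_purchased: int   # data element
    location: str         # data element
    functioning: bool     # data element (repair condition)

def computers_per_student(records: list[ComputerRecord], enrollment: int) -> float:
    """Combine data elements across unit records into an indicator:
    the ratio of functioning computers to enrolled students."""
    functioning = sum(1 for r in records if r.functioning)
    return functioning / enrollment

# Three unit records; the one in storage is counted out by the
# "functioning" data element, addressing the closet-computer dilemma.
records = [
    ComputerRecord(1998, "Room 101", True),
    ComputerRecord(1995, "Storage closet", False),
    ComputerRecord(1999, "Science lab", True),
]
print(computers_per_student(records, enrollment=200))
```

The design point is that the answer to the key question ("How many usable computers per student?") is computed from data elements already kept per unit, rather than gathered by a separate ad hoc count.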
Readers, according to their goals, can use the key questions and indicators by topic to develop information to support decision making. They can:
Users can begin by reviewing the complete list of key questions on page iii of the handbook and then refer directly to the chapter that covers a key question of interest. They can then consider one or more of the indicators that help to answer that question. Or, they can go directly to a topic of interest, such as finance, and study key questions and indicators dealing with that topic only.
Each of the substantive chapters begins with a list of key questions, followed by a narrative overview and definition of the chapter topic. Each key question is then discussed in turn, listing one or more indicators. Terms are described when they are first used. Where relevant, an example unit record is provided. Each chapter ends with a list of resources and references.
The material and key questions on technology related to education are grouped into seven chapters, based on the best judgment of experts in education technology after review of available materials.
This handbook condenses a great deal of information: nearly three dozen key questions and several hundred indicators and data elements. The intent has been to provide a comprehensive list of indicators and data elements from which users may choose standard terms and measures for their own purposes. Creating a database or a computer system to represent all this information would be a substantial burden for technology coordinators who spend most of their time supporting users. The handbook authors offer suggestions and alternatives for indicators that answer key questions; they do not prescribe that all of this information be collected. Rather, it is expected that users will choose indicators and data elements that address issues of particular interest and importance to their schools and districts.
Users should bear in mind that while the information included in this handbook is based on the best and most current assessment by experts, technology is extremely dynamic and subject to continuous and rapid change. Adapting the handbook’s information to new technology and applications is part and parcel of using this guide. Users with ideas for changes should also see the note on “Feedback and More Information” at the end of the Foreword.
The guide's indicators of technology availability and use can be paired with locally determined measures of student achievement, operational efficiency, or other outcomes, so as to assess the relation between technology inputs and desired results.* This handbook does not directly address student or management outcomes, beyond evidence of deployment and utilization of technology in the K-12 setting. Outcome measurements (not themselves technology indicators) are beyond the scope of this document.
This handbook also does not directly address measurement issues, such as the reliability and validity of the data elements listed. Measurements are, to varying degrees, reproducible over time and across inquirers and forms of inquiry; and they are, to varying degrees, also accurate reflections of the concepts they purport to measure (as determined by a consensus of stakeholders, or other means). These issues matter, and much is written about them, but their proper consideration exceeds both the space available and the competence of our panel.
The purpose of this document is to allow decision makers to make choices about the various kinds of information they need, to select some questions that are truly "key," and to focus and organize data collection and information management to produce useful information, so as to make better decisions.
(Note: There is one part of the story for each chapter of the handbook.)
Jane is settling into her new job and is holding a meeting with key staff members to learn more about the programs at Freshlook County Schools. Today she is meeting with John Techno, the district’s technology coordinator.
The first two questions she asks him are “What did we spend last year on technology in the district?” and “I started out as a science teacher; how is technology being used in science instruction?”
John answers, “I can get you the expenditure numbers from our technology plan. And as for science instruction, each high school has wiring drops in every science lab and three Pentiums® running Windows 98 in each lab.”
Jane says, “I appreciate the information, John, but what does that tell me about how students are using technology to learn science?”
John replies, “Well, Dr. Neussup, I don’t know, but I’ll find out.”
[To be continued…]
Information pertinent to key questions may be obtained from a variety of sources:
* For a recent summary of issues and findings in evaluating technology’s impact on student outcomes, see The 1999 Secretary’s Conference on Educational Technology: Evaluating the Effectiveness of Technology http://www.ed.gov/rschstat/eval/tech/techconf99/index.html; in particular, there are a number of white papers on assessing technology in relation to student outcomes. Unfortunately, there is much less published on the impact of technology on school management and function.