Technology in Schools
NCES 2003-313
November 2002

Technology in Schools: Suggestions, Tools, and Guidelines for Assessing Technology in Elementary and Secondary Education


Members of the Technology in Schools Task Force


Chair

Tom Ogle
Director, School Core Data
Missouri Department of Elementary and Secondary Education

Members

Morgan Branch
Director, Technology Service, Curriculum and Instruction
Tennessee State Department of Education
Bethann Canada
Director, Information Technology
Virginia Department of Education
Oren Christmas
Assistant MEIS Administrator, Center for Educational Performance and Information
Michigan Department of Education
Judith Fillion
Division Director, Program Support
New Hampshire State Department of Education
Ed Goddard
Evaluator, Federal Programs Department
Clark County School District, Nevada
N. Blair Loudat
Director, Technology and Information Services
North Clackamas School District, Oregon
Tom Purwin
Director, Education Technology/Information Systems
Jersey City Public School District, New Jersey
Andy Rogers
Director, Instructional Technology Applications
Los Angeles Unified School District, California
Mike Vinson
Superintendent
Tupelo Public School District, Mississippi

Consultants

John Clement
Education Statistics Services Institute
American Institutes for Research
Lee Hoffman
National Center for Education Statistics
U.S. Department of Education
Carl Schmitt
National Center for Education Statistics
U.S. Department of Education



Master List of Key Questions

Chapter 1: Technology Planning and Policies

Chapter 2: Finance

Chapter 3: Equipment and Infrastructure

Chapter 4: Technology Applications

Chapter 5: Maintenance and Support

Chapter 6: Professional Development

Chapter 7: Technology Integration



Acknowledgments

This document was developed through the National Cooperative Education Statistics System and funded by the National Center for Education Statistics (NCES) of the U.S. Department of Education.

The Technology in Schools Task Force, whose members represent many levels of the educational system, wishes to acknowledge the efforts of many individuals who contributed to the compilation and development of this document. In particular, John Clement of the Education Statistics Services Institute, and Lee Hoffman and Carl Schmitt of the National Center for Education Statistics, provided guidance and insight from inception to final form.

The task force also wishes to thank the following external reviewers, who examined the draft and made many valuable suggestions, some of which were adopted:

Barbara Clements, Evaluation Software Publishing, Inc.;

Sara Fitzgerald, Consortium for School Networking;

Ann Flynn, Technology Leadership Network, ITTE-National School Boards Association;

Laurence Goldberg, Director, Technology and Telecommunications, Abington School District, Abington, Pennsylvania;

Melinda George, Executive Director, State Educational Technology Directors Association (SETDA);

Julian Katz, Supervisor, Data Analysis, Howard County Public Schools, Ellicott City, Maryland;

Keith Krueger, Executive Director, Consortium for School Networking;

Lawrence Lanahan, Education Statistics Services Institute;

Tim Magner, Director, Schools Interoperability Framework (SIF), Software and Information Industry Association;

Catherine Mozer Connor, Office of the Inspector General, U.S. Department of Education;

Jeffery Rodamar, Planning and Evaluation Service, U.S. Department of Education;

Craig Stanton, Office of the Undersecretary, Budget Service, U.S. Department of Education; and

Geannie Wells, Director, Center for Accountability Solutions, American Association of School Administrators (AASA).

Last but by no means least, the task force acknowledges with gratitude the efforts of those who edited the handbook and prepared it for publication: Deborah Durham-Vichr, editorial consultant; and Martin Hahn (editor), Cecelia Marsh (proofing), and Mariel Escudero (design and layout) of the Education Statistics Services Institute.



Foreword

This guide began with discussions within the National Forum on Education Statistics (the Forum) about the number and diversity of technology-related surveys that schools, school districts, and state departments of education are asked to complete. Consensus developed that agreement on the important questions, and an understanding of how answers to these questions might be assessed, would serve an important public policy purpose. A Forum Technology in Schools Task Force was created and began its work in January 1999.

This document is the result of the task force's work over more than 3 years and represents a joint effort of state and local education agency representatives who are involved with issues related to technology in schools.

The Forum is a representative body sponsored by the National Center for Education Statistics (NCES) of the U.S. Department of Education in collaboration with the states. The purpose of the Forum is to help improve the quality, comparability, and timeliness of data used for education policy at all levels of government. The mission of the Forum is to develop and recommend strategies for building an education data system that will support local, state, and national efforts to improve public and private education throughout the United States.

The results of the task force's labors provide a resource for educators and policy makers who are responsible for assessing the need for, and the effects of, technology in schools. The strategy chosen by the authors has been to identify key questions on the use of technology in educational management and instruction, and to specify how such questions might be answered. Throughout, the task force's intent has been to suggest, not prescribe. This guide provides a wide range of options and suggestions for technology administrators to adapt assessment to their school's situation and needs. The indicators and data elements listed in the handbook are a larger collection than any school district may want to establish.

Feedback and More Information

Please note that this guide is also available on the Forum's web site. Since technology and schools evolve, this document will require continual revision. We urge readers and users to provide comments, examples of data collection efforts, and materials. Feedback can be provided to the Technology, Dissemination and Communication (TD&C) Committee of the National Forum on Education Statistics, through the web site at http://nces.ed.gov/forum, where information about the Forum, its membership, meetings, and working groups can also be found.

A number of references in the Handbook refer to web sites on the Internet. These references were current at the time of publication; however, the authors cannot guarantee that they will continue to work into the future. The online version of the Handbook will have its references periodically updated and web sites checked. Readers finding expired web site references are asked to look elsewhere on the same web site, to communicate with web site owners, and to communicate with the handbook authors through the Forum web site at http://nces.ed.gov/forum.

Users may also be interested in related Forum products, available on the web site above.



Executive Summary

This handbook is intended to facilitate the assessment of technology used to support elementary and secondary education in the United States. It is designed to help decision makers and technology users prepare, collect, and assess information about whether and how technology is being used in their school systems. Well-focused data are necessary to make assessments that can serve as the basis for good decisions about the distribution and use of computers in the educational environment.

Since computer-based communications technologies are continually evolving, and since their distribution throughout the education system is continually changing, responding to the demand for technology data requires ongoing information gathering. Deciding what levels and types of technology are required and/or deployed to accomplish instructional or management goals requires information and insight into the roles that technology plays in the education system.

Since education groups of all kinds-from policymakers at various levels, to commercial interests, to professional associations, to education managers and planners-repeatedly ask nearly the same questions, coming to agreement on standard questions and answers can help reduce redundancy and improve comparability in the questions asked and the answers provided. More timely and accurate data collection might in turn lead to reduced frequency of collection; it should certainly lead to more consistent reporting.

Ad hoc technology surveys are expensive and time-consuming for all participants and rarely produce information that can be compared across states or districts or over time. Much of the information needed about the status and use of technology resources in schools can be provided by existing information systems or obtained from available records that schools or school districts may keep about their computer and software purchases, use, and maintenance. But some information may be more appropriately gathered by way of specially designed and administered surveys using questionnaires focused on those specific issues. Building the capacity to answer key policy questions into management systems, whether for property, staff, or instructional support, can lead to better data with less effort.

Because the role and impact of technology in the education system are extremely pervasive and the need to know correspondingly broad, this guide deals with the integration of a wide range of electronic technologies into support of school management and instruction. Topics include not only the availability of equipment and software, but also function: the ways of using computers and networks, and other equipment, to support all aspects of the school enterprise.

Key audiences for this handbook are those people who collect, store, publish, or use information about technology in its applications in schools and districts. These include educators and educational administrators-teachers, principals, and technology coordinators-as well as hardware and software vendors and information collectors and users. Other important audiences are program managers and planners at federal, state, and district levels.

The guide is organized around key questions that the Technology in Schools Task Force authors have determined to be central, pertaining to the type, availability, and use of technology in education systems. The task force was composed of state education agency managers and school district technology coordinators, practitioners, and leaders; they discussed among themselves and polled their colleagues to identify the most commonly asked, and most important, questions about technology in schools.

The key questions are grouped into seven primary topics, each with a chapter:

  • technology planning and policies;
  • finance;
  • equipment and infrastructure;
  • technology applications (software and systems);
  • maintenance and support;
  • professional development and training; and
  • technology integration.

For each topic, authors identified key questions and how they could best be answered. A measure whose result answers a key question is called an indicator, and more than one indicator can be provided for a given key question. Much of the panel's discussions dealt with which indicators were the proper measures for key questions-ones that were both measurable and meaningful as responses to the key questions, and that ideally would retain their meaning across time and technological evolution. Indicators are based on single items of information called data elements. Data elements may be combined in various ways to produce indicators.

After listing key questions for the topic and an overview, each chapter defines the topic precisely in order to delimit the area of assessment and then discusses the indicators that provide answers to key questions. Technology administrators will have a range of suggestions and options to adapt to their own assessment needs. Indeed, making it possible to adapt suggestions for assessment to a school district's requirements is a major purpose of this guide.

The indicators and data elements that comprise answers to key questions include a range of information that may extend beyond the requirements of a given school or school district. The document is deliberately broad in scope in order to meet a diverse range of needs. On the other hand, the information included may not reflect all the needs of some school settings. It should be possible, however, to gain sufficient insight from the items provided to construct what is required to evaluate the status of technology in a given school environment.



Introduction

In recent years, schools have invested heavily in putting technology-especially computers and their associated infrastructure-in the hands of students, teachers, and administrators. Many people involved in education, from legislators to teachers to parents, as well as the general public, want to know what technology exists in schools and how that technology is being used. These are a few of the questions that are typically asked:

  • How can technology support the educational vision for our district?
  • What are our technology needs?
  • Are our technology goals right for our needs?
  • Have we reached our technology goals yet?
  • Where has the money gone?
  • Are we doing as well as others?



Purpose of this Guide

This guide has been developed to help answer those questions listed on the previous page and many others related to them. It is meant to fulfill several purposes:

  • to provide guidelines and tools to gather information on the presence and use of technology in schools;
  • to facilitate the development and maintenance of data on technology in schools;
  • to help reduce the redundancy and diversity in data collection and, simultaneously, to facilitate comparability in the information obtained; and
  • to increase awareness of the breadth of issues related to the deployment of technology in educational settings.

As it fulfills these purposes, the guide should help focus questions asked about computer technology in education so that more meaningful policy and discussion can emerge.



For Whom Is This Guide Intended?

This document was prepared for the people who must request, collect, assemble, or assess information on technology in schools. The main intended audiences include:

  • Technology coordinators for schools or districts who need to store information in a database for later retrieval, so they can answer questions in technology surveys, inventories, etc.
  • Principals and school administrators who want to ensure that technology is being used effectively in their school or district. In addition, special program coordinators (e.g., for Title I, special education) may want to know how technology can support their program goals.
  • National, state, and local decision makers who are responsible for planning for technology in schools, allocating resources to the schools, and assessing the effects of technology in them.
  • Legislators and other policymakers (or their staff) who want to know how funds appropriated for school technology are being used.

Others within the educational environment who may directly benefit from this handbook include teachers who are looking for information on technology proficiency standards, survey developers who want to compare ideas for their own questions, and software vendors who create information management systems for schools.

Since education groups of all kinds-from policy makers at various levels, to commercial interests, to professional associations, to education managers and planners-repeatedly ask nearly the same questions, agreement on standard questions and answers can help reduce redundancy and improve comparability in the questions asked and the answers provided. More timely and accurate data collection might in turn lead to reduced frequency of collection and more consistent reporting.



Defining Technology in Schools

The term technology in schools can have many different meanings in different contexts and times. As used in this guide, technology pertains to the full range of computer and computer-related equipment and associated operating systems, networking, and tool software that provide the infrastructure over which instructional and school management applications of various kinds operate. In order to assess the effects of technology, this document also goes beyond equipment and infrastructure: it includes how, how well, and by whom technology is used, as well as the resources required for user support. Technology extends to all parts of the educational enterprise, including libraries and information services; security, both for the protection of facilities and equipment and for the safety of students and staff; and the integration of technology into such areas as facility design and professional development and training.

For the purposes of this handbook, equipment includes both hardware and software, such as:

  • computers and computer-driven equipment, as well as the peripherals that are attached to computers (such as printers, scanners, digital cameras, projectors, etc.);
  • servers, routers, switches, transceivers, and other equipment that support wired and wireless communication between computers, providing access to other computers, local- and wide-area networks, and the global Internet;
  • support for state-of-the-art telephone-based technology, including voicemail and fax technologies, that can improve instructional and administrative capabilities and support parent-school communication;
  • audio and video equipment (including satellite receivers and transmitters, cable boxes, and other items) used in distance education;
  • display equipment used in classrooms, including television monitors, opaque and transparent projectors, and electronic whiteboards;
  • specialized calculators and computers, including personal digital assistants, graphing calculators, and measuring/data collection tools for such purposes as chemical or biological assay or weather measurements;
  • the infrastructure of wires and cables (and, more and more, the wireless systems) that support computer-based networking and video access. Although this infrastructure is formally part of school facilities, some elements are defined in this handbook since their specification matters to the operation of technology in schools; and
  • the software applications and programs that are pertinent to the education system. These include programs that are used to support instruction or control management processes.

It is also important to consider the institutional knowledge base of schools and districts as a factor in technology in schools; it serves as a foundation for an effective system and can be observed in patterns of institutional behaviors that provide continuity to the educational system.



Organizing Principles of This Guide

Key Questions → Indicators → Data Elements → Unit Records


Key Questions

This guide is organized around key questions that are asked about the distribution and implementation of technology in the educational environment. They reflect the primary concerns about technology of decision makers and stakeholders in the educational enterprise. They may be asked by anyone inside or outside the educational environment, but are usually asked by decision makers who have an impact on the distribution of resources. Key questions often pertain to the type, availability, distribution, and use of computer technology and peripherals, as well as related software and numerous other related factors.

Key questions often turn out to be complex and multifaceted when scrutinized with a view to gathering information that would provide a useful response. Take a simple-sounding key question, such as "How many computers are there in this school district?" On the surface, it would seem that the person asking the question knows what information is available and what is to be done with the answer. However, the person doing the work of getting the information finds all sorts of dilemmas. First, what is really meant by computer? Does an old computer stored in a closet still count? What if a computer doesn't work any longer?

A second dilemma is where to get the information. Are there records about computers that were purchased or does someone have to count how many computers there are? In this fashion, apparently simple key questions may require considerable elaboration in order to clarify what information is to be collected and make sure that it is measurable.
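A sketch may make the first dilemma concrete: the answer to "How many computers are there?" depends entirely on the inclusion rules adopted. The inventory records and field names below are purely hypothetical, invented for illustration:

```python
# Hypothetical inventory records; the statuses and fields are illustrative only.
inventory = [
    {"type": "desktop", "status": "in use"},
    {"type": "desktop", "status": "in storage"},
    {"type": "desktop", "status": "broken"},
    {"type": "laptop",  "status": "in use"},
]

def count_computers(records, include_statuses):
    """Count only the records whose status falls within the agreed definition."""
    return sum(1 for r in records if r["status"] in include_statuses)

# The same data yield different answers under different definitions:
all_units = count_computers(inventory, {"in use", "in storage", "broken"})
working = count_computers(inventory, {"in use"})
print(all_units, working)  # 4 vs. 2
```

Agreeing on the definition before counting, rather than after, is precisely the kind of elaboration the text describes.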



Indicators

In any case, the person asked to gather this information needs to develop some measures that will help arrive at a satisfactory response. Those measures that provide answers to the question are called indicators.



Data Elements

A data element is a single item of information or measurement in a database (or other collection of information) that is the basis of an indicator. For the sake of brevity and narrative clarity, the data elements for all chapters are indexed by key question and indicator in Appendix A. Appendix B then offers examples of rules used to combine data elements into indicators.

Some indicators are simply based on a single data element, while others may require more complex combinations of data elements. For example, the number of computers is a simple indicator. A more complex indicator would be the percentage or ratio of the number of "functioning" computers to the number of students. In some cases, more than one indicator may be required to provide a meaningful response to a key question.
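The contrast between a simple and a more complex indicator can be sketched as follows; the counts are invented for illustration only:

```python
# Illustrative sketch: combining data elements into an indicator.
# Both values below are hypothetical data elements, not real figures.
functioning_computers = 250  # data element: count of working machines
enrollment = 1000            # data element: number of enrolled students

# Simple indicator: the count itself answers the question directly.
# Complex indicator: a ratio combining two data elements.
students_per_computer = enrollment / functioning_computers
print(students_per_computer)  # 4.0 students per functioning computer
```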



Unit Records

A collection of data elements for a single unit (which could be a single computer, a single technology user-teacher, student, or administrator-or a single classroom) is called a unit record. The individual item about which a series of data elements is collected is a unit. In effect, information (data elements) is collected about a computer (the unit on which a record is kept), and hence a unit record is created. For example, the year a computer was purchased is a data element; its repair condition and its location are also data elements. The computer is the unit. The string of information about that computer becomes the unit record.
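As a sketch, a unit record for one computer might be represented as follows. The class and field names are illustrative only, not a schema prescribed by the handbook; the fields simply mirror the data elements named above (purchase year, repair condition, location):

```python
from dataclasses import dataclass

# One unit record: a bundle of data elements describing a single unit (here, a computer).
@dataclass
class ComputerUnitRecord:
    asset_id: str          # identifies the unit
    year_purchased: int    # data element
    repair_condition: str  # data element, e.g. "functioning" or "needs repair"
    location: str          # data element, e.g. "Room 101" or "library"

record = ComputerUnitRecord("PC-0042", 2001, "functioning", "Room 101")
print(record.year_purchased)  # 2001
```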



Using the Guide

Readers, according to their goals, can use the key questions and indicators by topic to develop information to support decision making. They can:

  • use key questions to delimit the scope of an assessment;
  • use indicators to describe areas to be assessed;
  • use indicators as "authoritative support" for key questions, especially if standards-based indicators are used for measures (for example, in Chapter 7, Technology Integration);
  • use other indicators as items for surveys in and of themselves, or base surveys on measures listed; and
  • construct databases from data elements in this handbook, in order to establish a "data warehouse" on the status of technology in schools and districts.
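The last bullet, constructing a database from data elements, might look like the following minimal sketch using an in-memory store. The table and column names are assumptions for illustration, not prescriptions from the handbook:

```python
import sqlite3

# A toy "data warehouse": one table per unit type, one column per data element.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE computer (
        asset_id TEXT PRIMARY KEY,
        year_purchased INTEGER,
        repair_condition TEXT,
        location TEXT
    )
""")
conn.executemany(
    "INSERT INTO computer VALUES (?, ?, ?, ?)",
    [("PC-0001", 1999, "functioning", "Room 101"),
     ("PC-0002", 2001, "needs repair", "library")],
)

# An indicator is then just a query over the stored data elements:
(count,) = conn.execute(
    "SELECT COUNT(*) FROM computer WHERE repair_condition = 'functioning'"
).fetchone()
print(count)  # 1 functioning computer
```

Once data elements live in such a store, new indicators can be produced by writing new queries rather than by mounting new surveys.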

Users can begin by reviewing the complete list of key questions at the front of the handbook and then refer directly to the chapter that covers a key question of interest. They can then consider one or more of the indicators that help to answer that question. Or, they can go directly to a topic of interest, such as finance, and study key questions and indicators dealing with that topic only.



What Topics This Guide Covers

Each of the substantive chapters begins with a list of key questions, followed by a narrative overview and definition of the chapter topic. Each key question is then discussed in turn, listing one or more indicators. Terms are described when they are first used. Where relevant, an example unit record is provided. Each chapter ends with a list of resources and references.

The material and key questions on technology related to education are grouped into seven chapters, based on the best judgment of experts in education technology after review of available materials.

  • Chapter 1-Technology Planning and Policies-addresses the documented strategies that provide direction for the acquisition, use, maintenance, and expansion of technology in the educational enterprise. Three major areas are addressed: vision, access, and integration.
  • Chapter 2-Finance-covers issues related to expenditure categories for technology.
  • Chapter 3-Equipment and Infrastructure-describes the availability of computers and other equipment in use in administrative and instructional settings, as well as the connection of computers and other equipment to local and wide-area networks and to the Internet.
  • Chapter 4-Technology Applications-pertains to the administrative (e.g., school management and record keeping) and instructional uses (e.g., instructional software or distance learning) to which computer technology is put.
  • Chapter 5-Maintenance and Support-focuses on the processes employed to maintain computer hardware and software (what organizations do to maintain technology systems), as well as what personnel are allocated for technology maintenance and under what circumstances.
  • Chapter 6-Professional Development-pertains to professional development and training related to technology (i.e., tracking professional development opportunities being offered, who has taken courses, training and professional development needs, and potential effects of such training).
  • Chapter 7-Technology Integration-pertains to how and to what extent technology is a tool for administrative productivity, decision making, and instructional practice. Indicators in this chapter address student and staff proficiency in the use of technology, the integration of technology into the curriculum and teaching practice, and the use of computers and network systems in school management.



What the Handbook Does and Does Not Do

This handbook condenses a great deal of information: nearly three dozen key questions and several hundred indicators and data elements. The intent has been to provide a comprehensive list of indicators and data elements from which users may choose standard terms and measures for their own purposes. Creating a database or a computer system to represent all this information would be a substantial burden for technology coordinators who spend most of their time supporting users. The handbook authors offer suggestions and alternatives for indicators that answer key questions; they do not prescribe that all of this information be collected. Rather, it is expected that users will choose indicators and data elements that address issues of particular interest and importance to their schools and districts.



Sidebar Topics


"The Changing Nature of Information"

Users should bear in mind that while the information included in this handbook is based on the best and most current assessment by experts, technology is extremely dynamic and subject to continuous and rapid change. Adapting the handbook’s information to new technology and applications is part and parcel of using this guide. Users with ideas for changes should also see the note on “Feedback and More Information” at the end of the Foreword.

The guide's indicators of technology availability and use can be paired with locally determined measures of student achievement, operational efficiency, or other outcomes, so as to assess the relation between technology inputs and desired results.* This handbook does not directly address student or management outcomes, beyond evidence of deployment and utilization of technology in the K-12 setting. Outcome measurements (not themselves technology indicators) are beyond the scope of this document.

This handbook also does not directly address measurement issues, such as the reliability and validity of the data elements listed. Measurements are, to varying degrees, reproducible over time and across inquirers and forms of inquiry; and they are, to varying degrees, also accurate reflections of the concepts they purport to measure (as determined by a consensus of stakeholders, or other means). These issues matter, and much is written about them, but their proper consideration exceeds both the space available and the competence of our panel.

The purpose of this document is to allow decision makers to make choices about the various kinds of information they need, to select some questions that are truly "key," and to focus and organize data collection and information management to produce useful information, so as to make better decisions.



The story of Jane Neussup, the newly appointed superintendent of Freshlook County Schools…

(Note: There is one part of the story for each chapter of the handbook.)

Introduction

Jane is settling into her new job and is holding a meeting with key staff members to learn more about the programs at Freshlook County Schools. Today she is meeting with John Techno, the district’s technology coordinator.

The first two questions she asks him are “What did we spend last year on technology in the district?” and “I started out as a science teacher; how is technology being used in science instruction?”

John answers, “I can get you the expenditure numbers from our technology plan. And as for science instruction, each high school has wiring drops in every science lab and three Pentiums® running Windows 98 in each lab.”

Jane says, “I appreciate the information, John, but what does that tell me about how students are using technology to learn science?”

John replies, “Well, Dr. Neussup, I don’t know, but I’ll find out.”

[To be continued…]



"Getting It All Together"

Information pertinent to key questions may be obtained from a variety of sources:

  • Administrative records associated with the purchase and maintenance of technology may already hold much information.
  • Standards created by a range of local, state, and national organizations can be converted to ratings systems. Ratings produced by applying such systems assess the status of standards-based indicators. Technology coordinators, administrators, or teachers may be asked to provide such ratings, ideally after training to improve consistency and understanding.
  • External evaluators can help address issues of rater objectivity and measurement consistency. Ad hoc surveys, self-ratings, and observation may also be useful tools, depending on needs, the data collector, and the school setting.



* For a recent summary of issues and findings in evaluating technology’s impact on student outcomes, see The 1999 Secretary’s Conference on Educational Technology: Evaluating the Effectiveness of Technology http://www.ed.gov/rschstat/eval/tech/techconf99/index.html; in particular, there are a number of white papers on assessing technology in relation to student outcomes. Unfortunately, there is much less published on the impact of technology on school management and function.
