Chapter 15—Are We “There” Yet? Evaluating Your LDS

The best time to think about LDS evaluation is during the planning phase. A key question to consider early on, and throughout the development process, is "How will we know if we've been successful?" Planning ahead of time for the evaluation ensures you will know how to measure progress right from the beginning of development and implementation.

During the planning phase, an agency should establish project goals and objectives that will become the very basis for the evaluation, as well as the metrics on which the evaluation will rely. In turn, your evaluator will be able to convert these objectives into evaluation designs and measurement protocols. Then, right from the start of the implementation, you (or the evaluator) can collect baseline data against which later measures will be compared.

How well does your LDS deliver on your stated criteria for success? How does it enhance operations, improve data quality, and facilitate better data-driven decisionmaking? Your ability to evaluate the system depends largely on how you planned the project from the very beginning. How clear was your vision of the desired system? Did you identify unambiguous and measurable goals? Clarity at the outset will help you and your stakeholders evaluate whether your efforts are paying off, and help you make adjustments to refine the system over time (see chapters 8 and 14).

The Subjectivity of "Success"

Without clearly stated goals and criteria for measuring how close you come to reaching them, success may depend on the user's vantage point. For instance, technologically savvy users may expect cutting-edge applications, while average users may be more interested in usability and user-friendly interfaces (Miller 2000). The expectations set at the beginning of the project may also affect perceptions. The messages conveyed by leaders of the marketing and outreach effort, for instance, will shape how others view the system. Care should therefore be taken not to create unrealistically high expectations. If you paint too ambitious a picture when trying to win support, stakeholders may end up dissatisfied with the results (Staples 2002).

Measuring LDS Success

Before assessment begins, and ideally even before the project begins, agency leaders or stakeholder groups serving on an evaluation committee should reflect on some basic questions:

- Who will evaluate the system? (Agencies commonly bring in an outside consultant.)
- From whose perspective will the system's success be judged (agency leaders, IT staff, teachers, etc.)?
- What criteria will be used to judge success? (Identify well-defined assessment criteria aligned with the established LDS goals.)
- Will greater emphasis be put on timeliness or on progress?
- What types of information should be used to gauge success (for instance, qualitative or quantitative)?
- What methods will be used to gather and analyze the information?

Later, when interpreting evaluation results, consider changes that may have affected the project and the ability to achieve the original goals. Did political leadership change? Was there significant turnover among in-house or vendor staff? Were new laws passed? Did technology changes have any impact? Did new resources become available, or did a downturn in the economy affect the budget?

Evaluation methods

Evaluators may use various tools to gauge system success. Commonly used methods include

- surveys (online and paper);
- case studies; and
- direct observation.

Evaluation criteria and questions

There are many possible questions you can use to evaluate your system's success. Table 5 presents a collection of evaluation criteria and questions, which can be answered using one or more of the methods above. While these questions are qualitative in nature, you should include solid, measurable criteria in the evaluation whenever possible.