This blog is part of a guest series by the Cost Analysis in Practice (CAP) project team to discuss practical details regarding cost studies.
In one of our periodic conversations about addressing cost analysis challenges for an efficacy trial, the Cost Analysis in Practice (CAP) Project and Promoting Accelerated Reading Comprehension of Text-Local (PACT-L) teams took on a number of questions related to data collection. The PACT-L cost analysts face a particularly daunting task: over 100 schools, spread across multiple districts and served over the course of three cohorts, are participating in a social studies and reading comprehension intervention. Here, we highlight some of the issues we discussed and our advice.
Do we need to collect information about resource use in every district in our study?
For an efficacy study, you should collect data from all districts for at least the first cohort to assess the variation in resource use. If there isn’t much variation, then you can justify limiting data collection to a sample of districts for subsequent cohorts.
Do we need to collect data from every school within each district?
Similar to the previous question, you would ideally collect data from every participating school within each district and assess variability across schools. If funding for cost analysis is limited, you may be able to justify collecting data from a stratified random sample of schools within each district, stratified on study-relevant characteristics, and presenting a range of costs to reflect differences. Note that “district” and “school” here reflect one common setup in an educational randomized controlled trial; other blocking and clustering units can stand in under other study designs and contexts. One way to draw such a stratified sample is sketched below.
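As a concrete illustration of the stratification idea, here is a minimal sketch in Python. It assumes a school roster table with illustrative column names (district, school_id, enrollment_band); these are not the PACT-L variables, and the stratification variables should be whatever characteristics matter for your study.

```python
# Minimal sketch: draw a stratified random sample of schools within each district.
# Column names (district, school_id, enrollment_band) are illustrative only;
# substitute the study-relevant characteristics you actually stratify on.
import pandas as pd

roster = pd.DataFrame({
    "district":        ["A"] * 4 + ["B"] * 4,
    "school_id":       range(1, 9),
    "enrollment_band": ["small", "small", "large", "large"] * 2,
})

# One school per district-by-enrollment-band cell, drawn at random with a fixed
# seed so the sample is reproducible and can be documented in the cost report.
sample = (
    roster
    .groupby(["district", "enrollment_band"])
    .sample(n=1, random_state=2021)
    .reset_index(drop=True)
)
print(sample)
```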
How often should we collect cost data?
The frequency of data collection depends on the nature of the intervention, the length of implementation, and the types of resources (“ingredients”) needed. People’s time is usually the most important resource used in educational interventions, often accounting for 90% of total costs, so that is where you should spend the most effort collecting data. Unfortunately, people are notoriously bad at reporting their time use, so ask for time use as often as you can (daily, weekly). Make it as easy as possible for people to respond, and offer financial incentives if possible. For efficacy trials in particular, be sure to collect cost data for each year of implementation so that you accurately capture the resources needed to produce the observed effects.
What’s the best way to collect time use data?
There are a few ways to collect time use data. The PACT-L team has had success with 2-question time logs (see Table 1) administered at the end of each history lesson during the fall quarter, plus a slightly longer 7-question final log (see Table 2).
Table 1. Two-question time log. Copyright © 2021 American Institutes for Research.
1. Approximately, how many days did you spend teaching your [NAME OF THE UNIT] unit?
   ____ total days
2. Approximately, how many hours of time outside class did you spend on the following activities for the [NAME OF UNIT] unit? Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5).
   a. Developing lesson plans: _____ hour(s)
   b. Grading student assignments: _____ hour(s)
   c. Developing curricular materials, student assignments, or student assessments: _____ hour(s)
   d. Providing additional assistance to students: _____ hour(s)
   e. Other activities related to the unit (e.g., coordinating with other staff; communicating with parents): _____ hour(s)
Table 2. Additional questions for the final log. Copyright © 2021 American Institutes for Research.
3. Just thinking of summer and fall, to prepare for teaching your American History classes, how many hours of professional development or training have you received so far this year (e.g., trainings, coursework, coaching)? Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5).
   _____ hour(s)
4. So far this year, did each student receive a school-provided textbook (either printed or in digital form) for this history class?
   ______ Yes ______ No
5. So far this year, did each student receive published materials other than a textbook (e.g., readings, worksheets, activities) for your American History classes?
   ______ Yes ______ No
6. So far this year, what percentage of class time did you use the following materials for your American History classes? Record the average percentage of class time for each type of material (responses must add to 100%).
   a. A hardcopy textbook provided by the school: _____%
   b. Published materials that were provided to you, other than a textbook (e.g., readings, worksheets, activities): _____%
   c. Other curricular materials that you located/provided yourself: _____%
   d. Technology-based curricular materials or software (e.g., books online, online activities): _____%
   Total: 100%
7. So far this year, how many hours during a typical week did the following people help you with your American History course? Please answer for all that apply. Record time to the nearest half hour (e.g., 1, 1.5, 2, 2.5).
   a. Teaching assistant: _____ hours during a typical week
   b. Special education teacher: _____ hours during a typical week
   c. English learner teacher: _____ hours during a typical week
   d. Principal or assistant principal: _____ hours during a typical week
   e. Other administrative staff: _____ hours during a typical week
   f. Coach: _____ hours during a typical week
   g. Volunteer: _____ hours during a typical week
The PACT-L team also provided financial incentives. If you cannot use time logs, interviews with a random sample of participants will likely yield more accurate information than surveys of all participants, because the interviewer can prompt the interviewee and clarify responses that don’t make sense (see the CAP Project Template for Cost Analysis Interview Protocol under Collecting and Analyzing Cost Data). In our experience, participants enjoy interviews about how they spend their time more than trying to enter time estimates into restricted survey questions. There is also good precedent for collecting time use through interviews: the American Time Use Survey is administered by trained interviewers who follow a scripted protocol lasting about 20 minutes.
Does it improve accuracy to collect time use in hours or as a percentage of total time?
Both methods of collecting time use can lead to less-than-useful estimates, like the teacher whose percentages of time on various activities added up to 233%, or the coach who miraculously spent 200 hours training teachers in one week. Either way, always be clear about the relevant time period. For example, “Over the last 7 days, how many hours did you spend…” or “Of the 40 hours you worked last week, what percentage was spent on…” Mutually exclusive multiple-choice answers can also help ensure reasonable responses. For example, the answer options could be “no time; less than an hour; 1-2 hours; 3-5 hours; more than 5 hours.” Simple automated checks, sketched below, can flag the remaining implausible responses for follow-up.
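A minimal sketch of such checks follows, assuming one dictionary per respondent; the field names (pct_* and hours_per_week) are hypothetical placeholders rather than any instrument’s actual variables.

```python
# Minimal sketch of plausibility checks on time-use responses. Field names
# (pct_* and hours_per_week) are hypothetical placeholders.
MAX_WEEKLY_HOURS = 60  # generous cap on hours one person can plausibly work in a week

def flag_response(resp: dict) -> list:
    """Return reasons a response needs follow-up; an empty list means it looks plausible."""
    flags = []
    pct_total = sum(v for k, v in resp.items() if k.startswith("pct_"))
    if abs(pct_total - 100) > 1:  # percentage-of-time items should sum to ~100%
        flags.append(f"class-time percentages sum to {pct_total}%")
    hours = resp.get("hours_per_week", 0)
    if not 0 <= hours <= MAX_WEEKLY_HOURS:
        flags.append(f"{hours} hours in a single week is implausible")
    return flags

# The 233% teacher and the 200-hour coach from the examples above both get flagged.
print(flag_response({"pct_textbook": 80, "pct_published": 90, "pct_own": 40,
                     "pct_tech": 23, "hours_per_week": 10}))
print(flag_response({"pct_textbook": 100, "hours_per_week": 200}))
```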
What about other ingredients besides time?
Ingredients such as materials and facilities usually represent a smaller share of total costs for educational interventions and are often more stable over time (for example, the number of hours a teacher spends preparing to deliver an intervention may fluctuate from week to week, but classrooms tend to be available for a consistent amount of time each week), so the burden of gathering data on these resources is often lower. Once or twice per year, you can add a few questions about facilities, materials and equipment, and other resources such as parental time or travel to a survey, or better yet to an interview, or better still to both. One challenge is that even though these resources may have less of an impact on bottom-line costs, they can involve quantities that are harder for participants to estimate than their own time, such as the square footage of their office. The toy tally below illustrates why personnel time nonetheless deserves most of the data collection effort.
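To make the “share of total costs” point concrete, here is a minimal sketch of the ingredients-method arithmetic: each ingredient’s quantity is multiplied by its price, then expressed as a share of the total. The quantities and prices are made-up illustrations, not PACT-L figures.

```python
# Minimal sketch of an ingredients-method tally with made-up quantities and prices.
ingredients = {
    # ingredient: (quantity, price per unit in dollars)
    "teacher time (hours)":     (120, 50.0),
    "coach time (hours)":       (20, 60.0),
    "printed materials (sets)": (30, 8.0),
    "classroom space (hours)":  (120, 5.0),
}

total = sum(qty * price for qty, price in ingredients.values())
for name, (qty, price) in ingredients.items():
    cost = qty * price
    print(f"{name:26s} ${cost:8,.2f}  ({cost / total:5.1%} of total)")
print(f"{'total':26s} ${total:8,.2f}")
```

Even in this toy tally, personnel time accounts for roughly 90% of the total cost, which is why it warrants the bulk of the data collection effort.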
If you have additional questions about collecting data for your own cost analysis and would like free technical assistance from the IES-funded CAP Project, submit a request here. The CAP Project team is always game for a new challenge and happy to help other researchers brainstorm data collection strategies that would be appropriate for your analysis.
Robert D. Shand is an Assistant Professor in the School of Education at American University.
Iliana Brodziak is a Senior Research Analyst at the American Institutes for Research.