Spring 2009. Elizabeth M. King and Jere R. Behrman.

Timing and duration of exposure are relevant for all evaluations but particularly so for the evaluation of social programs

Impact evaluations often ignore the importance of timing and duration: how long after a program has been launched one should wait before evaluating it; how long treatment groups should be exposed to a program before they can be expected to benefit from it, either partially or fully; and how to take account of the heterogeneity in impact that is related to the age of the beneficiary and the duration of exposure. The true impact of a program may not be immediate or constant over time, yet many evaluations treat interventions as if they were instantaneous and predictable changes in conditions, with effects that are equal across treatment groups. Whether the treatment involves immunization or a more process-oriented program such as community organization, there is often no consideration of the possibility that effects differ with variations in program exposure.

A new article by King and Behrman argues that timing and duration of exposure are relevant for all evaluations but particularly so for the evaluation of social programs. The reason is that these programs often involve changes in the behaviors of both service providers and service users. If one evaluates too early, one risks finding only partial or no impact; if one waits too long, one risks a loss of donor and public support for the program or the scaling up of a badly designed program.

There are many sources of variation in timing and duration of exposure that can affect program impact. Organizational factors affect the leads and lags in program implementation. For example, a program requiring material inputs (such as textbooks or medicines) relies on the arrival of those inputs in the program areas, and the timing of the procurement of the inputs by the central program office is not necessarily an accurate indicator of when those inputs reach their intended destinations.

Impact may be underestimated if a program is evaluated before it has run its course. For example, trainees who attend only part of a training program are likely to benefit from it less than those who complete it. Moreover, some intended beneficiaries of a program may be slow to accept it, or may not do so until after they have learned about its benefits, such as by observing the gains to early takers.

In some cases estimates of the impact of a social program after its completion could still understate its full impact for reasons that are external to the program design. One such reason is that the program might yield additional and unintended outcomes in the long run. For example, a microfinance project may not only provide women employment and income but also improve the future status of their daughters.

Another reason is that spillover effects, which pertain more to compliance than to timing, can appear and intensify with time. Control groups or groups other than the intended beneficiaries might find a way of obtaining the treatment, or they might be affected simply by learning about the existence of the program, possibly because of expectations that the program will be expanded to their area.

There may be heterogeneous responses to programs that are related to the age of the beneficiary. For example, early childhood development programs, such as infant and child feeding programs, target children soon after birth because epidemiological and nutritional evidence indicates that a significant part of a child's physical and cognitive development occurs before age three, and so the returns to the programs are highest at these ages. Besides timing, the duration of exposure during this critical age range also matters: not only whether a child receives the program before age three but also whether the child does so for much of the interval between birth and age three.
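
One way to make this concrete (a hypothetical specification, not one taken from the article) is to let the estimated impact depend both on an indicator for any exposure before age three and on the length of that exposure:

    Y_i = \alpha + \beta_1 T_i + \beta_2 (T_i \times D_i) + X_i'\gamma + \varepsilon_i

Here T_i indicates exposure to the program before age three, D_i measures the months of exposure between birth and age three, and X_i collects child and household controls; \beta_1 captures the effect of any early exposure and \beta_2 the additional effect of longer exposure within the critical window.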

What can be done to incorporate timing and duration effects in program evaluations?

First, improve the quality of program data, especially the administrative records about the design and implementation details of a program.

Second, choose the timing of the evaluation with the time path of program impacts in mind. The learning process for program operators or beneficiaries could produce a curve showing impact increasing over time, while a Hawthorne effect could show a steep early rise in impact that is not sustained. Examining long-term impacts could also point to valuable lessons about the diffusion of good practices over time.
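
As a rough illustration (a sketch with made-up impact paths, not results from the article), the following Python snippet contrasts the impact an evaluator would measure at different survey dates under a gradual learning curve and under an early Hawthorne-type spike that fades:

    import math

    # Months since program launch at which an evaluation survey might be fielded.
    for t in range(0, 37, 6):
        # Impact builds gradually as providers and beneficiaries learn (assumed path).
        learning = 0.30 * (1 - math.exp(-t / 12))
        # Early spike that decays toward a modest sustained level (assumed path).
        hawthorne = 0.25 * math.exp(-t / 6) + 0.10
        print(f"month {t:2d}: learning-curve impact {learning:.2f}, "
              f"Hawthorne-type impact {hawthorne:.2f}")

An evaluation at month 6 and one at month 36 would tell very different stories about the same program, which is the sense in which the survey date itself is a design choice.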

Third, apply an appropriate evaluation method that takes into account timing and duration effects. In pilot programs it is possible to explore the time path of the impact by allocating treatment groups to different lengths of exposure in a randomized way, thus yielding operational lessons about program design.
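
A minimal sketch of such a design, using simulated data and invented effect sizes rather than anything from the article, would randomize units across exposure lengths and compare mean outcomes in each arm with the zero-exposure control arm:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4000
    # Randomly assign each unit to 0, 6, 12, or 18 months of program exposure.
    exposure = rng.choice([0, 6, 12, 18], size=n)
    # Assumed data-generating process: impact grows with exposure at a diminishing rate.
    outcome = 1.0 + 0.05 * exposure - 0.001 * exposure**2 + rng.normal(0, 0.5, size=n)

    control_mean = outcome[exposure == 0].mean()
    for d in (6, 12, 18):
        estimate = outcome[exposure == d].mean() - control_mean
        print(f"{d:2d} months of exposure: estimated impact {estimate:.3f}")

Tracing the estimated impact across the arms gives an empirical time path of the treatment effect and, with it, a sense of the minimum exposure a scaled-up program would need to guarantee.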

Elizabeth M. King and Jere R. Behrman. Forthcoming. "Timing and Duration of Exposure in Evaluations of Social Programs." World Bank Research Observer. Also available at http://wbro.oxfordjournals.org/ under "advance access."