
Stage 5: evaluating net outcome


The sequence of stages discussed so far precedes the evaluation of net outcome, and might lead to an early decision that further evaluation is unnecessary. This would be the case if, for example, the programme has been shown to have been inadequately implemented (e.g., in terms of delivering the intended amounts of goods and services to the targeted population), or if examination of the programme plan shows the programme objectives to be unattainable. The decision on whether or not to proceed to evaluate the net outcome of a programme depends on whether it is necessary to do so and whether it can be done. (See TABLE 2.5. STAGE 5: Evaluating Net Outcome.)

TABLE 2.5. STAGE 5: Evaluating Net Outcome

A. If data estimating gross outcome are available:

Data needed on:

  1. varying programme delivery
  2. possible confounding factors (e.g., socio-economic status)

Analysis:

  1. cross-tabulate by these factors and compare groups
  2. control statistically for confounding and correlate with programme delivery using multiple regression

This may give more plausible inferences on the association of programme with outcome (i.e., estimate net outcome). If these data are not available, or more certainty is needed, and surveys are considered worthwhile, go to B. If not: STOP.

B. If survey to be carried out:

C. Evaluate estimates of net outcome

This may give additional plausibility to inferences on association of programme with outcome.

At this stage, an estimate of the gross outcome will be available. If further information is required concerning whether observed changes are likely to be due to the programme (i.e., moving from gross to net outcome), there are several possibilities.

These have been introduced in chapter 1.

  1. If there is information on varying levels of programme delivery, associations of levels of delivery with outcome can be investigated statistically.
  2. If, as well as this, there are data on possible factors confounding the relationship between programme delivery and outcome (e.g., socio-economic status), then these confounding variables can also be taken into account.

In the first place, this can be done by cross-tabulating, e.g., for varying programme delivery and socio-economic status. Secondly, and particularly if a number of possible confounding variables are measured, multiple regression analyses (correlational analyses) can be used to take account of several confounding variables simultaneously, as well as programme delivery. In this case, one is investigating the effect of programme delivery controlling, for example, for socio-economic status. The sign, significance, and magnitude of the coefficients for variables representing programme delivery are estimated in relation to outcome, with confounding variables in the equation.
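As a minimal illustration of the second approach, the regression of outcome on programme delivery with a confounder in the equation can be sketched as follows. All data and variable names here are hypothetical, and ordinary least squares is fitted directly from the normal equations so that the example needs no statistical library:

```python
# Sketch (hypothetical data): net association of programme delivery with an
# outcome, controlling for one confounder (a socio-economic score), using
# ordinary least squares fitted via the normal equations. Pure Python.

def solve_linear_system(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(rows, y):
    """Fit y = b0 + b1*x1 + b2*x2 + ... by least squares."""
    X = [[1.0] + list(r) for r in rows]
    k = len(X[0])
    XtX = [[sum(X[i][a] * X[i][c] for i in range(len(X))) for c in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(k)]
    return solve_linear_system(XtX, Xty)

# Illustrative records: (programme delivery, socio-economic score) -> outcome
data = [(0, 1), (1, 1), (2, 2), (3, 2), (4, 3), (5, 3), (6, 4), (7, 5)]
outcome = [2.0 + 0.5 * d + 1.0 * s for d, s in data]  # synthetic, exact by construction

intercept, b_delivery, b_ses = ols(data, outcome)
print(round(b_delivery, 3))  # coefficient for delivery with SES held constant
```

In a real analysis one would of course also examine the significance of the delivery coefficient, not only its sign and magnitude.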

The next step would be to control for confounding employing various evaluation designs (see table 1.3). This may sometimes be done by retrospective matching of a treatment and control group, but is more effective when designed into the evaluation before the intervention is initiated. The plausibility of the conclusions depends substantially on the choice of design, and can be increased if the effects of confounding variables are properly controlled for by either design or analysis or both. Two possible designs, which have been referred to in chapter 1 and which may be suitable for such situations, are described below (see Casley and Lury (8), and table 1.3).

 

Interrupted Time-Series Design

When a without-programme comparison group is not available, comparisons can be made with outcome data on the treatment group before the treatment begins. This usually requires a series of observations before and after treatment. If the effect of the treatment is sufficiently "sharp" - giving a rapid change in values of the outcome variables - this may provide evidence for an effect of the programme.

This design is not always possible for the evaluation of nutritional effects of programmes, since time-series data on the nutritional status of a population before the programme may not exist, and post-intervention collection of time-series data may require a long period of time and be costly. If such data are available, however, regression techniques can be employed to determine whether the intervention significantly changed the trend in the nutritional status of the population, and in which direction. In this case, a variable representing time is introduced into the equation, and a variable representing programme delivery is analysed with respect to outcome, again taking account of other confounding variables.
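A minimal sketch of the underlying idea, using hypothetical quarterly data: fit the pre-intervention trend, project it into the post-intervention period, and examine the mean deviation of observed values from that projection. (A fuller analysis would put time, an intervention indicator, and confounders into a single regression, as described above.)

```python
# Sketch (hypothetical data): a simple interrupted time-series check.
# Fit a linear trend to pre-intervention observations, project it forward,
# and measure the mean deviation of post-intervention values from it.

def linear_trend(ts, ys):
    """Least-squares intercept and slope for y ~ a + b*t."""
    n = len(ts)
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    b = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
         / sum((t - t_mean) ** 2 for t in ts))
    return y_mean - b * t_mean, b

# Illustrative series: % children underweight, quarterly; programme starts at t=6
pre_t,  pre_y  = [0, 1, 2, 3, 4, 5],  [40.0, 39.5, 39.0, 38.5, 38.0, 37.5]
post_t, post_y = [6, 7, 8, 9, 10, 11], [35.0, 34.5, 34.0, 33.5, 33.0, 32.5]

a, b = linear_trend(pre_t, pre_y)
deviation = sum(y - (a + b * t) for t, y in zip(post_t, post_y)) / len(post_t)
print(round(deviation, 2))  # mean shift below the pre-programme trend
```

A sharp, consistent deviation of this kind is what makes the "interrupted" pattern interpretable; a gradual drift would be much harder to attribute to the programme.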

 

Non-equivalent Control Group Design

When having a comparison group and a pretest is feasible, but randomization is not, a non-equivalent control group design may be used (see table 1.3.). Likely confounding variables should first be identified, measured, and controlled for statistically in the analysis. Analysis can be done through matching or statistical control or both.

"Matching" seeks to identify the major confounding variables and constructs treatment and comparison groups such that they would resemble each other as closely as possible on the matching criteria. Matching can be attempted either before the treatment begins or after it has ended and the outcomes have been measured. The success of this approach depends on the extent to which the confounding variables correlated with the outcome can be identified. The difficulty with matching is that, first, it is not always evident what these major confounding variables are; and second, there are likely to be quite a number of them. This complicates the matching process.

Statistical control to estimate net outcome tries to isolate the impact of the intervention by compensating for the differences that may exist between the treatment and comparison groups. The methods include simple techniques of standardization and stratification, and correlational analyses. All of these statistical methods are particular applications of various regression techniques (see Judd and Kenny (9) and Anderson et al. (10)).
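Direct standardization, one of the simpler techniques mentioned, can be sketched with hypothetical stratum-specific rates: each group's outcome rate is re-weighted to a common stratum distribution, so that differences in the groups' composition on the confounder no longer drive the comparison.

```python
# Sketch (hypothetical data): direct standardization. Malnutrition rates by
# socio-economic stratum are re-weighted to a common (combined) stratum
# distribution in both the treatment and the comparison group.

# per group and stratum: (malnutrition rate, number of children)
treatment  = {"low": (0.30, 400), "high": (0.10, 100)}
comparison = {"low": (0.40, 100), "high": (0.15, 400)}

def standardized_rate(group, weights):
    """Weighted average of stratum rates using a common set of weights."""
    total = sum(weights.values())
    return sum(rate * weights[s] for s, (rate, _n) in group.items()) / total

# common weights: combined stratum sizes across both groups
weights = {s: treatment[s][1] + comparison[s][1] for s in treatment}

t_std = standardized_rate(treatment, weights)
c_std = standardized_rate(comparison, weights)
print(round(t_std, 3), round(c_std, 3))
```

Note that the crude (unstandardized) rates would mostly reflect the groups' different socio-economic composition; the standardized rates compare like with like.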

Much of the data required will be derived from sample surveys. The variables include indicators of outcome (e.g., nutritional status) and major potential confounding variables (e.g., socio-economic status). Time-series data are preferable to cross-sectional data, although often only the latter can feasibly be collected. Some considerations in designing the survey include sampling, stratification, and confounding variables.

Sampling: The sample should, if feasible, be randomly selected, so that the probability of inclusion of each household or individual in the sample is known. This allows inferences to be made concerning the population from which the sample is drawn. Considerations for deciding sample size are given in chapter 1.

Stratification: Participants and non-participants should be sampled separately if they are clearly distinguishable. If not, post-stratification is used. Major potential confounding variables can also be used for stratification - e.g., geographically or by socio-economic status.
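A sketch of proportional stratified sampling with hypothetical household lists (participants sampled separately from non-participants, as above; the stratum names and sizes are illustrative assumptions):

```python
# Sketch (hypothetical data): stratified random sampling, allocating the
# sample proportionally to stratum size so that each stratum (here,
# participant vs. non-participant households) is represented.

import random

def stratified_sample(strata, n_total, seed=0):
    """Draw a proportionally allocated simple random sample per stratum."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    population = sum(len(units) for units in strata.values())
    sample = {}
    for name, units in strata.items():
        # rounding can make the totals differ slightly from n_total in general
        n = round(n_total * len(units) / population)
        sample[name] = rng.sample(units, n)
    return sample

strata = {
    "participants":     [f"p{i}" for i in range(60)],
    "non_participants": [f"n{i}" for i in range(140)],
}
sample = stratified_sample(strata, n_total=20)
print({name: len(units) for name, units in sample.items()})
```

Because each household's inclusion probability is known, estimates from such a sample can be generalized to the population, as the sampling paragraph above requires.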

Confounding variables: Major potential confounding variables are selected on the basis of past investigations and a priori knowledge. Examples are: socio-economic status, e.g., income, wealth indicators; community variables such as access to services, ecological conditions, water supply and sanitation, participation in other programmes.

A large body of literature exists on such methods for social programme evaluation. These evaluations, however, are generally in the nature of evaluative research, rather than operational programme evaluations, and will not be further expanded upon here.


Stage 6: move to a built-in evaluation


Many of the difficulties discussed here may be mitigated by building an evaluation procedure into the implementation of the project. In some cases, a decision to do so could be an important outcome of the sort of evaluation described here, which will often be "mid-term." Some advantages of doing this are that:

- baseline data from the mid-term evaluation will be available
- time-series data can be organized
- adaptation of the programme can be much more usefully made as soon as the need becomes apparent (rather than complaining about what should have been done some time previously as in many post facto evaluations).

One basic approach to on-going evaluation has been set out in the context of nutritional surveillance, which we have referred to as "adequacy evaluation" (see Mason et al. (4)). This is intended mainly for the use of programme managers. Adequacy evaluation covers both process and outcome. It essentially addresses two questions:

- Is the programme being delivered as planned to the intended target group? (i.e. the same questions as for process evaluation).
- Is the (gross) outcome acceptable?

Answers to both these questions lead directly to decisions on programme implementation. A negative answer to the first question leads to reexamination of the programme organization and management. A negative answer to the second question should lead to further investigation as to why the programme is apparently failing to meet its objectives in terms of effects on the population.

There are two requirements for this adequacy evaluation. First, a clear and quantified definition of target groups is needed. Second, there needs to be a definition of adequacy, which involves both defining the units in which outcome is to be measured and setting levels of these units that will be considered adequate. It is important to note that from adequacy evaluation it should be possible to derive costs relative to activity, and costs relative to gross outcome, either for the programme target group or in the population as a whole. It will not be possible to derive a true estimate of cost-effectiveness, meaning cost per unit of net outcome due to the project, because only gross outcome is assessed.
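The two adequacy questions, and the cost figures that can be derived alongside them, can be sketched with hypothetical programme figures. All numbers and threshold levels below are illustrative assumptions, not values from the text:

```python
# Sketch (hypothetical figures): the two adequacy-evaluation questions as
# simple checks, plus cost per unit of activity and per unit of gross outcome.

target_households   = 1000     # quantified target group (assumed)
reached_households  = 820
planned_rations     = 12000    # planned delivery of goods/services (assumed)
delivered_rations   = 11400
programme_cost      = 57000.0  # illustrative currency units
baseline_prevalence = 0.40     # underweight prevalence before the programme
current_prevalence  = 0.34     # underweight prevalence now (gross outcome)
adequate_coverage   = 0.80     # adequacy levels set in the programme plan
adequate_reduction  = 0.05

coverage = reached_households / target_households
delivery_ratio = delivered_rations / planned_rations
gross_reduction = baseline_prevalence - current_prevalence

# Question 1: is the programme delivered as planned to the target group?
delivered_as_planned = coverage >= adequate_coverage and delivery_ratio >= 0.9
# Question 2: is the gross outcome acceptable?
outcome_acceptable = gross_reduction >= adequate_reduction

cost_per_ration = programme_cost / delivered_rations          # cost per activity
cost_per_point_reduction = programme_cost / (gross_reduction * 100)  # per gross outcome

print(delivered_as_planned, outcome_acceptable)  # True True
```

As the text stresses, the last figure is cost per unit of gross outcome only; without a control for confounding it is not a true cost-effectiveness estimate.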

There are few examples of successful and continuing evaluations of outcome, and even good monitoring of programme process is not common. Nonetheless, experience is being gained, and serious attempts to put forward and apply practical approaches are being made. For example, a useful handbook on monitoring and evaluation of agricultural and rural development projects has recently been published by the World Bank (8), and chapter 14 of this publication is devoted to how to build on-going evaluation systems into nutrition programmes. Further progress in this area depends on allocation of the necessary funds, and on the political will to evaluate.


References


  1. H.W. Riecken, "Practice and Problems of Evaluation: A Conference Synthesis," in R.E. Klein et al., eds., Evaluating the Impact of Nutrition and Health Programs (Plenum Press, New York, 1979), pp. 363-386.
  2. D.R. Gwatkin, J.R. Wilcox, and J.D. Wray, Can Health and Nutrition Interventions Make a Difference?, ODC Monograph No. 13 (Overseas Development Council, Washington, D.C., 1980).
  3. I. Beghin/FAO, "Selection of Specific Nutritional Components for Agricultural and Rural Development Projects" (Nutrition Unit, Institute of Tropical Medicine, Antwerp, Belgium, 1980, mimeographed).
  4. J.B. Mason, J.-P. Habicht, H. Tabatabai, and V. Valverde, Nutritional Surveillance (Cornell University, Ithaca, New York, in press, 1982).
  5. J.-P. Habicht and W.P. Butz, "Measurement of Health and Nutrition Effects of Large-Scale Nutrition Intervention Projects," in R.E. Klein et al., eds., Evaluating the Impact of Nutrition and Health Programs (Plenum Press, New York, 1979).
  6. W.D. Drake, R.I. Miller, and M. Humphrey, "Final Report: Analysis of Community-Level Nutrition Programs," Project on Analysis of Community-Level Nutrition Programs, Vol. I (USAID Office of Nutrition, Washington, D.C., 1980).
  7. D. Chernichovsky, "The Economic Theory of the Household and Impact Measurement of Nutrition and Related Health Programs," in R.E. Klein et al., eds., Evaluating the Impact of Nutrition and Health Programs (Plenum Press, New York, 1979).
  8. D. Casley and D. Lury, A Handbook on Monitoring and Evaluation of Agricultural and Rural Development Projects (Johns Hopkins Press, Baltimore, 1982).
  9. C.M. Judd and D.A. Kenny, Estimating the Effects of Social Interventions (Cambridge University Press, Cambridge, 1981).
  10. S. Anderson, A. Auquier, W.W. Hauck, D. Oakes, W. Vandaele, and H.I. Weisberg, Statistical Methods for Comparative Studies: Techniques for Bias Reduction (John Wiley & Sons, New York, 1980).

