GENERAL MODEL OF EVALUATION PROCEDURE

The general model envisaged for the built-in routine evaluation procedure is described here as a point of reference. (This is partly based on procedures discussed at a UNICEF/Cornell workshop on Nutritional Surveillance in Eastern and Southern Africa in May 1982 [8] and on concepts of hierarchical/non-hierarchical information systems given in a background paper for that meeting [9].)

A representation of the procedure (or system) for routine evaluation is given in figure 1. The key concept is to identify the points in the administrative structure at which management decisions are made (stars in the figure), what these decisions are, and the minimum information needed to make them. High priority is given in the JNSP to community participation in running the programme activities. This means that information is needed in villages, for use in villages. Much of this information can be informal. With suitable summarization, it also provides the data needed at the district level. Again, summarized data from the district level should be useful centrally. Further steps in the administrative hierarchy that are omitted here for simplicity (e.g. provinces) can be fitted in.

If possible, the system should be set up in such a way that all (or most) of the information passed from one level of administration to another has already been used at the less aggregated level. In fact, at the individual level the primary information is often collected anyway, for decisions on diagnosis and treatment either at home or during clinic visits, as in the example in figure 2. The system is illustrated using the operational objectives of home visits by village health workers targeted to households with underweight children and the establishment of village health committees (VHCs), with improvement in nutritional status as the impact objective. The village-level systems can operate autonomously while passing information on to the district level.

The primary source of information in the illustration is household visits. Households are visited by volunteer health workers at regular intervals, and in this example children are weighed and examined for certain illnesses. The weights and symptoms, where applicable, are recorded, and a decision is made by the health worker as to whether the child needs to be referred to the clinic or whether other action is needed. At the same time other services may be rendered, for example supply of oral rehydration salts, education, etc. The health workers may then periodically summarize the weight data to produce an assessment of village progress. A second source of data may be from the clinic itself, where children may be weighed and their health assessed for the usual reasons for diagnosis and treatment.

FIG. 1. System for Routine Evaluation (Information Flow and Decision-Making)

FIG. 2. An Example of the System for Routine Evaluation at the Local Level Operating through Home Visits by Village Health Workers and Diagnosis of Patients at the Clinic (Symbols as in fig. 1)

Again, these data may be summarized and periodically used to assess progress in the village. If the organization at the village level is going ahead, a VHC or development committee etc. may be established. The periodic reports based on home visits and/or clinic information should be useful to the VHC for its own planning. This is likely to involve the use of its own resources and occasionally serve as a basis for requests for other inputs from the district level. The types of simple data tabulations likely to be suitable are given later, under "The Design Document," in relation to this example.

Assessments such as these at the village level provide the raw data for evaluation at the district level. The reports should be compiled over a suitable time period and forwarded to the district office. If they have already been prepared for use in the villages, the additional work involved should be minimal and the main motivational problem in compiling data already overcome. At the district office the information from the programme villages within the district can again be summarized for use by the relevant district organizational body. Decisions at this level are likely to involve allocation of resources under the jurisdiction of the district and, on occasion, support for requests for additional resources from the central administrative office. Provided decisions really are made at the district level, there should be adequate motivation for compiling the necessary data coming from the villages. Examples of tabulations for use in district offices derived from this illustration are also given in "The Design Document." Finally, a further summarization of data from the district level should provide the essential information at the central level.

STARTING A PROCESS FOR DESIGNING THE EVALUATION

Successful evaluation will need co-operation from a range of people and institutions. The same applies to designing a workable evaluation system. There is every advantage in bringing those who will be involved in running the system into its design. This again depends on the programme planning and should be part of the planning process: if the planning includes community involvement (as is intended), then local consultations on the programme plan should include the evaluation element. If the programme is intended to evolve through decentralization of decision-making, the evaluation component will (a) be needed to guide this evolution, and (b) itself need to adapt as the programme develops. This evolution of the programme can refer to targeting of project activities or to modification of these activities (e.g. changing education methods), or indeed to replacing activities found to be ineffective or unfeasible with others (e.g. changing from distribution of supplementary food to promotion of home food production).

Those who should be involved in the process of designing the evaluation include the programme planners, the programme management, and representatives of the communities and services involved; in addition, some special expertise in health and nutrition data and their analysis and in evaluation procedures will be valuable. Among the early steps recommended is holding meetings of such groups to begin planning.

In this context, a number of clarifications are often needed. The most important concerns who does the evaluation, for whom, and for what purposes. The routine evaluation component, which should be the major effort, is done by those carrying out the programme itself, for their own use, and for those supporting them higher up in the administrative structure. The purpose is to help the programme operate, to identify problems in the planning and implementation of the programme, and to help timely correction of difficulties or problems found. Getting a clear understanding of this view of evaluation is essential, because otherwise co-operation will be less than enthusiastic. It should not be seen, as it so often is, as outsiders coming in and trying to find fault.

The impact evaluation component, if it is included, should in fact have a similar purpose in the long run: to work out what is effective and what is not, so that resources and efforts can be progressively shifted towards the effective interventions. It may be pointed out that the present state of knowledge is woefully inadequate for choosing the best interventions with confidence, both in general and certainly in the specific circumstances of an individual country or area. This discussion needs to be held at the central level, to encourage support from the policy-makers and from the institutions with the capabilities needed to help with the impact evaluation.

The first steps therefore aim to bring in the crucial people and institutions and to gain their support, to clear up any misunderstandings, and to provide reassurance where necessary that evaluation is not threatening but, on the contrary, an integral and essential part of the programme. Some discussion of the general model of an evaluation system may be appropriate, as mentioned earlier under "General Model of Evaluation Procedure." This can lead on to considering how such a model can feasibly be adapted to local circumstances.

The next step is to start to decide in concrete terms what to do. The objective of preparing the design document provides a framework. The group of people/institutions brought together to discuss the general plan could now be turned into a working group, or at least into an overseeing group or steering committee to supervise the planning of the evaluation. With luck, the overall planning of the programme may be proceeding on similar lines, in which case the evaluation planners can be a sub-group in the overall planning process.

The logic of the design document, for routine evaluation, is to

a. review the programme objectives, operational and impact, at different administrative levels;
b. identify who needs what information to make what decisions on programme management and implementation;
c. specify the information output needed, drawing up dummy (blank) tables;
d. specify possible data sources, reporting formats, etc.;
e. return through this process from the beginning to work towards a feasible plan of the evaluation (e.g., there is no point in specifying a certain operational or impact objective if progress in meeting it cannot be assessed; equally, there is no point in specifying an impact measure that is not known to be or is not likely to be responsive to the intervention).

The work will involve field visits, observing current work practices (including reporting), discussing information used with those currently making decisions on existing programmes, drafting possible reporting formats, and so on. Assignment of the work may be to members of the working group themselves, or additional manpower may be hired for the task. The work may take some time. In the context of a consultant's visit it may be decided that the best outcome of the visit will be to begin this process, presumably agreeing on some deadline for completion of the first draft of the design for review. This means that a work plan for planning the evaluation may need to be set up and agreed on. Similarly, funds will need to be allocated. The following section gives details of what may be needed in the format of the proposed design document.

THE DESIGN DOCUMENT

The outcome of the preliminary assessment of evaluation needs is a tentative design for the evaluation procedure. As emphasized above, this preliminary assessment should itself provide a momentum for getting the evaluation going-it should involve the people and institutions who will run the evaluation, and the design should be the product of their thinking. It is useful to aim at a specific product that lays out what needs to be done and can perhaps be the beginning of an operating manual for the evaluation procedure itself. This section gives some suggestions and illustrations for this product, referred to (for want of a better term) as the "design document." This document should not exist by itself but should be part of the programme plan, as one section, annex, etc.

The necessity of linkage to planning is absolute here. The design document has to include the programme objectives, quantified (albeit often on the basis of guesswork). The process of coming up with an evaluation design may in fact contribute to clarifying the programme objectives. Yet again, the procedure is seen as iterative. Routine evaluation and impact evaluation are treated separately for convenience here, although they will be linked in practice. A possible outline for the design document is as follows:

Routine evaluation

- Statement of the programme's operational and impact objectives, with disaggregation of these to the smallest unit at which evaluation data will be used and decisions made (usually intended to be village level, if community participation is to be real, and/or health centre or clinic).
- Identification of decision points, and decisions to be made (for example, village health committees, district programme offices, central programme management).
- Information needed to support these decisions, and dummy tables.
- Information sources, reporting forms, tallying and summary forms, reporting schedule.
- Steps needed to set up evaluation procedure.

Impact evaluation

- Comparison groups.
- Confounding factors.
- Analytical capability.

Routine Evaluation

1. Programme Objectives

Objectives may be set by aggregating village objectives, or disaggregating central and district objectives, or a combination of these. The results should give details, for the total programme area (which could be national), district, and village, on activities, targeting, organization, and outcome. A district statement of objectives (following the usual illustration) might be on the following lines: "Thirty of the 100 villages in the district are targeted. In these, village health committees (VHCs) will be set up, and one village health worker (VHW) per village will be trained. The VHW will visit all households (average 200 per village) every year, and households with malnourished children every month (60 per month). Monthly visits will include education, oral rehydration salt supply, and referral as needed. A reduction in malnutrition of two cases per 100 per year is aimed at, from 30 per cent prevalence at the beginning of the first year to 20 per cent prevalence after five years." (Note: If the initial prevalence is unknown, the first evaluation results will do.)
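As a cross-check, the figures in such a statement should be internally consistent. The following sketch (Python, using only the illustrative numbers above) verifies that the monthly visit load and the annual reduction implied by the five-year target hang together:

```python
# Consistency check for the illustrative district objectives above.
# All figures are the worked examples from the text, not real data.

households_per_village = 200     # average households per targeted village
initial_prevalence_pct = 30      # % of children < 80% weight for age, year 0
target_prevalence_pct = 20       # % aimed at after five years
years = 5

# Monthly home visits per village: households with malnourished children
# = 200 x 30% = 60, matching the "60 per month" in the plan.
monthly_visits = households_per_village * initial_prevalence_pct // 100
print(monthly_visits)            # -> 60

# Implied outcome objective: (30 - 20) / 5 = 2 cases per 100 per year.
annual_reduction = (initial_prevalence_pct - target_prevalence_pct) / years
print(annual_reduction)          # -> 2.0
```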

A village plan could be the village-level equivalent of the above. The overall programme plan would be the aggregation of the district plans.

The plans have obvious implications for supplies (e.g. oral rehydration salts), training, educational materials, clinic support for referral, and so on. Not all these implications will be referred to below. We will use home visiting and nutritional outcomes as the examples. In reality, additional or alternative programme activities will be included district by district or village by village. However, similar principles will apply.

2. Identification of Decision Points and Decisions to Be Made

Decision points and decisions to be made will usually be defined by activity or groups of activities, possibly arranged to have the same target group.

At the village level, the programme may be managed by a village health committee. It will be necessary to define what sort of decisions they can make for disposal of their own resources (e.g. the work of the village health worker and supplies provided by the district office). They may be concerned with ensuring that home visits are carried out with sufficient frequency and adequately reach the intended malnourished children. If this is not happening, they may wish to tighten up on supervision. If it is happening, they may be concerned whether nutritional status is improving as intended. Equally, they may wish to monitor delivery of supplies from the district level and have a basis for requesting additional assistance as needed. Fairly simple information is required for this, as illustrated below.

At the district level, the relevant questions include:
- Are VHWs carrying out home visits adequately?
- Are they reaching malnourished children?
- Is the development of organization at the village level proceeding satisfactorily-e.g., are village health committees being set up, meeting, etc.?
- Are supplies, provisions, and so forth being satisfactorily delivered from the district to villagers?
- If the above are working adequately, is the intended reduction in the prevalence of malnutrition coming about?

The decisions resulting from the answers to these questions may involve supervision, further training, additional support of other types, and so on.

Finally, at the central programme management level, there is a further set of questions, as follows:
- Are districts succeeding in implementing the programme as planned in terms of overall activities?
- Are the activities reaching the targeted villages?
- Are the organizations being set up as intended?
- Are supplies, supervision, and so forth being satisfactorily delivered?
- Is the overall reduction in prevalence of malnutrition on track?

In this framework there are five types of information that are regularly important. These are:
- activity monitoring (e.g., Are the planned number of household visits being carried out?),
- targeting (e.g., Are these home visits reaching the intended households; are programmes being implemented in the intended villages?),
- organization (e.g., Are VHCs being set up?),
- logistics (e.g., Are supplies getting from districts to villages?),
- outcome (e.g., Is the prevalence of malnutrition declining?).
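These five types can usefully be mirrored in the layout of the reporting record itself. The following sketch (Python; all field names are hypothetical, not a prescribed format) shows one way a village-level monthly report might cover all five:

```python
from dataclasses import dataclass

# Hypothetical layout of a village monthly report covering the five
# information types; field names are illustrative only.
@dataclass
class VillageMonthlyReport:
    village: str
    # Activity monitoring
    home_visits_done: int
    home_visits_planned: int
    # Targeting
    malnourished_children_visited: int
    malnourished_children_total: int
    # Organization
    vhc_formed: bool
    vhc_met_this_month: bool
    # Logistics
    supply_deliveries_received: int
    supply_deliveries_expected: int
    # Outcome
    children_weighed: int
    children_under_80pct_wfa: int
```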

3. Information Needs and Dummy Tables

Examples of the information outputs that could answer the questions outlined above are given in this section. It is considered essential to reach this level of detail relatively early in designing the evaluation. Experience has shown that it is the procedure of drawing up dummy tables itself that begins to define precisely how the system might work, what problems are likely to be encountered, and so on. The examples refer to the general model shown in figures 1 and 2.

a. Village level

Activity monitoring: For the example of the VHW, the activity monitored could be number of home visits for education, provision of oral rehydration salts, and referral of sick children. The source of this information would be the VHW's own reporting. The purpose is to check that the VHW's coverage of home visits is in line with the operational objectives in the plan.

Targeting: The example is whether the VHW is successfully reaching the targeted households (e.g., those with children of less than 80 per cent weight for age). In this example, full coverage every year is assumed as a basis for village targeting. Dummy table 1 is a model of the table to work toward, with sample figures inserted. Using those figures, the coverage (proportion of malnourished children visited) = 20/30 = 67 per cent. Focusing (proportion of malnourished children in the households visited) = 20/40 = 50 per cent. The population prevalence of malnutrition = 30/100 = 30 per cent. (The concepts of coverage, focusing, etc. are given in references 3 and 7.)
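Since coverage and focusing recur in the district- and national-level tables below, a minimal sketch (Python, using the sample figures from dummy table 1) may help fix the definitions:

```python
# Coverage, focusing, and prevalence from the 2x2 layout of dummy table 1.
# Sample figures from the text: 20 malnourished children visited, 10 not
# visited, 40 households visited in all, 100 children in the village.
malnourished_visited = 20
malnourished_not_visited = 10
total_visited = 40
total_children = 100

total_malnourished = malnourished_visited + malnourished_not_visited  # 30

coverage = malnourished_visited / total_malnourished   # 20/30 = 67%
focusing = malnourished_visited / total_visited        # 20/40 = 50%
prevalence = total_malnourished / total_children       # 30/100 = 30%

print(f"coverage {coverage:.0%}, focusing {focusing:.0%}, "
      f"prevalence {prevalence:.0%}")
```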

Outcome: The outcome indicator used as an example is the prevalence of children of less than 80 per cent standard weight for age. The table should be as shown in dummy table 2. Under some circumstances it may be worth investing in vital registration within the village, e.g. as a function of the village health committee.

b. District level

Activity monitoring (e.g., of the VHWs' home visits): See dummy table 3.

Targeting:
i. Within-village targeting, for targeted villages-see dummy table 4.
ii. Between-village targeting-see dummy table 5. The programme was implemented in four of the six villages targeted (67 per cent) and in one village that was not targeted. Switching resources from the untargeted village (top right-hand cell in the table) to one of the targeted villages without the programme (lower left-hand cell) is indicated; a worked version of this calculation follows dummy table 5 below.

Dummy Table 1

              Weight for age
              < 80%    >= 80%    Total
Visited         20       20        40
Not visited     10       50        60
Total           30       70       100

Dummy Table 2

                                        Time 1*    Time 2**
No. of children < 80% W/A divided
by total no. of children***             30/100     28/100

* E.g. six months ago.
** E.g. now.
*** Or by weight gain, etc.

Dummy Table 3

Village   No. homes   No. visits   % Completion   Implementation
          visited     planned*     of plan        > 75%?**
1            40          50            80              Yes
2
.
.
.
n
Total

* Operational objective.
** Criteria for evaluating attainment of operational objective.

Dummy Table 4

Village   No. malnourished   No. malnourished   % Coverage*   Focusing > population
          targeted           reached                          prevalence?**
1               30                 15                50               Yes
2
.
.
n
Total

* Criteria needed for adequate coverage, from operational objective.
** Test criterion for evaluating whether targeting reaches malnourished preferentially.

Dummy Table 5

                                 Number of villages
                            Targeted   Not targeted   Total
Programme implemented           4            1           5
Programme not implemented       2            3           5
Total                           6            4          10
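A worked version of this between-village calculation, using the sample counts in dummy table 5, might look as follows (Python; the reallocation rule at the end simply restates the suggestion in the text):

```python
# Between-village targeting from the 2x2 counts in dummy table 5.
implemented_targeted = 4        # programme running in targeted villages
implemented_untargeted = 1      # programme running where not targeted
not_implemented_targeted = 2    # targeted villages still without programme

villages_targeted = implemented_targeted + not_implemented_targeted  # 6

# % delivery, as later used in dummy table 11: 4/6 = 67%.
delivery = implemented_targeted / villages_targeted
print(f"delivery {delivery:.0%}")

# The reallocation suggested in the text: villages in the "implemented
# but untargeted" cell are candidates for switching resources to
# targeted villages still without the programme.
if implemented_untargeted > 0 and not_implemented_targeted > 0:
    print("consider switching resources from untargeted to targeted villages")
```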

Logistics: A summary of the delivery of supervisory visits and so forth should be included here.

Organization: See dummy table 6.

Outcome (evaluated quarterly or annually):
i. Village-level outcome (assessed at the district level)-see dummy table 7.
ii. District-level outcome-see dummy table 8.

c. National level

Activity monitoring (e.g. of VHWs): See dummy table 9.

Dummy Table 6

Village   VHC formed?   VHC met?   Budget voted?   Budget spent?
1             Yes          Yes          No              No
2
.
.
n
Total

Dummy Table 7

Village   % Malnourished       Change,          Adequate?**
          Previous*    Now     cases per 100
1            30         28         -2               Yes
2            36         40         +4               No
.
.
n
Total

* E.g. six months ago.
** The change in prevalence of malnutrition regarded as adequate is the outcome objective for the village, in this case perhaps 2 cases per 100 per six months.

Dummy Table 8

Village   Present prevalence   Population   No. malnourished
1
2
.
.
n
Total                                       District prevalence*

* This should then be compared with the previous prevalence.
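Note that the district prevalence should be computed as the total number malnourished over the total population, i.e. weighted by village population, not as a simple average of village prevalences. A minimal sketch (Python, with hypothetical village figures):

```python
# District prevalence from village rows of dummy table 8: total number
# malnourished divided by total population. Figures are hypothetical.
villages = [
    # (population, number malnourished)
    (100, 30),
    (250, 50),
    (150, 60),
]

total_population = sum(pop for pop, _ in villages)        # 500
total_malnourished = sum(mal for _, mal in villages)      # 140

district_prevalence = total_malnourished / total_population
print(f"district prevalence {district_prevalence:.0%}")   # -> 28%

# Compare: the unweighted mean of the village prevalences
# (30%, 20%, 40%) would be 30%, which overweights small villages.
```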

Dummy Table 9

District          Number of villages                  Etc.
           With programme   > 75% of planned
                            implementation*
1                5                  4
2
.
.
n
Total

*Operational objectives of overall programme should define criteria for evaluating extent of implementation regarded as adequate.

Dummy Table 10

District              Number of villages                      Etc.
           Coverage > 70%*   Focusing > population
                             prevalence*
1
2
.
.
n
Total

* Need operational objectives.

Targeting:
i. Within-village targeting-see dummy table 10.
ii. Between-village targeting-see dummy table 11.

Organization: See dummy table 12.

Outcome:
i. Village-level progress in reducing malnutrition (assessed at the national level)-see dummy table 13.
ii. District-level progress in reducing malnutrition (assessed at the national level)-see dummy table 14.

Dummy Table 11

District   No. villages   No. targeted villages   % Delivery*
           targeted       receiving programme
1               6                  4                   67
2
.
.
n
Total

* Needs operational objective.

Dummy Table 12

District      Number of villages      % Implementation in
            VHC formed   VHC met      targeted villages
1               3            3               50
2
.
.
n
Total

Similar information could be tabulated for mortality data if they are available.

4. Information Sources, Reporting Formats, Etc.

There are many possible sources of data, depending on different programme activities. We have focused on administrative data as these are usually the most feasible to collect. However, household surveys, periodic village censuses, establishing village vital registration, and so on, may all be more appropriate under varying conditions. Time and space preclude discussing these here, but this aspect should be developed in future guidelines.

Here again, a crucial step in designing the system is to draft suitable forms for reporting and summarizing. For example, prevalence of malnutrition can be tallied from road-to-health charts. In Indonesia, a tallying system from road-to-health charts provides the numbers of children gaining and losing weight, and these data are aggregated progressively at each level up the administrative structure. Similar considerations apply to process data. This step depends on the outputs needed (e.g. as suggested in the previous section), and defining outputs and designing forms should be done iteratively.
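The aggregation itself is simple addition of tally counts at each level. The following sketch (Python; village names and counts are hypothetical) illustrates the principle for counts of children gaining and losing weight:

```python
from collections import Counter

# Progressive aggregation of tally counts up the administrative levels,
# in the spirit of the Indonesian road-to-health tallying described
# above. Names and figures are illustrative only.

# Village-level tallies: children gaining / losing weight this month.
village_tallies = {
    "village_1": Counter(gaining=25, losing=5),
    "village_2": Counter(gaining=18, losing=9),
}

# District level: simply sum the village tallies.
district_tally = sum(village_tallies.values(), Counter())
print(district_tally)   # -> Counter({'gaining': 43, 'losing': 14})

# The same summation applies from district to national level, so each
# level forwards only its summary counts, not the raw records.
```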

Dummy Table 13

District   No. villages with adequate reduction of malnutrition
1
2
.
.
n
Total

Dummy Table 14

District   Present prevalence   Population of        No.            Etc.
           of malnutrition      preschool children   malnourished
1
2
.
.
n
Total                                                National prevalence
                                                     of malnutrition

Where nutritional surveillance has been set up for programme management (e.g. in Costa Rica), it has been possible to actually simplify existing reporting forms. This may well be common experience and should be aimed for. Superimposing an additional reporting task on a village worker is unlikely to be well received; making the reporting simpler, by cutting out unnecessary data and streamlining the system, on the other hand, may recruit good will.

Samples of reporting and tallying forms are quite widely available (e.g. reference 12). These may provide useful guidance, although forms for each specific situation may be needed and should certainly be field-tested.

Planned reporting schedules tend to err on the side of being too frequent, at least for outcome data. For programmes such as those the JNSP may support, evaluating changes in outcome once or twice a year may be sufficient. On the other hand, data on activities, targeting, and organization are likely to be needed more frequently so that deviations in the programme can be corrected in time; monthly reporting at the village and district levels may be appropriate, depending on the local organization. The distinction between how often data are recorded (which could be daily for data derived from clinics or home visits) and how often they are summarized and reported (e.g. monthly) is obvious but should not be forgotten.
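The following sketch (Python; dates and records are hypothetical) illustrates the distinction: weighings are recorded as they happen, but only a one-line summary is reported each month:

```python
from datetime import date

# Daily records (e.g. from clinic visits or home visits) versus the
# monthly summary that is actually reported. Records are hypothetical:
# (date, child identifier, below 80% weight for age?).
daily_records = [
    (date(1984, 3, 2), "child_a", True),
    (date(1984, 3, 2), "child_b", False),
    (date(1984, 3, 15), "child_c", True),
]

# Monthly report: one summary line, not the raw records.
weighed = len(daily_records)
under_80 = sum(1 for _, _, low in daily_records if low)
print(f"March: {weighed} weighed, {under_80} below 80% W/A "
      f"({under_80 / weighed:.0%})")
```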

5. Steps Needed to Set Up the Evaluation Procedure

Once the system for routine evaluation has been outlined, the requirements for getting it running must be defined. These will depend on local circumstances and resources. The following subjects need to be covered:

- assignment of responsibilities for data collection, supervision, summarization and tabulation, interpretation, and transmission of data;
- training;
- field-testing procedures;
- provision of equipment (e.g. scales, report forms).

Impact Evaluation

The case for using some resources for evaluating impact may need to be made, since its role is often misunderstood. A number of points are important and were referred to briefly earlier. First, routine evaluation does not give any idea of impact, because the changes that would have taken place without the programme are not known; this may often mask the effect of a programme. For example, if a programme succeeds in preferentially reaching the malnourished by screening or by targeting worse-off areas, a straightforward with/without comparison at one point in time will show that those with the programme are more malnourished. This can lead to the false conclusion that the programme is ineffective. Second, effects of the programme may be masked by "noise", and more detailed study will be needed to find the effect. Third, it is important for those putting resources into the programme (government and donors) to know whether the programme (or parts of it) is having the effect hoped for, in order to replicate the successful parts in future and bring about long-term improvement. The alternative is to hope blindly for the best. In sum, the positive intention of impact evaluation must be stressed: to enable scarce resources to be used efficiently to tackle the problem.

Having made these points, suggestions are needed for the design of the impact evaluation, usually on a subset of the programme (e.g. by area). The considerations are given in some detail in Mason and Haaga (5, section 4). Attention to design at an early stage is essential. Often an institution with research capability may need to be brought in to help with the design and also with the subsequent analysis. Again, dummy outputs should be produced as part of the design. At least the following considerations need to be laid out in the design document.

Choice of Comparison Groups

In choosing groups for comparison, the object is to get comparisons of conditions with and without the programme and/or before and after the programme. Options for doing this are given in Mason and Haaga (5, section 4). These comparisons cannot be exact, and compensation for inexact matching can be made by measuring unmatched factors (e.g. socio-economic status) that are likely to be associated with the outcome to be measured (e.g. nutritional status).
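One common way of combining the with/without and before/after comparisons (not spelled out in the text above, but consistent with it) is a difference-in-differences calculation; a minimal sketch with hypothetical prevalence figures:

```python
# Difference-in-differences on prevalence of malnutrition (%), combining
# the before/after and with/without comparisons. Figures are hypothetical.
programme_before, programme_after = 30.0, 24.0      # programme areas
comparison_before, comparison_after = 28.0, 26.0    # comparison areas

change_programme = programme_after - programme_before      # -6
change_comparison = comparison_after - comparison_before   # -2

# Estimated programme effect, net of the secular trend seen in the
# comparison group: -6 - (-2) = -4 cases per 100.
effect = change_programme - change_comparison
print(f"estimated effect: {effect:+.0f} cases per 100")
```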

Confounding Factors

This refers to alternative explanations for the results obtained, which need to be taken into account. Certain types of confounding (e.g. differences in socio-economic status between comparison groups) can be allowed for to some extent in the analysis, if they are measured. Thus, appropriate variables need to be identified early so that they can be measured at the right time. Other threats to validity, such as changes going on in the overall population (e.g. from economic change), can also be adjusted for. A third important consideration concerns regression artefacts: if only selected malnourished children are considered, some of these may improve anyway (regression towards the mean); this can sometimes be allowed for in the design of the evaluation.

Analytical Capability

In contrast to routine evaluation, assessment of impact requires established analytical capabilities, often including computing facilities. In practice, this often means that a research institution should be involved. Suitable institutional arrangements should be defined and given in the design document.

REFERENCES

1. Health Programme Evaluation, WHO Health for All Series, No. 6 (WHO, Geneva, 1981).

2. D. J. Casley and D. A. Lury, A Handbook on Monitoring and Evaluation of Agriculture and Rural Development Projects (World Bank, Washington, D.C., 1981).

3. J. B. Mason, J-P. Habicht, H. Tabatabai, and V. Valverde, "Nutritional Surveillance for Programme Management and Evaluation," in J. B. Mason, J-P. Habicht, H. Tabatabai, and V. Valverde, Nutritional Surveillance (WHO, Geneva, 1984), pp. 140-175.

4. H. E. Freeman, P. H. Rossi, and S. R. Wright, Evaluating Social Projects in Developing Countries, Development Centre Studies (OECD, Paris, 1979).

5. J. B. Mason and J. G. Haaga, "Note on a Framework for Monitoring and Evaluation of UNICEF/WHO Nutrition Improvement Programmes," Cornell Nutritional Surveillance Program Working Paper Series, No. 16 (1983).

6. D. R. Gwatkin, J. W. Wilcox, and J. D. Wray, Can Health and Nutrition Interventions Make a Difference? Overseas Development Council Monograph 13 (Praeger Publishers, New York and London, 1980).

7. J. B. Mason, J-P. Habicht, and H. Tabatabai, "Basic Concepts for the Design of Evaluation Programmes during Implementation," in D. E. Sahn, R. Lockwood, and N. S. Scrimshaw, eds., Methods for the Evaluation of the Impact of Food and Nutrition Programmes (United Nations University, Tokyo, 1984), pp. 1-25.

8. UNICEF/Cornell Nutritional Surveillance Program, Report of a Workshop on Social and Nutritional Surveillance in Eastern and Southern Africa, Nairobi, Kenya, 1982.

9. K. Williams and J. Mason, "Social and Nutritional Surveillance in Eastern and Southern Africa," background paper for Workshop on Social and Nutritional Surveillance in Eastern and Southern Africa, Nairobi, Kenya, 1982.

10. D. T. Campbell and J. C. Stanley, Experimental and Quasi-experimental Designs for Research (Rand McNally, Chicago, Ill., USA, 1966).

11. T. D. Cook and D. T. Campbell, Quasi-experimentation (Houghton Mifflin, Boston, Mass., USA, 1979).

12. World Health Organization, Programme for Control of Diarrhoeal Diseases, Manual for the Planning and Evaluation of National Diarrhoeal Diseases Control Programmes (WHO/CDD/SER/81.5; WHO, Geneva, 1981).

 

