Hunger, health, and society

Nutrition intervention and evaluation: A call for reflection-in-action

William D. Drake, Roy I. Miller, and Donald A. Schon
Community Systems Foundation, Ann Arbor, Michigan, USA

Largely as a result of a comprehensive analysis of community-level nutrition programmes, an analysis supported by the Office of Nutrition of the United States Agency for International Development (1), Community Systems Foundation has become a strong advocate of a new and imaginative approach to both the conduct of health/nutrition programmes and their evaluation. Called "reflection-in-action" (R-I-A), this approach calls for the regular and rapid analysis of quantitative data generated routinely during the course of an intervention for the purpose of sharpening and/or modifying programme design elements as well as improving the quality of programme implementation. The data should include indicators of impact as well as measurements of the performance of the service delivery system. The analysis must be carried out with the aid of (or entirely by) practitioners with intimate knowledge of field conditions and, therefore, the ability to discern potential competing explanations for observed trends in the indicators, and to articulate the arguments supporting or refuting those explanations.

This paper examines some of the far-reaching policy implications of R-I-A. To set the stage, we begin by summarizing the observations and deductions that led us to formulate R-I-A as a method of intervention and a methodology for evaluation. Then we will describe the features of R-I-A as suggested by those observations and deductions. Further clarification of R-I-A is provided by an example of a partial application of the methodology in an evaluation of the P. L. 480 Title II supplementary feeding programme in Sri Lanka. The balance of the paper is devoted to a discussion of the policy implications that must be considered in conjunction with any serious effort to implement R-I-A.

THE CASE FOR R-I-A

The arguments underlying our commitment to R-I-A have been presented in considerable detail elsewhere (2). Rather than repeat those arguments in full here, we will summarize them briefly.

We assert that the social and environmental setting for the typical nutrition intervention - one that spans years - is in constant turmoil. The ever-present changes in this intervention milieu are unpredictable and often unique to a given intervention. Two distinct consequences of this non-constant setting for intervention are:

- the best method for attacking a particular problem in a community will change over time in response to changes in the local environment and/or social setting, and

- traditional methods of evaluating the impact of a particular programme - methods based on sound experimental design - will falter because of the inability to hold all the necessary factors constant and/or the inability to account for those factors known to vary over time.

Let us first consider the failure of rigorous experimental practice in nutrition evaluation. The results of an evaluation based on experimental and/or quasi-experimental design are deemed credible only if four conditions are met:

(i) Controls - Groups of matched subjects, insulated from all changes other than those due to a specified treatment, must be defined and maintained throughout the intervention.

(ii) Pre-design of experimental conditions - Factors likely to confound the analysis must be identified before the intervention begins, both to direct data collection activities and to enable subsequent statistical analysis to account for their effects.

(iii) Constancy of experimental conditions - All conditions, especially those pertaining to the delivery of service, must be kept constant throughout the intervention (except, perhaps, those identified for consideration in the subsequent statistical analysis).

(iv) Separation of practice from research - Analysts must be removed from the interveners and the intervention itself to maintain objectivity in their subsequent review of the data.

Difficulties in maintaining the first three of these conditions give rise to competing explanations for observed quantitative changes in variable factors thought to measure programme impact. For example, if a group receiving food supplements shows greater positive change in nutritional status over a year's time than the control group, but also lives in a community enjoying a better local harvest than the controls', the change in nutritional well-being might be attributed with equal confidence to the harvest or the supplement.
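
To make the harvest-versus-supplement ambiguity concrete, the following short simulation is a minimal sketch with entirely hypothetical numbers; neither the effect sizes nor the variable names come from any real study. It shows that a genuinely effective supplement and a better local harvest in the treated community can produce indistinguishable differences in outcome data.

```python
import random

random.seed(0)

def simulate_group(n, harvest_gain, supplement_gain):
    """Return one year's change in a weight-for-age score for n children.

    Each child's change is baseline noise plus any gain from a good local
    harvest and any gain from the supplement. All effect sizes are invented.
    """
    return [random.gauss(0.0, 0.5) + harvest_gain + supplement_gain
            for _ in range(n)]

# Scenario A: the supplement works; harvests are equal everywhere.
treated_a = simulate_group(200, harvest_gain=0.0, supplement_gain=0.3)
control_a = simulate_group(200, harvest_gain=0.0, supplement_gain=0.0)

# Scenario B: the supplement does nothing, but the treated community
# happened to enjoy a better harvest.
treated_b = simulate_group(200, harvest_gain=0.3, supplement_gain=0.0)
control_b = simulate_group(200, harvest_gain=0.0, supplement_gain=0.0)

mean = lambda xs: sum(xs) / len(xs)
print("Scenario A difference:", round(mean(treated_a) - mean(control_a), 2))
print("Scenario B difference:", round(mean(treated_b) - mean(control_b), 2))
# Both scenarios print a difference near +0.3: the outcome data alone
# cannot say whether the supplement or the harvest deserves the credit.
```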

Whole classes of competing explanations that arise in the absence of rigorously maintained experimental conditions have been catalogued and discussed; some classes cast doubt on the generalizability of conclusions to other contexts (3). In the nutrition field, the more prominent plausible explanations for shifts in the overall nutritional status of a population, particularly of preschoolers, include the entry and exit of subjects from the groups over time, age differences between the groups, natural phenomena such as localized droughts or disease epidemics, and economic changes unrelated to the intervention. With regard to the fourth condition defining satisfactory experimentation, failure to separate research from practice casts doubt on analytically drawn conclusions, either because of the alleged Hawthorne effect or because of the presumed bias of the analyst.

Many published evaluation studies do not fully articulate these alternative explanations. There is an unfortunate proclivity to attribute trends in impact indicators to the presence of the programme without complete exploration of other, equally plausible causes for those trends. In our review of community-level nutrition programmes and in subsequent work on the evaluation of large-scale nutrition interventions delivering service at the community level, we emphasized the search for plausible alternatives because the conditions for credible experimentation were not fully met. At best, we could point to a change in the nutritional status of a population and enumerate a variety of other factors that may have contributed to that change; for example, changes in the make-up of the target populations, inherent differences between controls and treated populations, changes in the economy or unpredictable changes in environmental factors, such as climate or water availability. In most cases, through exhaustive statistical analysis, we were able to cast doubt on the plausibility of many of the competing explanations of the observed changes, but in no case were we able to prove to our satisfaction that changes were, in fact, due exclusively to the intervention.

The inability to meet the conditions for credible experimentation is not unique to nutrition interventions - most social interventions are equally ill-suited to classical models of experimentation. However, three problems specific to nutrition intervention in the developing world contributed to the indeterminacy of our own analyses of community-level nutrition programmes:

(i) Dirty data. Practices of data collection, storage, and processing were defined and implemented with inadequate care. Therefore, the quantitative data generated during each intervention were often inaccurate or internally inconsistent (or both).

(ii) Inadequate measurement. Owing to the complex aetiology of malnutrition, it is particularly difficult to devise a measurement (or set of measurements) that will accurately reflect all aspects of the disease. Although still the best source of nutritional status measurement, anthropometry is imperfect and has certain characteristics that contribute to misleading results when reviewed using traditional analytic methodologies. Specifically, the results of most analyses of anthropometric data are, in part, dependent on the reference standard and classification scheme used in defining nutritional deficiency (a point illustrated by the sketch after this list). Also, each of the various anthropometric measurements reflects a different aspect of undernutrition and responds differently to intervention.

(iii) Inadequate data. Because we interacted with interventions from a distance, both in time and geographically, we could not generate sufficient timely, situation-specific data that would have permitted better discrimination among competing explanations for changes in nutritional status indicators.
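
The point in item (ii) about reference standards and classification schemes is easy to demonstrate. The sketch below classifies the same hypothetical weight observations two ways - as a percentage of a reference median, in the rough manner of Gomez-style grading, and as a standard-deviation score against the same reference - and the two schemes report different prevalences of undernutrition. The reference mean and standard deviation used here are invented for illustration, not taken from any published growth standard.

```python
import random

random.seed(1)

# Invented reference values for one age cohort: these stand in for a
# real growth reference and are NOT actual WHO/NCHS figures.
REF_MEDIAN_KG = 12.0
REF_SD_KG = 1.2

# Hypothetical observed weights for 500 children of that age.
weights = [random.gauss(10.8, 1.4) for _ in range(500)]

# Scheme 1: percent-of-median with a 75% cutoff (Gomez-style grading).
pct_median = [100.0 * w / REF_MEDIAN_KG for w in weights]
undernourished_pct = sum(p < 75.0 for p in pct_median)

# Scheme 2: Z-score against the reference, with a -2 SD cutoff.
z_scores = [(w - REF_MEDIAN_KG) / REF_SD_KG for w in weights]
undernourished_z = sum(z < -2.0 for z in z_scores)

print(f"Prevalence, <75% of median: {undernourished_pct / 5:.1f}%")
print(f"Prevalence, < -2 SD:        {undernourished_z / 5:.1f}%")
# The same children and the same measurements yield two different
# prevalence figures: an analysis of "change in malnutrition rates"
# can shift simply by switching classification schemes.
```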

In fairness, we must point out that most of the case studies subjected to our own careful scrutiny were not conceived of as experiments. Rather, they were interventions designed to achieve humanitarian goals. However, we believe that over the long time-frame of most interventions, even those designed and implemented as experiments according to the four features described earlier, the same indeterminacies will emerge. A review of the literature on interventions conducted for research purposes supports this belief, as does our own work where the more carefully defined interventions (in terms of experimental design) proved to be as indeterminate with regard to impact as those done purely for the purpose of helping those most in need.

The reasons for the failure to meet the conditions of sound experimental practice, particularly those associated with the changing environment and service delivery system, account, in part, for the failure of many intervention designs. The design that appears to be best at one point in time may need modification as various conditions in a community change. Thus, when implemented, a well-conceived design may fail to produce the hoped-for responses. Paradoxically, an approach that meets with success in a community may also require modification because of that success. Once some behavioural change is embedded in a community or some modification of the infrastructure is completed, the interveners must move on to attack other aspects of the situation creating the problem of malnutrition.

In response to the need for dynamic planning and implementation in nutrition intervention and, independently, the need for methods of evaluation other than those dependent on rigorous application of the features of experimental design, we urge the nutrition community to consider an alternative style of intervention, Reflection-in-Action.

A BRIEF DESCRIPTION OF R-I-A

R-I-A calls for the rapid and frequent analysis of quantitative data by those with the maximum knowledge of the conditions existing during an intervention. This knowledge, subjective though it may be, facilitates the proper interpretation of trends in the data. In turn, proper interpretation leads to modification of the programme, the data collection system (to substantiate hypotheses raised by routine analyses), or the procedures of analysis. In a broader sense, R-I-A can be thought of as more than a built-in system of self-evaluation as suggested here. As a method of learning from social intervention, R-I-A denies the proposition that constant and unchanging laws govern events in social systems. The unique nature of every social situation requires the reformulation of laws and the relationships between actors and objects in a system given that unique situation. Prior experience, interpreted correctly, serves as the "model" for such reformulation. However, it is necessary, under R-I-A, for the intervener to maintain a constant dialogue with his situation, a dialogue that will result in profound changes in that situation as well as basic changes in the image of the situation (and its problems) held by the intervener (4).

We can identify six features of R-I-A:

1. Explicit Specification of the Framework Underlying the Intervention Strategy. R-I-A calls for the definition of a "model of the local nutrition system" to guide the design of an appropriate initial intervention as well as the design of the data-gathering and analysis system. The model itself should be flexible and subject to change in response to lessons learned through analysis during the intervention.

2. Continuous Monitoring of Both Data-Gathering Procedures and Intervention Strategies. To implement R-I-A, a data collection and analysis system must be designed, implemented, and used as the basis for dynamic planning during the intervention. The data-gathering system should be simple at the start, focusing on selected impact and process indicators, and it, too, must be flexible and subject to change throughout the intervention. For a suggested starting point for the design of such a system for supplementary feeding programmes or other nutrition interventions calling for the weighing of children, see Miller and Sahn (5). Performance of the data-gathering procedures must be monitored along with trends in the data to minimize the possibility that those trends are an artifact of changing accuracy in measurement and/or recording activities.

3. Periodic Redesign of the Data-Gathering Procedures and Intervention Strategies. In response to monitoring of the data-collection procedures and the trends revealed by the analysis of the data as they emerge, interveners should cope with observed changes - in the environment, the target population, or the intervention methodology - by implementing appropriate redesign strategies. This can be achieved only through the rapid feedback of analytic results to the field practitioners (a skeletal version of this monitor-and-redesign loop is sketched after this list).

4. Collaboration between Researchers, Practitioners, and Subjects Throughout. In contrast to the classical experimental approach or the typical evaluation format featuring the short, but intense, site visit by evaluation specialists, R-I-A calls for the continuous involvement of all parties to an intervention in the analysis and redesign process. This serves as a feasibility check on all proposed changes, as an incentive for rapid adoption of those changes, and, most importantly, as a review of the accuracy of the interpretation of the analysis leading to change.

5. Use of Derived Experiments. Routine analysis of data should generate questions that require additional research, possibly in the form of small experiments carried out within the context of the larger intervention. One technique of value here is the disaggregation of the data to assist in isolating factors confounding the more aggregate analyses.

6. Accounting for Confounding Factors in the Redesign Process. Rather than attempting to control for confounding factors in the experimental context, practitioners of R-I-A try to detect and react to those confounding factors. At times, interveners can use such confounding factors to their advantage in redesigning their intervention components.
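
Features 2 and 3 amount to a continuous monitor-and-redesign loop. The skeleton below sketches one way such a loop might be organized in code; the indicator names, thresholds, and redesign actions are all placeholders standing in for decisions that, under R-I-A, belong to the field practitioners, not to the programmer.

```python
from dataclasses import dataclass

@dataclass
class RoundSummary:
    """Aggregates from one round of routine data collection (hypothetical)."""
    mean_weight_for_age_z: float   # impact indicator
    attendance_rate: float         # process indicator
    missing_record_rate: float     # data-quality indicator

def review_round(current: RoundSummary, previous: RoundSummary) -> list[str]:
    """Flag trends worth discussing with field practitioners.

    Thresholds here are arbitrary placeholders; in practice they would
    be set, and revised, jointly with local staff.
    """
    flags = []
    if current.missing_record_rate > 0.10:
        # Data quality first: an apparent trend may be an artifact of
        # measurement or recording, so audit procedures before
        # reinterpreting the impact indicators (feature 2).
        flags.append("audit data-gathering procedures")
    if current.mean_weight_for_age_z < previous.mean_weight_for_age_z - 0.1:
        flags.append("discuss competing explanations with local staff")
    if current.attendance_rate < 0.6:
        flags.append("redesign delivery (timing, location, outreach)")
    return flags

# One iteration of the loop: analyse, feed back, redesign, repeat.
prev = RoundSummary(-1.8, 0.75, 0.04)
curr = RoundSummary(-2.0, 0.55, 0.12)
for action in review_round(curr, prev):
    print("->", action)
```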

We submit the hypothesis that R-I-A would improve both the quality of intervention, through the dynamic planning implied by the continuous analysis of impact and process data, and the quality of the evaluation, through the explicit consideration of factors that generally confound efforts to apply traditional experimental models. In particular, we assert that R-I-A, as described above, minimizes the negative influences of dirty data, inadequate measurement, and inadequate data on both the redesign of intervention and the evaluation of intervention impact. The continuous monitoring of the data-gathering procedures, combined with the increased perception of the usefulness of data on the part of workers in the field in response to feedback generated by the ongoing analysis of the data, should encourage the elimination of dirty data. R-I-A cannot resolve the problems of inadequate measurement; however, it can reduce the possibility that conclusions drawn are artifacts of the measurements rather than the results of actual changes in nutritional well-being by encouraging longitudinal analyses and the corroboration of those analyses with qualitative judgements from the field. Finally, the use of derived experiments and the modification of the data-gathering schemes offer ways of addressing inadequate data by creating an increased flexibility for dealing with changes in the overall situation in a timely and innovative manner.

AN EXAMPLE OF A PARTIAL APPLICATION OF R-I-A

Perhaps understanding of R-I-A can best be enhanced through an example. An evaluation project was undertaken during 1982 with the objective of testing as much of the reflection-in-action model as possible. The project was an evaluation of the Food-for-Peace grant programme in Sri Lanka (6). One central question of the study was whether the food donation programme had a favourable impact on the nutritional status of children under six years of age in that country and, if so, by how much. In order to maximize the involvement of field-level practitioners in the evaluation - a condition essential in the implementation of R-I-A - the project was partitioned into three phases.

Phase I followed the ordinary approach of designing a study that attempted, as well as possible, to resolve the question of nutritional impact from the programme. For the children under six years of age who participated in the Thriposha programme, a field data collection effort was implemented that ultimately generated some 12,000 weight and age observations for 1,800 children. Since Thriposha is distributed in Sri Lanka as one component of a package of health services, care was taken to partition the sample in a variety of ways to control for possible differences in impact reflecting variations in the delivery system or recipient population from place to place. This partitioning of the sample was limited by the practicalities of field conditions and the realities of budgetary constraints. The sample was stratified by geographic region, by the type of health care provided, by the governmental unit responsible for staffing and funding the distribution centre, by the socio-demographic characteristics of the family of the recipient child, and by the age of the child.

Social and demographic family variables included age of mother and father, parental occupation as a surrogate for income, number of siblings, and birth order. Of course, many more data elements could have been proposed - in fact, the number of possibilities was limitless. The decision as to which subset to collect was based on the cost and feasibility of data retrieval balanced against what our prior experience had shown to be most critical. Our judgements were reviewed by local practitioners and modified to conform to their experience as well. In short, while the initial design was not flawless, it was quite comprehensive when compared to similar evaluations done in other countries. The design showed promise of uncovering relationships between the nutritional status of the beneficiaries and the extent of their exposure to the programme, if any such relationship existed. Phase I did not depart from typical evaluation designs based on the principles of sound experimentation except in one important aspect - there was a priori recognition that there would be, at best, some indeterminacy resulting from the analysis even if measurable change in nutritional status was confirmed.

The second phase of the evaluation incorporated more of the elements of the model of R-I-A. Phase II consisted of an exhaustive analysis of the data gathered in Phase I. A report was prepared consisting primarily of tables, charts, and graphs showing the relationships between variables deemed to be of potential importance. Characteristic curves (graphs plotting nutritional status against age of child) comparing the nutritional status of children who had participated in the programme for a substantial period of time with that of children who had only recently entered the programme provided concrete evidence that participants of all ages were nutritionally better off than new entrants. Tables were included presenting evidence that the magnitude of the benefit derived from participation appeared to vary by the type of organization rendering service and by the physical setting of the clinic, rural or urban. Nutritional status was shown to be related to several of the other stratification variables as well.
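
The characteristic-curve comparison described above can be reproduced, in outline, with a few lines of analysis code. The sketch below generates a synthetic stand-in for the Phase I file; all column names, the 12-month participation cutoff, and the built-in effect sizes are illustrative assumptions, not the variables or definitions actually used in the Sri Lanka study.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

# Synthetic stand-in for the study file: one row per observation with
# age, a weight-for-age Z-score, and months enrolled. Entirely invented.
n = 4000
age = rng.integers(6, 72, size=n)             # age in months
months_in = rng.integers(0, 36, size=n)       # time in the programme
z = rng.normal(-2.0, 0.8, size=n) + 0.015 * np.minimum(months_in, 24)

df = pd.DataFrame({"age_months": age,
                   "months_in_programme": months_in,
                   "weight_for_age_z": z})

# A 12-month cutoff for "long-term" is an assumption, not the study's rule.
df["cohort"] = np.where(df["months_in_programme"] >= 12,
                        "long-term participant", "new entrant")

# Characteristic curves: mean status at each age, one curve per cohort.
curves = (df.groupby(["cohort", "age_months"])["weight_for_age_z"]
            .mean()
            .unstack(level="cohort"))

curves.plot(xlabel="Age (months)",
            ylabel="Mean weight-for-age Z-score",
            title="Characteristic curves by participation cohort")
plt.show()
# If long-term participants sit above new entrants at every age, that is
# the pattern the Phase II report presented as evidence of benefit -
# subject, as the text stresses, to competing explanations.
```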

With the preliminary report completed, Phase III, the formal field-level evaluation, began. Armed with specific results of the analysis, the evaluation team engaged in a dialogue with local practitioners concerning the causes of the observed results. In several instances, the interpretations were modified substantially from what had seemed obvious from the analysis alone. For instance, differences in outcome that seemed to be related to the type of unit providing service and the type of health care offered in conjunction with the food supplement were found to be highly correlated with clinic staffing levels per beneficiary. This competing explanation was first proposed by one of the local staff and then tested with on-the-spot observations of clinics in several different parts of the country. Thus, what appeared to be strong evidence in support of the hypothesis that the type of governmental unit administering the programme played a role in causing positive response to it was reinterpreted as evidence supporting the role that staffing levels play in ensuring effective clinic operation.

It is important to note here that data on staffing levels had been gathered at the outset and, initially, found to be of no importance. The reason for this failure to detect a relationship during the early rounds of analysis was that the variables selected for testing the hypothesis were too aggregated. In short, the study design team, even though it included knowledgeable local staff, had selected and measured the wrong variable. It was only after successive analyses that enough local perspective was gained to narrow and focus the definition of staff to a level that revealed the relationship between staff size and nutritional impact. It could be argued that, had the correct variable been measured at the outset, this problem would not have arisen. However, it is our belief that the often subtle process of deciding precisely which variable is most appropriate to gather before analysis is extremely difficult. Regardless of the expertise of the analyst, mistakes and omissions are usually made and, as a consequence, the iterative process embodied in R-I-A becomes essential.
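
The staffing-level episode is an instance of the disaggregation described under feature 5: recomputing and re-binning a variable until the confounder comes into view. A minimal pandas sketch of that kind of disaggregation follows; the column names, the synthetic data, and the binning are invented for illustration, and the original analysis was not necessarily performed this way.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Synthetic clinic-level data; all column names and effects are invented.
n = 300
staff = rng.integers(1, 8, size=n)
beneficiaries = rng.integers(50, 400, size=n)
unit = rng.choice(["ministry", "municipal", "estate"], size=n)
# Make the true driver staff-per-beneficiary, not the administering unit.
spb = staff / beneficiaries
status_change = 5.0 * spb + rng.normal(0.0, 0.05, size=n)

clinics = pd.DataFrame({"governmental_unit": unit,
                        "clinic_staff": staff,
                        "beneficiaries": beneficiaries,
                        "mean_status_change": status_change})

# The aggregate view: outcome by administering unit. This can mislead
# whenever unit type happens to correlate with staffing in the field.
print(clinics.groupby("governmental_unit")["mean_status_change"].mean())

# The disaggregated view proposed by local staff: staff per beneficiary,
# binned into bands and compared directly.
clinics["staff_per_beneficiary"] = (clinics["clinic_staff"]
                                    / clinics["beneficiaries"])
clinics["staffing_band"] = pd.qcut(clinics["staff_per_beneficiary"],
                                   q=3, labels=["low", "medium", "high"])
print(clinics.groupby("staffing_band", observed=True)
             ["mean_status_change"].mean())
# Outcome rises steadily across staffing bands, exposing the variable
# that an aggregate comparison by unit type would be standing in for.
```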

Since Phase III was concerned more with the determination of attribution than with outcome, the issue of competing explanations was central to this part of the study. The Sri Lanka evaluation, like most such studies, contained several possible competing explanations for the observed outcome besides the intuitively appealing one that the programme worked. The R-I-A model called for a comprehensive enumeration of competing explanations, even though some were beyond the scope of inquiry.

During the four-week in-country study period, the evaluation team was able to muster evidence for rejecting all significant competitors, thereby concluding, with some degree of confidence, that the observed improvement in nutritional status was attributable to the bundle of services associated with Thriposha distribution. Had we not been able to articulate those competing explanations and perform on-the-spot research, our evaluation would have been inconclusive.

It should be noted, however, that it was not possible to separate the effect of Thriposha from the package of other services rendered at the same time as the food supplement was given, nor was it feasible to generalize the findings regarding food supplementation to other settings. While an evaluation based on more rigorous experimental design would have had, in theory, the potential for broader generalization on some dimensions, in practice, a much higher degree of indeterminacy would have been the result.

Adoption of R-I-A in an evaluation of the sort done in Sri Lanka provides some evidence concerning the usefulness of the approach. However, as mentioned earlier, only some elements of the model were applicable. Missing were the elements that build the evaluation and on-the-spot research into the ongoing programme itself. An application including these elements remains to be done.

IMPLICATIONS OF R-I-A FOR INTERVENTION EXPERIMENTS IN NUTRITION

An approach to nutrition interventions that does not recognize the instability, unpredictability, and uniqueness of the local environment will often be of little benefit to recipients. Attempts to evaluate the efficacy of such programmes using traditional methods in which data are structured to conform to a static, but careful, experimental design will generally be less determinate than an approach relying more on detailed knowledge of local practitioners. The alternative advocated here, R-I-A, is both an intervention style and an evaluation process. It is a context-specific, iterative approach capable of adapting to ever-changing local conditions through the rapid feedback of the analysis of locally generated data to individuals at the site of intervention. R-I-A treats all interventions as experiments and recognizes that all experiments are subject to the same varying context surrounding most interventions.

What then are the implications of these arguments for the conduct of intervention-oriented experiments in nutrition? We classify these implications into four broad categories: (i) recipients participating in the programme; (ii) change agents implementing intervention programmes; (iii) social scientists and practitioners; and (iv) the relationship between donor and implementing agency.

Recipients participating in a programme. Because of the high potential for major disruption of a local society or culture in response to the infusion of external resources during a major intervention, care must be taken to retain a favourable balance between benefit and cost to the recipient, where costs are defined in the broadest sense. It is very easy for a professional dedicated to reducing child malnourishment to assume that any help provided is a net gain to the recipient. But we have found that many interventions show little, if any, measurable improvement in the nutritional status of the target population. (Caution must be taken to distinguish between measurable impact and impact itself. If the research design is faulty, or if there is sufficient misclassification in the observed variables, true impact will be understated.) If one recognizes that, typically, considerable resources are devoted to such programmes, it could be argued that a far better option would be to provide a direct supplement to the recipient or to fund the development of general infrastructure such as transportation and education systems.
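
The parenthetical point - that misclassification in the observed variables understates true impact - can be checked with a small simulation. In the sketch below, a programme genuinely cuts the true prevalence of deficiency by ten points, but noisy measurement misclassifies some children in both directions, and the measured change comes out smaller than the true change. All numbers are hypothetical.

```python
import random

random.seed(2)

N = 10_000
TRUE_BEFORE = 0.40   # true deficiency prevalence before the programme
TRUE_AFTER = 0.30    # true prevalence after (a real 10-point improvement)
MISCLASS = 0.15      # chance any child is classified into the wrong group

def measured_prevalence(true_prev: float) -> float:
    """Observed prevalence when each child may be misclassified."""
    count = 0
    for _ in range(N):
        truly_deficient = random.random() < true_prev
        flipped = random.random() < MISCLASS
        count += truly_deficient != flipped  # XOR: a flip inverts the label
    return count / N

before = measured_prevalence(TRUE_BEFORE)
after = measured_prevalence(TRUE_AFTER)
print(f"True improvement:     {100 * (TRUE_BEFORE - TRUE_AFTER):.1f} points")
print(f"Measured improvement: {100 * (before - after):.1f} points")
# With symmetric misclassification the measured gap shrinks toward
# (1 - 2*MISCLASS) of the true gap - here about 7 points instead of 10.
```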

In addition to the tangible resources of the donor, one must consider the time, energy, and commitment of the persons participating in the programme. The mother and child who devote a day to walking to and from a health post in order to attend a well-baby clinic, in the expectation of receiving an allotment of weaning food, have a considerable investment in the programme. Furthermore, nutrition programmes sometimes call for change in firmly ingrained cultural habits such as intra-familial food distribution patterns. Even if the programme does not ask explicitly for such changes, they can occur anyway, often creating dependencies. For example, in on-site feeding programmes, families often alter their intra-familial food distribution patterns to compensate for the fact that one family member is fed elsewhere. Similarly, family purchasing decisions change in response to the increase in disposable income associated with receiving free food or other services, with the net effect being the creation of dependencies on the programme. In any intervention, there are significant costs, broadly defined, to the recipient, and there are not always corresponding benefits.

The R-I-A model begins with the assertion that the proposed intervention may need to go through one or more iterations before it provides a useful service. It does not presume to be a remedy, only the first step in a process. When the intervention process is conceived in this way, the relationship between recipient and programme must be viewed in non-traditional ways. The recipient of services is not automatically a beneficiary experiencing a net gain in welfare, but rather a participant in a local experiment conducted with the objective of developing a strategy that will lead to the attainment of certain benefits. Under these conditions, the participant in the local experiment should be entitled to some measure of protection if the programme is unsuccessful. The policy implication of this argument is that an important design criterion for a nutrition intervention is that it contain a plan or process for responding to this unfavourable contingency.

Provisions should be included for changing the mix of services delivered by the programme if benefits derived from existing components do not live up to expectations. Also, participants should be informed of the experimental nature of the activity they are about to engage in. Finally, provisions must be made, in advance, for smoothing the transition back to the original local conditions in the event that the intervention is terminated. These provisions should include tangible resources earmarked for this purpose.

Change agents implementing intervention programmes. Analysts with the technical skills needed to conduct research or evaluation studies and practitioners with the skills to work effectively in developing countries come from different educational backgrounds and are motivated by different interests, the practitioner being action-oriented and the researcher more interested in the search for knowledge. R-I-A calls for the attributes of both to be present in one individual, or at least represented equally on a team. Thus, to staff a project based on R-I-A, one must merge diverse skills in a single individual, use consultants more wisely to fill highly specialized needs, or hire more people - options that cost more money.

The model also calls for a blend of behaviours. At the field level, there is often a short-term conflict between expending resources (time or money) on gathering and analysing data and rendering services to beneficiaries. We have seen several instances in the field where this conflict is quite real. Too frequently, the conflict manifests itself when political pressures for expanded coverage are brought to bear on the change agents; that is, where short-term political interests are placed above long-term effectiveness.

We believe that there are ways of turning this potential conflict into a condition beneficial from both perspectives. When the information gathered for research or learning purposes is the same information used to operate and manage an intervention, there are several gains. First, the removal of errors made in taking, recording, and/or storing data elements - errors that invalidate subsequent analyses based on the data - becomes possible and relatively easy. Second, the correction of errors not only helps the analysis but also improves programme operations. Missed diagnoses of at-risk children are reduced or eliminated. Management problems, such as staffing imbalances, surface more readily and, therefore, can be corrected in a timely manner.

Finally, there is the potential for more useful dialogue among practitioners. We say "potential" because a related problem, that of over-reaction to quantitative data, must be resolved before achieving improved dialogue. Without a minimum critical level of expertise in experimental design and in the interpretation of quantitative analysis, practitioners can and do react prematurely or improperly to mathematical results, and in doing so, do more harm than good. Decisions may be based on improperly or incompletely developed "hard" data at the expense of field wisdom. Note that over-reliance on "hard" numbers is not limited to practitioners in the field; a similar pattern exists within funding bureaucracies.

The remedy for this problem of over-reaction in local decision-making lies in integrating practitioners into the data-gathering and analysis process. If practitioners participate in the entire study design, analysis, and interpretation process a few times under real field conditions, they become armed with sufficient knowledge and perspective to make proper interpretations when applying the results to decisions. Thus, the remedy for the problem of appropriate interpretation for decision-making purposes is precisely the same remedy as that needed for conducting careful analysis for research purposes - local practitioner involvement.

New Skills, Norms, and Attitudes for Social Scientists and Practitioners

Social scientists, particularly those trained in analytic methodology, are too often prone to become enthralled with their technique. There is a tendency to retreat to what is familiar and to what has provided a competitive edge during past endeavours, namely, sophisticated analytic techniques. While these most certainly have usefulness, they are a very small component in the repertoire of skills needed for the R-I-A model. Again, the uniqueness, instability of environment, and unpredictability of the local condition that force intensive scrutiny of, and emphasis on, local, context-rich information demand that skills more akin to those of the anthropologist and change agent be emphasized. Even if they want to, scientists can no longer avoid the effects that their results have on decision-makers. Consequently, a knowledge of, and the ability to deal with, the broader implications of analysis become essential. In summary, scientists using the R-I-A model must strive to broaden their skill levels beyond those currently prescribed.

Behaviour norms may have to be altered as well. Emphasis on numerical analysis of clean data as presented in current academic journals must be replaced by a norm that more fully includes concern over the quality of that hard data, the integration of "soft" data sources into an analytic framework, and the ability to accept and articulate indeterminacy. More important, the need to become close to local conditions, whether it be as a participant-observer or in conjunction with local colleagues, may reduce apparent academic productivity. Productivity of a social scientist is often measured by scholarly publications. Currently, the most effective way for the social scientist to show productivity along this dimension is to emphasize analysis and written description of the interesting findings stemming from analysis, either substantive or methodological. Alternatively, the time and energy spent in developing close working relationships in a local setting are often under-recognized. Yet the R-I-A model demands active local participation beyond that required to obtain data.

In some instances, attitudes as well as skills and norms need to be adjusted. It is very easy for scientists to be overly impressed with their capabilities, especially when much of the community in which a scientist works operates to reinforce that belief. A more humble attitude about who really has the important pieces of knowledge in a local context is essential before a healthy relationship between practitioner and scientist can develop.

From the practitioner's viewpoint, change is called for, too. We have discussed the need for exposure to research design, analysis techniques, and careful interpretation of results. Our experience is that, in the context of using analysis for improved local decision-making, changes in the attitude of the practitioner are minimal if required at all. Active participation in two or three such analyses is sufficient to acquaint the practitioner with the requisite steps. While some formal training is highly desirable, on-the-spot training is more effective and feasible in view of the administrative responsibilities of the practitioner. Occasionally, when local administrators have had job experiences in the private sector, they have already acquired the necessary patterns and attitudes. The closest analogy to the R-I-A model may be the behaviour of the small, independent businessman who continuously adjusts his mix of services or products based on an analysis of feedback from customers and the local economy.

There is an exception to the case where only minimal change in attitude is required of the practitioner. Some practitioners believe, for whatever reason, that they know the proper remedy for a problem. In the nutrition field, we have heard, too often, "But how can feeding hungry people be bad?" R-I-A can begin only after such attitudes are dispelled.

Donor and Implementing Organizations

The most significant implication of our arguments in this paper comes in the relationship between donor and implementing organization. According to the R-I-A model, proposals from prospective change agents to possible donors would be couched in terms of a proposed starting position for a nutrition intervention together with a description of the process to be used for making incremental changes toward an improved programme design. This process would be based on a mechanism for obtaining and using feedback from the target population. Presumably, sponsors would make a decision to provide funds based on the quality of that redesign process, the knowledge embodied in the starting configuration for the intervention and, of course, the overall needs of the target population in relation to other alternatives.

Decisions regarding continuation of funding would be based on an R-I-A assessment of the efficacy of the programme and the target population's current needs. Formal evaluations would include a synopsis of the various iterations that the programme administrators had implemented, the reasons for those changes, and the outcomes achieved. The field component of the formal evaluation would include local practitioners whose role would be to discover additional competing explanations for the observed outcomes and assist in resolving indeterminacies whenever possible.

The changes required within donor agencies to enable implementation of R-I-A are profound indeed. Beyond changes in project design and evaluation criteria, the staffs of the donor organizations would be faced with a new set of political issues. It is one thing for the administrators of a foundation to propose to their board a well-defined, apparently coherent multi-year project designed to alleviate a pressing social problem. It is another thing to propose a programme that admits at the outset that it is only an attempt at problem resolution with a low probability of success as originally conceived. The same difficulty exists, only more so, for administrators of government agencies seeking funding in a highly political environment. Governments, even more than foundations, are bound by a yearly budget cycle and, because of their high public visibility, are subject to strong pressures to show short-term, positive responses to their actions. Frankly stated, the willingness on the part of administrators in donor agencies to be fully descriptive with regard to the uncertainties associated with a given proposed remedy places them in a vulnerable position. For there will always be other administrators who, in the interest of raising the political likelihood of gaining approval, will be quite content to propose projects in the traditional, less accurate, but far more comforting manner.

A CALL TO ACTION

We have presented R-I-A as an innovative approach to both the conduct of health/nutrition programmes and their evaluation. In many situations, an intervention (or an evaluation of an intervention) appears, in retrospect, to have followed the R-I-A model. When interveners come to recognize that their schemes are ineffective, they react by making appropriate mid-course corrections; however, they rarely set out with the expectation that such directional change will be needed and, even more rarely, establish a routine procedure for developing a quantitative basis for directing such change. Similarly, evaluation specialists often alter their design or use statistical methods to compensate for unanticipated flaws in their experiments; but they, too, undertake their evaluations without a conscious plan to identify and study factors contributing to the indeterminacy of their analyses. In these situations, R-I-A is practised more by accident than by design - more as a reaction to events than an action to shape events.

We are advocating a more conscious and determined effort to implement R-I-A, an effort that recognizes the need to learn while doing and to act in accordance with what is learned. At the very least, we ought to give it a try.

REFERENCES

1. W.D. Drake, R.I. Miller, and M. Humphrey, "Final Report: Analysis of Community-Level Nutrition Programs," Community Systems Foundation, Ann Arbor, Michigan (1980).

2. W.D. Drake, R.I. Miller, and D.A. Schon, "Community-Level Nutrition Interventions: An Argument for Reflection-in-Action," Community Systems Foundation, Ann Arbor, Michigan (1982).

3. D.T. Campbell and J.C. Stanley, Experimental and Quasi-Experimental Designs for Research, Rand McNally, Chicago, Illinois (1966).

4. D.A. Schon, W.D. Drake, and R.I. Miller, "Social Experimentation as Reflection-in-Action: Community-Level Nutrition Intervention Reconsidered," Community Systems Foundation, Ann Arbor, Michigan (1982).

5. R.I. Miller and D.E. Sahn, "Built-In Evaluation Systems for Supplementary Feeding Programmes - Why and How?" in Methods for Evaluating the Impact of Food and Nutrition Programmes, The United Nations University, Tokyo, in press.

6. W.D. Drake, J.N. Gunning, A. Horwitz, R.I. Miller, H.L. Rice, and G. Thenabadu, "Nutrition Programs in Sri Lanka Using U.S. Food Aid (An Evaluation of P.L. 480 Title II Programs)," Community Systems Foundation, Ann Arbor, Michigan (1982).
