

Use of RAP in evaluation in UNICEF


Evaluation in UNICEF

Evaluation in UNICEF is defined as a process which attempts to determine as systematically and objectively as possible the relevance, effectiveness, efficiency and impact of activities in the light of specified objectives. It is a learning and action-oriented management tool and organisational process for improving both current activities and future planning, programming and decision-making [3]. Within this broad definition of evaluation in UNICEF, a number of activities of an evaluative nature can be identified: annual reviews of programmes which UNICEF staff hold with their colleagues in government, external evaluations by donor agencies of projects UNICEF is implementing together with government as part of a government development programme, thematic evaluations looking into certain initiatives at the global level, reviews by UNICEF advisers, and so on. A RAP evaluation, therefore, is one amongst a number of evaluation styles used to evaluate UNICEF's work.

Selection of RAP procedures

RAP, in UNICEF parlance, refers to a process rather than a particular set of methods. Exactly what type of assessment procedures are used depends on the job at hand. Funds available and the amount of time that can be put aside for an exercise are crucial factors. There is always a tension between carrying out a RAP too quickly with too small a budget and ending up with a process that has done little to further understanding of the area being studied. It is up to the participants in a RAP process to steer a course between field-trip-style information gathering and longer-term, more classic forms of social research. Chambers' principles of "optimal ignorance" and "appropriate imprecision" must be applied but, as pointed out by Cernea [4], it is the RAP practitioners' judgement call as to what can be ignored and how much imprecision can be appropriately tolerated without self-destroying penalties. Rapidity is relative.

In UNICEF, a RAP exercise includes, as a central base, the use of techniques derived from the field of anthropology [2], such as focus group discussions, observation, and unstructured one-on-one interviews, which together form a body of mainly qualitative data. These anthropological techniques are accepted by some anthropologists as a valid way of condensing an anthropological investigation from the normal two years or so to the two or three weeks available for a RAP. Of course, a main proviso is that the area under investigation in a RAP is much narrower than in a typical anthropological investigation, and the end product is quite different.

Normally, the qualitative information derived from the RAP will be used in conjunction with quantitative data which might already be available or which might also be collected as part of the RAP investigation itself. Any of a large number of other techniques may also be used; for example, participatory mapping, analysis of satellite images, household surveys with analysis of data and discussion of results, group discussions, and so on. A RAP exercise could be related to any part of the classic programme cycle: the planning of a development programme, the monitoring of the implementation of such a programme, the evaluation of processes, or the evaluation of coverage, costs and impact. But it should never be used alone for decision making. Rather, it is seen as a method of assessment complementary to the less formal field trip, which will only take a few days, and to the more complicated, more academic, old-fashioned types of research which might take years to gestate.

Preparation for a RAP evaluation

RAPs should follow the general principles of evaluation. A great deal of care should be spent drawing up the terms of reference for the evaluation team. The original objectives of the subject being evaluated should be recalled, and a background paper on its history should be prepared which includes the reasoning behind strategies adopted, differences between planned and actual expenditures, course corrections, the results of previous evaluations, and so on. There should be a clear set of issues to be addressed during the RAP, and the methods to be used, including outlines for interview guides and questionnaires, should be drafted. A schedule for the process should be devised as far in advance as possible. Owing to budgetary constraints, fieldwork for a UNICEF RAP evaluation cannot usually take more than three to four weeks; indeed, much of the way RAP evaluations are carried out is determined by the practical need to complete the process within such a time frame. Finally, evaluations need outsiders to guarantee the objectivity of the process, and outsiders usually have to be hired as consultants whose time should be put to the best use.

It is often useful to have a wide process of review for the terms of reference; a RAP evaluation steering committee is often the best way in which to manage this and all other stages of an evaluation. Having said this, it is in the nature of a RAP for the final product sometimes to end up slightly different from what was originally asked for. In this respect, the drafters of the terms of reference must be confident enough to let the evaluators use their judgement, in the last analysis, in deciding what to concentrate on.

Composition of an evaluation team

A crucial factor in the success of any type of evaluation is the selection of the members of the team. One of the advantages of a RAP evaluation is that it does not necessarily adhere strictly to a predetermined protocol. It is adaptive and flexible. The end result can, and usually does, end up a little different from what was originally envisaged, in response to local circumstances. It is vitally important, therefore, that RAP team members are experienced and have a strong sense of objectivity. They must have a clear sense of what judgements they can fairly arrive at given the particular investigative methods they have applied during the RAP and the information they have derived from them. Another important attribute of RAP team members is cultural sensitivity, especially in the application of qualitative information-gathering techniques.

Experience has shown that it is vitally important that a RAP team should include people who actually work on a project as well as outside investigators; the responsibility for impartiality and objectivity lies, of course, with the outsiders, while those associated with the subject being evaluated act as readily available resource persons with the inside knowledge; they ultimately know best what recommendations for action have a possibility of being taken up. Also, human nature dictates that people are more likely to understand the rationale, and then to follow up evaluation recommendations, the more they have taken part in the process; and finally, RAPs operate under time constraints and insiders can move things a lot faster than outsiders left to their own devices.

Members of the team should be evaluators with expert knowledge of using the data gathering methodologies being proposed in the terms of reference. It is extremely important that the team includes members from different disciplines. In this way biases of any particular discipline are more likely to be avoided and perspectives and interpretations of the team will be enriched.

It is essential that at least one of the team members is a good communicator. The emphasis of RAP evaluation is on offering concise, action-oriented recommendations for improvement aimed at decision makers. Good presentations in which findings are clearly expressed are essential in RAP, and a concise, well-written report produced quickly is also expected.

Findings and results can also be prepared for interactive discussion on desktop or laptop computers if the RAP team has the communication skills. These days no RAP team is complete without at least one laptop computer which can be used for drafting.

The place of RAP in participatory evaluation

RAP is a particularly useful method for evaluations of a participatory nature, which may be carried out in development projects where decisions on the use of development resources are made in democratic ways, by a team of people from outside the community looking in and finding out what people are thinking and doing. It can also be used by community members themselves: certain techniques which are frequently part of RAP, such as focus group discussions, form a normal part of participatory evaluation processes under different labels such as "group meetings".

RAP evaluations have an especially important role to play in encouraging participatory development processes in projects which do not have a particularly strong element of participation in their planning and implementation stages. The rapid anthropological techniques used can ensure that significant attention is paid to the views of the people benefiting from the activities being evaluated. In this way, the RAP style of evaluation can be used, if so desired, as a way of instilling a sense that greater popular participation may be desirable in a development project.

Uses and the influence of computer technology on RAP

In many ways the evolution of RAP evaluations has been influenced by advances in computer technology, particularly laptop technology, and the availability of photocopiers. The process of wide consultation at all stages of a RAP is conducted with heavy reliance on the written word - or more particularly, on draft written words - on which comments are offered for incorporation into later texts and parts of presentations. This happens from the production of terms of reference to the drafting of findings and recommendations. Quick turnaround times for the production and revision of written texts are crucial for as wide a consultation as possible given the limitations of time. Ten years ago it would have been impossible to carry out an evaluative process with as wide a consultative process as is now considered normal in RAP evaluations.

Computer technology can also be put to good use in cases where quantitative data need to be gathered and analyzed as part of the process while a RAP team is on the move and dealing with other aspects of the evaluation.

Data collection methods

How does a RAP evaluation team go about collecting and analyzing information? The only factor limiting which methods can be used is time. The methods used in any particular RAP depend entirely on the objectives of the evaluation, the resources at the disposal of the RAP and the time available. There is nothing inherent in RAP which binds a team to use a particular set of methods. The use of rapid anthropological procedures is at the heart of the evaluative process called RAP. Here is a list of some of the methods which can be used in an evaluative RAP.

1. Review and analysis of other data and information available on the subject being evaluated. As a general principle, no new information should be sought if it has already been satisfactorily established by others. The investigation team should spend time identifying and interpreting such data sources, including routinely collected data, evaluations, survey data, previous annual or special reports, etc.

2. Group interviews (focus group discussions) or community meetings. This technique, borrowed from commercial marketing, brings together groups of between half a dozen and thirty people for an extended discussion moderated by one of the investigators. The investigator is guided by a set of questions or topics prepared by the team beforehand. Discussions usually last up to two hours. Information is elicited on participants' views on the benefits of the subject being evaluated, what their understanding is of the activities' goals and to what extent these have been fulfilled. Caution is exercised against putting too much weight on the opinions of anyone who tries to monopolize the discussion, and it is the job of the moderator to make sure that all express their opinions.

3. Observation. Activities related to the subject being evaluated are observed, normally according to a protocol agreed upon when the terms of reference were being drawn up. For fair, general comments to be made on the observations this procedure must be carried out in a number of separate locations.

4. Interviews with key informants. A cross-section of key informants is identified by the evaluation team and interviewed. Pertinent questions that should be raised with key informants are usually formulated by the evaluation team at the planning stage of a RAP and adapted during the process as required. Key informants could be trained experts, government officials, local politicians, or other knowledgeable people who can provide insights into the subject being evaluated. Obviously, an evaluation team could be seriously misled by the biased opinions of a few individuals. It is important, therefore, that the net be cast wide so that many opinions are collected. The onus is then on the evaluators to interpret what they have heard from their many sources. When discrepancies are found it is often necessary to extend the number and range of informants in order to reconcile them and arrive at a more accurate picture.

5. Cost analysis. It is extremely useful to include cost analysis, but often there is not enough time to carry out a detailed analysis and corners are cut; a trained economist may not be a member of the RAP team. Costs to external development agencies, government and the people are taken into account. The main aim here is to raise issues of the sustainability of activities and to ensure that discussions of options for greater efficiency take place on the basis of some facts and broad orders of magnitude (a simple worked example with hypothetical figures follows the list below).

6. Quantitative data, collected in various samples such as community-based sentinel sites. Quantitative data collection during a RAP process has become an easier task with recent advances in computer and printing technology. Quantitative data collection, including the design and production of questionnaires, and the input, analysis and feedback of data in the field, can be done much faster now than was the case ten years ago (a short illustration of such field analysis appears after this list). A good example of the frontiers of field use of computer technology is the work being pioneered at the Centre for Tropical Disease Research, Acapulco, Mexico, where large community-based quantitative surveys, with sample sizes ranging into tens of thousands, have been locally planned and carried out, and the results fed back to the public within a period of days [5]. This use of computer technology in a RAP-style evaluation is in itself an example of a participatory evaluation method.
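
As an illustration of the kind of field analysis referred to in point 6, the following short Python sketch estimates a coverage proportion and an approximate 95% confidence interval from survey records keyed in on a laptop. The records, variable names and figures are invented for the example and are not drawn from any particular UNICEF survey.

    import math

    # Hypothetical field records: one entry per child surveyed, with a flag
    # indicating whether the child was weighed (growth monitoring) in the
    # last three months. In practice these would be keyed in from the
    # questionnaires completed in each village.
    records = [
        {"village": "A", "weighed_last_3_months": True},
        {"village": "A", "weighed_last_3_months": False},
        {"village": "B", "weighed_last_3_months": True},
        # ... further records entered in the field ...
    ]

    n = len(records)
    covered = sum(1 for r in records if r["weighed_last_3_months"])
    p = covered / n  # estimated coverage proportion

    # Approximate 95% confidence interval (normal approximation).
    # This assumes a simple random sample of reasonable size; cluster or
    # sentinel-site designs would need a design-effect adjustment.
    se = math.sqrt(p * (1 - p) / n)
    low, high = max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)

    print(f"Coverage: {p:.0%} (95% CI {low:.0%} to {high:.0%}, n={n})")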

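For the cost analysis mentioned in point 5, even a back-of-the-envelope calculation can anchor the discussion in orders of magnitude. The sketch below uses entirely hypothetical figures to derive an approximate cost per child reached per year; it is not drawn from any actual UNICEF costing exercise.

    # Hypothetical annual costs of a growth monitoring programme (US$).
    costs = {
        "external_agency": 40_000,  # supplies, scales, training materials
        "government":      25_000,  # health worker time, supervision
        "community":        5_000,  # volunteers' time valued at local rates
    }

    children_reached_per_year = 12_000

    total = sum(costs.values())
    cost_per_child = total / children_reached_per_year

    print(f"Total annual cost: ${total:,}")
    print(f"Approximate cost per child reached: ${cost_per_child:.2f} per year")
    for source, amount in costs.items():
        print(f"  {source}: {amount / total:.0%} of total")
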
In the past five years UNICEF has used a series of RAP evaluations - using all of these methods - in a number of thematic evaluations which are oriented to identifying lessons learned; for example, a study of growth monitoring and promotion in seven countries. In a series of assessments of how social mobilization was used to achieve universal immunization, multidisciplinary teams participated in a wide range of mobilization activities in communities studied.

As part of the evaluation process, sub-teams of normally three to four people spend two to four days in each of a number of villages. During this time guardians of young children are interviewed individually and group discussions are held. Observations of growth monitoring sessions are carried out. Interviews are held with key informants such as village leaders and health workers about the running of the programmes of which growth monitoring forms a part. Random sample surveys are carried out to determine the coverage and frequency of growth monitoring activities, and costing exercises are carried out. When possible, a debriefing and feedback session is given to the villagers before the team moves on to the next village.

Disseminating initial findings, reaching consensus, making recommendations for action and final reports

After the RAP team has finished the assessment phase of the work, they are left with the task of briefing people, getting feedback, and trying to reach a consensus on what needs to be done and by whom. In practice there are limits to how much consultation can be done, as it may be difficult for all key decision makers to make themselves available to consider and give balanced feedback to the RAP team within the tight timetable which is a feature of RAP.

A useful part of the consultation process is the production of written draft recommendations and findings for steering committee members and other key decision makers for their individual review and feedback. Much useful comment and feedback can come through this channel which would not come to light during group meetings. As with any evaluative exercise, evaluators are often in a position to say things which others, for one reason or another, cannot.

Developing concrete recommendations is a way of focusing discussions so that suggestions for improvement in the way development resources are used can be properly discussed. Such discussions are essential to ensure that the recommendations for action which are made by the RAP team are realistic and implementable.

As part of the debriefing process a preliminary report is produced. Initially this may be a draft for discussion. The final report should be easily readable and accessible. The ideal size of the main body of a RAP report is no more than fifteen to twenty-five pages. Busy decision makers cannot afford the time it takes to dissect a weighty report to find key points requiring action. A UNICEF RAP report will generally include findings, lessons learned and recommendations for action. When possible, a timetable should also be included specifying when recommended actions should take place and who is responsible for the action.

The emphasis of a RAP report is information for action to a group of decision makers who should themselves be involved in the RAP process. Recommendations and findings are given prime space at the start of a report and less crucial information like methodologies and background information, which are usually well known to the key decision makers who are the main audience of the report, are placed towards the end. This style of report structure often overturns the conventions of academic report writing.

The final report has to be seen in the light of the whole RAP process. As noted, the process is as important as the final product.

Constraints to the application of RAP in UNICEF evaluations

A main aim of a RAP evaluation, the arrival at consensus on recommendations for action, is never an easy task. Key decision makers may be absent when debriefings take place, or may not give feedback in time for their comments to be taken into consideration in the production of a final report. Consensus on what to do is sometimes not reached. Key decisions cannot be made because of extraneous political factors. Sometimes too much emphasis is placed by some key decision makers on a final written report, when in a RAP it is the process - the raising of issues, the discussions, the debates - rather than the product which is of greatest value when it comes to finally influencing concrete actions.

A number of lessons can be drawn from recent UNICEF experiences using RAP. First, it is crucial that key players make themselves available to take part in as much of the process as possible rather than dwelling on a final written product; active participation of decision makers at several key stages is a crucial part of the RAP process. This is a key difference between RAP and other types of evaluative processes.

Second, a RAP team must be expert at communicating findings in a clear and analytical way so that issues can be quickly grasped and understood.

Third, the RAP process is facilitated when the principles of RAP are well understood by the key decision makers taking part in the process. Sometimes there can be cultural problems associated with the RAP style: reaching consensus on some subjects may be something that would normally take several months or years; there may be very good reasons why consensus is not wanted; decision makers may have a negative reaction to a draft recommendation and, rather than entering into a debate to try to reach consensus, refrain from taking further part in the process.

Fourth, most of these pitfalls and others can be avoided if careful attention is paid during the initial stages of a RAP to how the process should proceed.

A RAP evaluation can be a challenging process for those who take an active part. Resources are slim, the days are long, and the issues being tackled are usually broad; mistakes are made, but usually, on balance, UNICEF RAP evaluations turn out to be positive processes in which many lessons are learned and ways are found to make better use of the resources devoted to the well-being of children.

Conclusion


In summary, what characterizes the RAP process as used by UNICEF? It was born from the realization in UNICEF that many projects are most usefully evaluated in a less rigorous, less expensive, and less time-consuming way than was classically the case in the past, but that a more rigorous approach than the classic field trip is also required. RAP is a compromise between the classical academic style of examination and one person making recommendations based on a few interviews with key people and perhaps a field trip to a project site. It is not meant to replace either of these approaches; rather, it is a complementary process. A key aspect of RAP is the emphasis placed on hearing the voice of the people through the use of anthropological techniques.

Action-oriented agencies sometimes need evaluation processes that will quickly lead to actions that can improve the projects being examined, and which at the same time ensure that the opinions of the people whom UNICEF is trying to help are heard. The RAP approach has proved to be a popular answer to this need.

Acknowledgement


The authors acknowledge the contribution of many UNICEF staff members, past and present, towards the development of RAP in UNICEF, and of Samir Basta, Ibrahim Jabr, Twig Johnson, and Ngokwey Ndolamb in particular.

Endnotes


1. The views expressed in this article are those of the authors, not necessarily those of UNICEF.

2. The following example illustrates the point, showing how likely it is that British medical doctors will be able to interpret the simple statistics typically used in medical journals. Wulff et al. (Wulff H, et al. What do doctors know about statistics? Stat Med 1987; 6: 3-10) report that medical participants in a course on postgraduate research methods scored a median of 4.0 correct answers out of 9 multiple-choice questions on elementary statistical expressions (SD, SE, p<0.05, p>0.05, and r). A random sample of more senior colleagues - doctors working in hospitals - scored a median of only 2.4, and among those who had qualified more than 15 years before the survey the score went down to 2.1. Their conclusion was that "the statistical knowledge of most doctors was so limited they cannot be expected to draw the right conclusions from those statistical analyses which are found in papers in medical journals."

References


1. Chambers R. Rural Development: Putting the Last First. Harlow, UK: Longman Press, 1983.

2. Scrimshaw SCM, Hurtado E. Rapid Assessment Procedures for Nutrition and Primary Health Care. Anthropological Approaches to Improving Programme Effectiveness. Los Angeles, CA: UCLA Latin American Center, 1987.

3. A UNICEF Guide for Monitoring and Evaluation. Making a Difference? New York: UNICEF, 1991.

4. Cernea MM. Re-tooling in Applied Social Investigation for Development Planning: Some Methodological Issues. In: Scrimshaw NS, Gleason GR, eds. Rapid Assessment Procedures: Qualitative Methodologies for Planning and Evaluation of Health Related Programmes. Boston, MA: International Nutrition Foundation for Developing Countries (INFDC), 1992: 11-24.

5. Anderson N. et al. The Use of Community-based Data in Health Planning in Mexico and Central America. Health Pol Plan 1989; 4(3): 197-206.

COMMENT:

In UNICEF, RAP is being used to assess whether growth monitoring is working. Studies have now been done in eight countries.

COMMENT:

If a number of well trained and informed people interview and then come back and discuss, you can get some reasonable information. But we need to continue to try to improve this type of work.

COMMENT:

At INCAP, one methodological element used effectively that may complement and add to RAP is "causal analysis." There is a manual published by WHO on this method.

COMMENT:

Operations research is more specific than RAP in that it is used primarily to test interventions. Operations researchers do not usually use an anthropological approach, but there is some overlap.


33. Institutionalizing the use of rapid assessment procedures in rural service agencies


Constraints to the institutionalization of rapid assessment procedures
The need for support from senior management
Steps required to alleviate these constraints
Conclusion
The role of development agencies
References


By Josette Murphy

Josette Murphy is Senior Monitoring and Evaluation Specialist, Africa Region Technical Department at the World Bank, Washington, DC.

This paper provides a perceptive and prescriptive discussion of RAP from a donor perspective. While RAP is described as an acceptable addition to the "tool kit" of data gathering methods used and endorsed by the World Bank, these methods remain outside the mainstream approaches of many of the Bank's economics-oriented staff. Based on the experience of bringing a new methodology into place within such an institution, the author argues strongly that high level "decision makers" in developing countries must be oriented toward the value of various data gathering tools, including RAP, and that such orientation should occur at an early stage in the programme planning process. The importance of choosing a method that fits the often inflexible "timing" requirements of decision making in planning and resource allocation processes is stressed. This and other papers appear to indicate that, both inside the World Bank and within several of the planning processes that it supports in developing countries, these methods are becoming more common. Eds.

WHILE GREAT STRIDES have been made in refining rapid assessment procedures, these are still used mostly by social scientists in the course of research or special studies. The purpose of this paper is to emphasize that implementing agencies (such as health, agricultural extension, and other government or non-governmental agencies that provide various services to the rural population) will need to integrate rapid assessment procedures into their normal diagnostic, monitoring, and evaluation activities before sustainable, participatory development can occur. This is likely to require profound changes in the concepts and procedures upon which these agencies operate. Development and research institutions should work with the implementing agencies to promote and facilitate this evolution.

Constraints to the institutionalization of rapid assessment procedures


The introduction of rapid assessment methodologies into the work habits of rural service agencies is likely to face four categories of constraints: the usual ones associated with any effort to encourage the circulation of information, those that face any effort to facilitate participatory development, planning and implementation mechanisms that hinder flexibility, and personnel supervision and reward systems that discourage initiative.

1. The use of rapid assessment procedures faces the same general constraint of mistrust and over-bureaucratization as any other effort to encourage a flow of information between hierarchical layers and across departments. In agencies where the exchange of data is considered a potential threat to management or a value judgement on the performance of individual staff members, rather than as constructive feedback on implementation, any attempt to collect and disseminate information will encounter difficulties, whatever the proposed methodology.

2. Methods that elicit opinions, facts and desires from the rural population may entail a sometimes radical shift in institutional culture, especially in how managers and staff view their position in relation to their clients. Activities such as diagnostic studies, rapid assessments of existing practices, open-ended interviews, and group discussions of indigenous knowledge and beliefs, all implicitly recognize that the rural people hold knowledge and information of value to the agency that is providing services to them. The converse is also true: the implication is that the highly educated professionals and managers of these agencies do not know all the answers. Indeed, the problem may be that they have not yet found out what the real questions are. This de facto bottom-up flow of information involves a level of participation of the people concerned that goes against established patterns in centralized agencies.

3. The very purpose for which these methods are used entails some radical changes in the way programmes are planned, implemented, and evaluated. An attempt is made to shift from a rigid set-up, in which objectives and strategies are formulated at the central level together with a detailed "blueprint" work programme to be followed with little modification, to an iterative learning process, in which diverse methods are used to identify what the people are doing, why, and how they behave in response to the services made available to them. This implies the acceptance, before the fact, that the work programme, the implementation strategy, and indeed the objectives themselves, may need to be revised on the basis of experience.

Such a process calls for a degree of flexibility in planning and management unlikely to exist in highly centralized, top-down management structures, and is difficult to put into effect in agencies with uncertain resources and unreliable communications. Indeed, it may be difficult to accept even for the very development agencies recommending the use of rapid assessment methods.

4. Flexibility and iteration in planning require changes in staff supervision and performance evaluation. Obedient implementation of a programme as authorized becomes less important than good observation skills, the ability to bring to light the reasons behind people's behaviour, and the capacity to question whether the logic behind the programme design, calendar, and strategies remains valid. This is difficult to achieve in highly centralized management systems, and may require a level of planning and managerial skills not always readily available in rural service agencies.



The need for support from senior management


A key lesson from World Bank experience is that such shifts cannot take place without the full understanding and active support of senior management in implementing agencies over a long period of time. Because of its fundamental concern that all investment projects contribute to the long-term improvement of management practices, the Bank considers that the measures to strengthen the capacity of the implementing institutions to monitor and evaluate their own activities should be planned for all Bank loans.

The Bank, as well as many other development institutions, emphasizes that a good monitoring system cannot be limited to financial and physical data on programme implementation. It must cover quantitative and qualitative evidence of the awareness and use of the available services by the intended beneficiaries and other rural people, together with selective evaluations of the resulting changes in productivity, income, or health status.

Managers and technical directors need answers on a regular basis to such key questions as:

1. Are the people aware of the availability of the service?
2. Do they take advantage of it?
3. Who does and who does not, and how does this compare with the intended beneficiaries?
4. What are the reasons (social, economic, technical, environmental or other) that explain their behaviour?
5. What does all of this mean for our programme?

However, to obtain and utilize such information on a periodic basis, and whenever necessary to undertake a diagnostic study or a beneficiary assessment quickly, the managers and technical staff in service agencies need to be familiar with qualitative, explanatory methodologies. Answers to the type of questions outlined above cannot be provided through rigid surveys, but may require the use of open-ended interviews, group and community interviews, systematic assessments of existing knowledge, attitudes and practices, and beneficiary assessments through participant observation [1].

In the long term, data collection activities that answer such questions encourage an increase in the people's involvement in programme implementation, monitoring and evaluation. The same effort is also necessary to encourage their increased participation in programme identification and design.

Steps required to alleviate these constraints


1. The Bank has found that the first stage in building up the capacity of a service agency to make good use of whatever rapid assessment methods may be appropriate to its information needs is to work with its senior managers and technical managers, to ensure that they share the same understanding of information as a constructive management tool and not as a threat or a judgement on individual performance.

2. The next step is to help these managers identify their information needs for programme planning and design, or for monitoring implementation progress and the people's utilization of the services provided. At that time, they should be introduced to the key methodologies that may be used to fill these needs quickly and efficiently.

3. It is only after this common concept is established, and a demand for constructive information has been created, that it becomes worthwhile to provide in-depth methodological training to technical and para-professional staff.

The Africa Region of the Bank is making a major effort to help agricultural extension agencies strengthen their capacity to provide services appropriate to farmers' needs in an efficient manner. As part of this effort, the Bank co-sponsors, with groups of borrowers from neighbouring countries, a series of two workshops one year apart, in which each participating agency is represented by a team including its most senior extension officials as well as the person(s) in charge of coordinating monitoring and evaluation activities for the agency [2, 3]. In parallel with these "training" activities, we have found that our own project officers must share the same understanding and show interest in and support of this effort throughout project preparation, appraisal, supervision and evaluation. We are therefore providing training and ongoing technical advice to our own staff along these lines.

Conclusion


The integration of rapid assessment procedures into the "toolkit" used by rural service agencies will be a difficult, slow process, but it is essential to sustainable development. It is a timely effort that complements the increased attention given to popular participation.

The role of development agencies


Development agencies can play an important role in reaching this objective through their own actions, by showing that they give weight to the people's behaviour and rationale, and therefore to methodologies that provide a better understanding of human behaviour [4]. To succeed, they should not limit their role to providing technical support to practitioners, but should also promote a better understanding among rural service managers of the benefits their own agencies would derive from the systematic utilization of rapid assessment procedures on a sustainable basis.

