

Audiences for training


Some examples of communication research training for four key audiences are given below. The examples given are not exclusive to any group. With adaptation, they are appropriate for all groups.

Policy-makers

For top decision-makers, training makes explicit the types of data, the limitations on their use, their form and presentation context, and the key factors that affect their trustworthiness.

In general, few policy-makers are trained in social research and few care about the details of research. However, we have found it useful to counsel policy-makers on the limitations of data from different research methods. We classify research methods by the extent to which their findings support conclusions that describe or explain a total population.

This is done by contrasting methods on two dimensions: their data power and their data reach.

DATA POWER

Describe: Less power. To describe is to tell what exists, what has happened, or what is happening in terms of size, number, frequency, direction, or type of behaviour.

Explain: Greater power. To explain human behaviour is to tell "why" something happens or "how" it is caused - e.g., why some people act one way and others act another way. Information that explains people's behaviour is more difficult, time-consuming, and costly to get. Explaining behaviour means telling why behaviour is different or why it changes under different circumstances. This usually involves measuring the same or similar people at different points in time and under different conditions - e.g., before, during, and after a health intervention.

DATA REACH

Extrapolate: This is an inference about the total population that is (a) based on evidence from a sub-group that does not necessarily represent the population; or (b) based on evidence from a different population. So, an extrapolation is a conclusion (without evidence) about people and conditions that may be different from those studied.

Generalize: A generalization is an inference about the total population based on studying only a part of it. The evidence is from a sample - a small-scale replica of the larger population. So, a generalization is a conclusion (with evidence) about people and conditions similar to those studied, but which were not necessarily studied themselves.

These four information objectives give the following scheme for classifying RAP and other research methods:


             EXTRAPOLATE                      GENERALIZE

DESCRIBE     What behaviour exists            What behaviour exists
             in a subgroup                    in the population

EXPLAIN      What causes behaviour            What causes behaviour
             in a subgroup                    in the population

The top of the schematic distinguishes methods that are known (generalize) or not known (extrapolate) to represent a larger population. The left side of the schematic distinguishes methods that do (explain) or do not (describe) measure changes in people's behaviour over time.
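Before contrasting specific methods, the scheme can be restated compactly. The following is a minimal sketch in Python (purely illustrative; the names and labels are our own restatement of the schematic, not part of any published tool):

    # The power/reach scheme as a simple lookup: given a method's data
    # power (describe or explain) and data reach (extrapolate or
    # generalize), return the question it can answer.
    QUESTIONS = {
        ("describe", "extrapolate"): "What behaviour exists in a subgroup",
        ("describe", "generalize"): "What behaviour exists in the population",
        ("explain", "extrapolate"): "What causes behaviour in a subgroup",
        ("explain", "generalize"): "What causes behaviour in the population",
    }

    def question_answered(power: str, reach: str) -> str:
        return QUESTIONS[(power.lower(), reach.lower())]

    # Example: a focus group discussion describes and extrapolates.
    print(question_answered("describe", "extrapolate"))
    # What behaviour exists in a subgroup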

Thus, the next schematic contrasts methods by the types of data they produce:


                    METHODS USED TO EXTRAPOLATE    METHODS USED TO GENERALIZE

METHODS USED        Participant Observation        Census
TO DESCRIBE         Non-Participant Observation    One-Time Survey
                    Focus Group Discussion         Registration Systems
                    Depth Interview                Aggregate Data
                    Surrogates                     Content Analysis
                    Informants
                    Expert Panels
                    Projective Techniques
                    Non-Probability Survey
                    Pre-Testing
                    Specialty Surveys

METHODS USED        Critical Incidents             Controlled Field Experiments
TO EXPLAIN          Case Studies                   Multi-Time Surveys
                    Pilots/Demonstrations          - Gross-Change
                    Quasi-Experiments              - Net Change/Panel
                    - Interrupted Time Series      Simulations
                    - Non-Comparable Groups        Physical Lab Experiment
                    Human Lab Experiments


METHODS THAT EXTRAPOLATE DESCRIPTIONS (top left): These studies describe an unrepresentative part (sub-group) of the population. They are called "microstudies," "community studies," and "small-group studies." Many of these are RAP studies.

They tend to be unique to the researcher, impressionistic, uncontrolled, and only weakly quantitative. However, they can be unobtrusive and natural approaches to a problem, which they can probe in depth to produce data of high validity. They are used when we want "to know a lot about a little" - developing ideas for further, larger-scale study; knowing the cultural context for behaviour; probing hidden meanings; revealing broad attitudes or detailing specific behavioural nuances.

METHODS THAT GENERALIZE DESCRIPTIONS (top right): These methods describe a whole population based on evidence from all of the people (census) or from a group that represents all of them (sample). The "sample survey" is the most commonly used of such methods. However, it is important that the line between "description" and "generalization" not be drawn too thickly, as the two processes are often linked as "stages" or "phases" in an overall research design. For example, focus group discussions conducted with a small, unrepresentative sample are often used to develop an interview protocol, which is then used with a more representative group to obtain results that can be generalized to a population.

The sample survey is the prototype. It tends to produce superficial data, is obtrusive and unnatural (a brief, artificial exchange of information), structures most questioning, and is often based on people's self-reporting. However, the survey can reach many people in many places with many questions on many topics in a relatively brief time and at low cost per unit of reliable information. And, because of its standardization, the survey can be replicated. Such methods (including content analysis and registration systems) are used when we want "timely (not rapid) generalizations." Qualitative methods, such as the focus group and various forms of observation, often use survey data as a base from which to seek greater detail and greater depth into the "whys" behind the survey results.

METHODS THAT EXTRAPOLATE EXPLANATIONS (bottom left): These methods are used to explain cause-and-effect relationships within sub-groups. They do not generalize to larger populations, but are unique sub-group studies that are used when we want to know what causes behaviour in a subgroup, or when we want to know "a lot about why some people change as they do." Thus, they are best used when the focus is on change in a well-defined, intact sub-group, when time is given to in-depth study, and when equivalent groups can be compared for changes.

They are usually intensive, on-site studies of change. The case study is the prototype: a witness to conditions before, during, and after a health intervention. Case studies often use comparison groups to see differences when a health intervention is introduced to one group and not to another. They are often impressionistic and subjective, and the presence of the researcher(s) may cause effects. However, they tend more toward immersion into, than invasion of, the study culture, thus providing valuable insight. They often combine formal and informal, individual and group methods. And they are useful for training.

METHODS THAT GENERALIZE EXPLANATIONS (bottom right): In theory, these are the most powerful methods for explaining behaviour ("What causes what, and why?") in a large population. They are like physical-science laboratory experiments (with experimental and control groups) adapted to social settings.

In practice, they have been the most disappointing forms of development research. They usually involve heavy investments of time, money, and other resources. But we use them when we want to know what causes behaviour in the population; or when we want to know "the conditions under which many people change."

The controlled field experiment is the prototype. It is vulnerable to unknown, uncontrolled forces of "contamination." It assumes that we can control reality in a very disorderly world, and, thus, requires an exceptionally strong theoretical and empirical base. Such studies require great rigor - sometimes seizing control of the real world with unrealistic assumptions that prevent natural events from influencing the study groups. However, on the positive side, measurement is controlled and can be generalized, producing our most powerful quantitative conclusions about which causes produce which effects.

Training in research communication


Managers

For programme managers, training helps them to better plan, negotiate, and monitor studies; to improve their ability to analyze data; and to prepare more persuasive, prescriptive reports and presentations. Like policy-makers, managers also need training in data limitations. But a key area in which managers particularly need training - and one to which they respond readily - is in analysis of programming opportunities and constraints.

Researchers

For researchers, training gives a framework for planning and applying communication principles and dissemination techniques that brings communication into the entire process of conducting a study and encourages audiences to use data rather than leaving them on the bookshelf.

Often, social science researchers are the most resistant audience to the plea that they make their data more understandable and usable for the lay person. Research communication frequently serves as a "translator" of the technical to the lay. Unhappily, some researchers seem to need to wrap themselves in a cloak of mystique, preferring appearance to acceptance.

So long as the written document remains the dominant mode of research reporting, training for researchers will centre on five assumptions for improving the usefulness of technical information:

IMPLICATIONS. In a 1988 evaluation of 44 UNICEF survey reports, we found that only 3 percent of all text was devoted to implications and recommendations (and most reports used these two terms interchangeably). The other 97 percent was given over to details of the survey methodology, recitation of findings, and statistical tables. UNICEF-sponsored research reports are no better and no worse than other sponsored research reports.4

In contracting studies, we encourage policy-makers and managers to write into the contract the requirement that the final report must be submitted in the form of a set of implications for actions to take. This requires that the report be organized around the action implications rather than around the methodology or the findings. Decision-makers are more interested in "what to do" than in "what was found," although they need to know enough of the latter to trust the former.

Organizing reports around the implications does not obscure findings, but subordinates them to the actions they suggest. For credibility, enough findings are reported with each implication to justify it as the "voice of the people" rather than our personal opinion.

Recalling the "Contagion Route" in Afghanistan, here is a fictional example contrasting, first, a traditional research report's table of contents with, second, a research communicator's report:

An example of an outline from a traditional research report might look as follows in Table 1:

Table 1. Infant Mortality in Afghanistan

EXECUTIVE SUMMARY

Chapter:

1. Background and Purpose
2. History of Water Interventions
3. Methodology
4. Findings at the Well
5. Findings between the Well and Home
6. Findings in the Home
7. Conclusions and Implications.

An outline from a research report following research communication principles would be different, as seen in Table 2:

Table 2. Potable Water does not Save Afghan Infants: Following the "Contagion Route" from the Well to the Child

METHODS SUMMARY: Time, Place, Population, Method, Sponsor, Error

EXECUTIVE SUMMARY

Chapter:

1. Mothers are Unwitting Disease Carriers

• Summary of Implications
• Findings
• Conclusions
• What to Do

2. Mothers' Practices Improve With Role-Playing

• Summary of Implications
• Findings
• Conclusions
• What to Do

3. Designing a Motivation Programme That Works

• Summary of Implications
• Findings
• Conclusions
• What to Do

4. Ensuring Safety from the Well to Well-being

• Summary of Implications
• Findings
• Conclusions
• What to Do

In the traditional report, the Executive Summary is almost always a summary of findings. In the second example, the Executive Summary is a summary of step-by-step implications. Moreover, each chapter is headed by a 1-2 page Summary of Implications of what actions to take. The title, "Potable Water Does Not Save Afghan Infants," tries to make the point that titles are not unlike "headlines." They need not be biased but they still generate interest. Just as headlines sell newspapers, headings can "sell" reports.

JOURNALISTIC. The outline above suggests a second rule of research communication: write what ordinary people can understand ("If the Smiths can get it, the Smythes will get it, too."). In other words, write journalistically, in "popular" language. For some researchers, this rule is a threatening "vulgarization" of their argot. But not only do we want lay people to understand what we are suggesting they do; adaptation of the research data is also much easier as our presentations go from audience to audience, from policy-makers to managers to villagers. Also, people learn better by building on what they already know than by having new information (what we technicians know) added to what they know. Simple, clear writing aids the knowledge-building process.

Moreover, rather than burdening and lecturing the reader on the nuances of research methodology, we detail our methods in an attachment and usually summarize the methodology in a small box on page 1 of the report. The box answers the reader's natural questions: when the study was done, where, of which people, by what method, who was the sponsor (implicitly, a question of credibility), and how trustworthy the findings are - e.g., a plus or minus 4 percent error range due to sampling.
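For readers unfamiliar with sampling error, the error range in such a box comes from a standard calculation. The following is a minimal sketch in Python, assuming a simple random sample and a 95 percent confidence level; the sample size of 600 is hypothetical, chosen only because it yields roughly a plus or minus 4 percent range:

    import math

    # Margin of error for a proportion estimated from a simple random
    # sample, at the 95 percent confidence level (z = 1.96).
    # Illustrative only: the sample size used below is hypothetical.
    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        """Half-width of the confidence interval for a proportion p
        estimated from a simple random sample of size n."""
        return z * math.sqrt(p * (1.0 - p) / n)

    # With n = 600 respondents and the most conservative p = 0.5,
    # the margin is about 0.04 - the "plus or minus 4 percent" above.
    print(round(margin_of_error(600), 3))  # 0.04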

THEMATIC. The report example above also suggests a third rule: write the report thematically, more like a composition than a dictionary. People think in related themes, not necessarily in the sequence of design, implementation, and analysis of a study. This means that the data are presented in the sequence of actions to take, not reported in the same sequence as they were found (i.e., the sequence of questionnaire items). Action implications are better understood if they are reported in the Step 1, Step 2, Step 3 ... series in which they would be carried out.

VISUALS/LEARNING AIDS. Another finding of the UNICEF evaluation noted above was that only three of every ten study reports used visual aids (graphs, maps, photos, schematics, illustrations, etc.) to help the reader understand the data.5 In this age of dynamic, explicit computer graphics, text-only reporting is unacceptable if our purpose is to be understood. Data alone are rather sterile. Social scientists, however, are responsible for very few health programme decisions about objectives, budgets, time schedules, administration, logistics, information systems, and other aspects of programming. As policy-makers and managers are our key audiences (at least initially), we must take pains to ensure their understanding. Graphically, visuals help understanding, if only because of the relief they give the reader. Verbally, anecdotes and verbatim quotes also aid understanding by enlivening and enriching the data. People learn better by examples than by abstractions.

Another aid to easy reading is the use of colour. If coloured graphics are too expensive, coloured pages are not. The Executive Summary is more distinctive if it is printed on coloured pages (say, blue). Additionally, each chapter's Summary of Implications should be on coloured paper (say, yellow) to distinguish it from the white-page chapter text.

PRODUCT LINE. Another contract stipulation we urge upon policy-makers is that all research studies be reported in six separate documents, which add no real burden to the researcher:

Briefing Paper: 1-2 page summary of implications and key supporting findings for top policy-makers - ministers, permanent secretaries. Give enough information to enable them to address national and international leaders and to instruct staff to follow up, if interest is provoked.

Research Note: 3-5 page expanded summary of implications and key supporting findings for health and other (non-health) programme managers. Give enough information to alert them to the study, its implications, and availability. And also indicate inter-sectoral actions needed (e.g., MOH, MOPW, MOI).

Research Memo: 4-7 page summary for health managers and practitioners rounding out the implications and supporting findings, providing enough information for on-site programme changes.

Talking Points: 1-3 page list of implications and key supporting findings for leaders and practitioners at any level to address a lay audience.

Report: Unlimited pages, full report with all details. Mainly for reference material and for distribution to a very select audience of health researchers.

Press Release: 2-4 page summary of implications and key supporting findings, written thematically and with enough information on the methods used to enable the reporter/commentator to competently describe the study without requiring other information.

Communication Professionals

For specialists planning the communication of study data, training is needed regarding problems of programme policy, planning, administration, logistics, field management, production and distribution, target populations, and research and evaluation objectives and measurements. Familiarity with programming is essential to effective communication of its success or failure.

For research purists, it is probably a bit galling to say that the effective communication of research information is a marketing problem. However, like a commercial product, data have to attract and hold attention before they are "bought" by the policy-maker. And they have to prove useful ("customer satisfaction") to be re-used. The following are a few examples of how research data have been effectively and attractively packaged to grab and hold high officials' attention:

CAMCORDERS. The USAID-funded Learning Technologies for Basic Education Project (LearnTech) brings innovative technologies for teaching and learning to developing countries.6 Interactive Radio Instruction (IRI) is the most widely used and most fully documented.6 While the effectiveness of IRI is clear, its image is blurred in the eyes of the international donor community, despite numerous evaluations and proven effectiveness. To overcome donor biases against radio instruction and to dramatize the IRI difference, the LearnTech project director took his camcorder unannounced into a Bolivian classroom and recorded the lively, spontaneous, cheerful interactions between the students and the radio "teacher." The 10-minute "Fun With Numbers" video leaves no doubt of the difference and the impact of IRI on student and teacher motivation, interest, and learning. Without a single statistic, the video evaluation "proves" the effectiveness of the technology.

FILM AND PHOTOS. Years ago in India, the UNICEF Project Support Communication (PSC) unit and the Regional Planning Service effectively dramatized the breakdown of health service delivery to rural villages. One project was documented by film and another by photographs.

Film: Survey interviews with Block officials and with villagers found great disparities in perceptions of the quality, coverage, and use of public health clinic services available to the villages. Block health officials generally saw clinic services as highly efficient, effective, and widely accepted and used by their rural clients. Those mothers who used the clinics often had bitter complaints about staff and services. Many more did not use the clinics than did. Moreover, ignorance of services was high and perceptions tended to be negative. To document the differences in perceptions, the PSC team interviewed several officials and several mothers, recording on film their answers to the same questions. Then, they produced a "split screen" film that showed the face of one official and of one mother at the same time. The face of the mother was frozen on one side of the film while the official answered a question about, say, mothers' use of the clinics. Then, the face of the official was frozen on the other side as the mother answered the same question - usually with a widely discrepant viewpoint. The voice-over gave the different percentages of answers among all officials and all mothers for each question. The differences in perceptions could never have been as compelling in a written research report.

Photographs: In another study, a physical, on-site inventory of rural clinics found great differences between the quantity and quality of medical staff, supplies, and equipment that were supposed to be in place and those that were actually in place. The PSC team created a photo album juxtaposing the ideal and the real. Thus, one page showed a photo of, for example, the intended number of clinic staff and the opposite page was a photo of the actual number of doctors, nurses, technicians, and support staff. Each photo was captioned with the percentage of actual conditions found: percentage of nonworking refrigerators, disabled vehicles, depleted supply cabinets, empty or dated compound jars, non-sterilized needles, and the like. Although not animated, the differences between the standard of services intended and the standard of services realized were quickly and explicitly understood by MOH policy-makers.

Although video, film, and photographs amount to no more than anecdotal data, their power can be so compelling, so complete, that they tend to produce generalizations. Where data can produce generalizations, the use of brief captions with voice-over or photos is much more effective than lengthy text in a report.

Training in source and audience segmentation


Research communication includes training in several other techniques, showing how to improve the supply of and demand for better data for decision-making. In particular, two key needs for communication research training are source and audience segmentation and use of the communication planning overlays.

SOURCE SEGMENTATION. Overlay 2 identifies the primary and secondary sources of information for both data collection and for discussions of the theory and methodology of the intended study. Source segmentation may include public and private, non-profit and for-profit, institutions and individuals who can provide social, economic, geographic, environmental, medical, demographic, and other qualitative and quantitative information in the form of articles, reports, management records, registration systems, maps, statistics, photographs, computer graphics, and other written and visual sources. Additionally, other information may be gathered personally from individual interviews, groups, experts, and other informants.

AUDIENCE SEGMENTATION. Overlay 3 identifies the primary and secondary audiences who might benefit from the information, its analysis, and its interpretation. For example, villagers or others who are being studied may well be chosen as a relevant audience for much of the information learned. So might service providers, programme managers at various levels, and policy-makers and advocates, including donors. Other audience segments include educators and researchers - those who develop, choose, and work with various methodologies of data gathering and analysis. It is in terms of these audiences that the "product line" described previously is developed.

