
The questionnaire syndrome

Although we could begin our discussion with many methods, I have selected the problem of the questionnaire survey as a way to address the broader issue of the need for new thinking about research among the rural poor. I do not aim to discredit the questionnaire entirely (as you will note, I see a role for the sensible, focused questionnaire) but rather to illustrate how research tools can stray from their original purpose and become ends in themselves rather than simply tools to help us understand farm households and agrarian systems.

Ironically, my argument in Mexico ten years ago for more flexible and even qualitative methods was based not on anthropology (in which I received a PhD), but on quantitative sociology (in which I received my M.S. several years earlier). Little did I realize then (but have come to understand since) that, generally speaking, agronomists, economists and agricultural scientists, irrespective of discipline, are not professionally trained in the development and use of the questionnaire4. If they had been, I do not think that we would have gone as far on the questionnaire bandwagon as we have over the past few years.

Sociology, among all disciplines, is most responsible for the development of the questionnaire survey. Positivism, the belief that human behaviour and institutions can be predicted and quantified as in the natural sciences, has been the dominant school of thought since the 19th century, when the French philosopher Auguste Comte promoted his "science of society". This orientation, along with Emile Durkheim's notion of the "social fact", which he argued can be measured and counted like physical phenomena, gave birth to the empirical school of western sociology and its agricultural counterpart, rural sociology.

The quantitative sociologist's primary tool for testing hypotheses about social behaviour is the questionnaire, which is constructed to contain dependent and independent variables. Strict statistical analysis is applied to the results to determine whether pre-formulated hypotheses are rejected or accepted. Random sampling and minimization of any bias serve to maintain scientific respectability in the eyes of scientific peer groups. Many sociological surveys aim to uncover the relationship between a social behaviour and levels of education, income, social class, ethnic identity, family size, etc. In agricultural research, this kind of questionnaire reached the zenith of its popularity with the "diffusion of innovations" school of Everett Rogers (1963), which aimed to correlate technology adoption rates with everything from kinds of magazines read to social status to number of in-house toilets. During this period, many students from developing countries were taught this questionnaire method either by visiting foreign professors or by their own national professors returning from places like Cornell University, Iowa State, and other American Land Grant Schools5.

For individuals who went on to become professional sociologists, however, experience taught that any questionnaire had its strengths and weaknesses. Its strengths were that (1) large numbers of people could be surveyed; (2) statistical analysis, random sampling, and levels of confidence could be achieved with quantified data; and (3) generalization was possible. At the same time, important weaknesses were stressed: (1) what people say is not what people do; (2) the results are culture- and time-bound; (3) the context of an activity is not revealed; (4) the person asking the questions introduces a bias, since deferential or untrue answers may be given; (5) sampling is a tricky business, particularly in places where people do not have phones or mail boxes or do not live on linear streets; and (6) analysis takes time, especially if done by hand, and even when computers are available there is no guarantee of speedy results.

The truth of the matter, however, is that very few Third World agricultural and rural development workers or scientists became a tropical Talcott Parsons. The problems surrounding questionnaires were quickly forgotten, and the elaborate questionnaire has hung around to plague rural projects and people for more than 30 years. By the early 1980s, using the questionnaire had developed into an industry.

What had been forgotten is that the questionnaire technique was borrowed directly from academia, where the objective was to empirically test hypotheses in order to write a graduate thesis or publish in an academic journal. The manner in which the questionnaire was formulated did not allow the instrument to discover systems, explore cognitive perceptions, or help identify social and even technological problems. Least of all did the questionnaire solicit the input and participation of farmers or "users" in exploring relevant questions. Instead, questions were formulated largely outside the farming community and were therefore constructed in a language difficult for farmers to understand.

Another motivation to use the questionnaire, the pressure to produce "numbers," further subverted the need to "understand" in favour of the need to "impress" funders and scientific colleagues. Furthermore, for many rural development researchers a guiding principle dominated action: when in doubt about what to do first, "administer" a questionnaire. Who was it that pointed out that medical doctors also "administer" pills to their ill patients (when they do not know what else to do)? The questionnaire was used much like a small child's comfort blanket.

I would like to offer a few frustrating examples of my experience with the questionnaire during my work with the International Potato Center (CIP), examples which helped reshape my thinking and support my belief in the need for more diversified methods.

Case 1: Limitations of revealing relations (Philippine storage case)

A CIP post-harvest technologist (Robert Booth) and I were working with Philippine potato workers to generate a new method of low-cost seed potato storage on the Mountain Trail (Benguet). When it became obvious that adoption of the "technology" was taking place, the team traveled up the trail informally documenting the acceptance process. We took field notes, drew maps of diffusion routes, listened to key informants, and let farmer groups tell us what was happening. Yet the team had the irritating sense that no one would believe us unless we assigned "numbers" to the adopters and non-adopters and statistically correlated these with other factors. The job of constructing the questionnaire fell to me as the team's social scientist. Based on my prior sociological training, I constructed a seven-page questionnaire which began by asking the farmer a series of questions about (1) education; (2) size of farm; (3) crops grown; (4) family size; (5) distance from the main road; (6) marketing practices; and (7) income. These questionnaires were then given to enumerators we had quickly trained. The questionnaire was "administered" to 120 farmers and the results sent to Lima, Peru, where they were tabulated. The number of adopters was high, but nothing else correlated. While the fact that a CIP-introduced potato technology was being adopted by farmers was received with enthusiasm by CIP and the Philippine national programme, our team was disturbed that the questionnaire did not seem to reveal new information other than the number of adopters.

Subsequent follow-up by the CIP-Philippine team revealed why the questionnaire was not the only tool we needed, especially for guiding future research. Among the reasons: (1) the enumerators felt awkward filling out the difficult, sensitive (and perhaps irrelevant) questions, so they often simply guessed and jotted in their best estimate; (2) the questionnaire did not get at locally sensitive questions related to power, including control of seed and inputs; and (3) the questionnaire channeled information gathering along basically disjointed lines of reasoning and not toward the complexity of the highland agrarian systems in which the potato producers were enmeshed. In short, the questionnaire did not tell us why or how adoption was occurring. Even cost-benefit analysis of on-farm trials conducted later did not reveal any clear-cut monetary profits from adoption.

The storage team soon came under accusation from critical colleagues that the adoption by farmers might be a fabrication by overzealous post-harvest technologists and their team social scientist. The answer to many of our questions, however, finally came to Booth and me during our next visit to the Mountain Trail, after a rather long social occasion of drinking San Miguel beer, eating dog meat, and befriending mountain farmers. I will never forget the revealing response of one farmer, which hit Booth and me like a bolt of lightning.

"You really want to know why we are using the storage method? I will tell you. We always have a problem getting good seed. Seed is the key to profitable production here on the Mountain Trail. For years we have depended for storing our seed on the Chinese-controlled cold stores in Manila, run by the same people who supply us inputs, especially chicken manure. The cost of taking our seed to Manila each year is high, and on numerous occasions, due to power failure and other problems, we have lost whole lots of seed. Whatever happens, the cost of moving it back and forth is high. Your method allows us to store seed here on our farms and gives us independence. But this is not something I am going to tell you and have you write down with my name on a piece of paper."

At that moment, a complex system of power relations, credit, inputs, and dependency was suddenly revealed in less than 30 seconds. A formal, predetermined questionnaire several pages in length would never have reached the heart of these important and sensitive relationships.

Case 2: The leap frog problem (Mantaro Valley Project, Peru)

This case illustrates the potential quicksand of large-scale surveys, however well constructed the questionnaire. An agro-economic team set out to (1) determine constraints to potato production in a highland valley and (2) address these constraints by testing potential technologies in on-farm trials. A detailed questionnaire was first administered by the agro-economic team to selected farmers in three agro-ecological zones previously determined through an informal anthropological survey. The team was trained and systematic data were gathered. The primary problem, however, arose through what I call the "leap-frog syndrome." The project was programmed over a three-year period, involving three growing/harvest periods [2]. This implied that the questionnaire had to be executed, the results analyzed, and conclusions drawn at the end of each season in order to plan in time for the next round of on-farm trials. In theory, this is possible. However, in a developing country where transportation is problematic, computing capacity slow, team coordination a monumental task, and planning difficult, theory and praxis are far apart. Planting times cannot wait for rigid sequential steps involving (1) an informal survey to design (2) a questionnaire to plan for (3) on-farm trials, unless each step is executed at breakneck speed. In this case, due to the size and complexity of the survey, there was time only to partially digest its findings before planning the next on-farm trials. Hence, a leap of faith was required by the team at each step to link premature and unanalyzed survey findings with the technologies to be tested.

This experience illustrates that methods must be modified to become servants of the whole research and development process. As I will shortly argue, questionnaires can be valuable for applied rural development if they are (1) seen as only one research tool among many and (2) recognized as being only as valuable as the results that can be applied practically and in time. In many countries, particularly in Asia, there is a "knee-jerk" tendency to use a standard questionnaire dreamed up, or copied from another questionnaire, by someone who sits in an office. These questionnaires generally require long interview times (over an hour) and frequently concentrate on Rogerian sociological data - schooling, religion, position in farmers' organizations, etc. The way the questions are written (often read directly to farmers) is not appropriate for small-scale, subsistence producers, who use a different logic for a given practice. Ken McKay (IDRC), in a recent trip report after seeing a questionnaire to be given to tribal sweet potato farmers on the Philippine Mountain Trail, observed:

"Some questions focus on chemical fertilizers, but subsistence farmers do not use them; they have, instead, elaborate fertilization techniques with organic methods. Yields are difficult to estimate since the questionnaire asks for hectares while farmers use local measurements."

The language of the questionnaire that McKay (personal communication) criticized was phrased in the language of scientists, not the language of farmers, and therefore did not lead to correct answers. Enumerators are under special pressure to bring back all blanks filled in, especially the numerical data, ready for the calculator or computer. In cases like this, interviewers often become impatient after the questionnaire session and secretly fill in the blanks with what they believe the survey coordinator wants to find. The whole exercise finally ends up becoming absurd.

Case 3: Eliciting quantified data without the questionnaire (Potatoes in Nepal)

In 1982, I participated in a review of the Nepal National Potato Programme. The review team decided to find out about farmers' use of potato varieties. The first impulse of the review team was to construct a questionnaire (with all the usual irrelevant factors and stilted questions). My team was assigned to Nepal's western region. After the first day of using the questionnaire, we felt it was not giving us systematic information and was awkward and embarrassing for farmers. Everyone was bored, especially the farmers.

After debating the problem, the team decided to abandon the questionnaire, go to the local market, and buy all the different potato varieties we could find. Local farmers added more to our sack of varieties. With our small collection we began to trek northward through the potato areas. At each village, or in farmers' fields, we would stop and pour out our potatoes on the ground. Farmers quickly gathered around to tell us about our collection. They immediately became enthusiastically involved and through discussions detailed the characteristics of each variety, arranging them by different local categories of appreciation: zone of production, disease resistance, culinary quality, place in the meal, and so on. Interestingly, after a few interviews with our potato samples, we began to see how quantitative data could be obtained about varieties. We built a matrix on the ground where each variety could easily be related, for example, to multiplication rates in farmer terms, uses, barter and sale, disease problems, marketability, and zones. Stones and sticks were used to indicate comparative values, or simply lines were drawn. We found that we could easily and quickly gather (or, more often, farmers would simply tell us) very specific information for each variety. Within a very short time, a tremendous amount of very useful quantified data were collected. The quality of these data was much higher than with a questionnaire using categories outside a farmer's reality [3].

Case 4: Meeting a specific need (Identifying fallow periods in Bhutan)

Agricultural researchers should not always "throw the baby out with the bathwater." The focused questionnaire, following on the heels of informal methods, can yield important quantified information needed by outsiders on crucial special topics. In Bhutan, after conducting an informal study of potato farming systems in the Bhumtang Valley at the request of the Bhutan government, our team found that Department of Agriculture officials had specific concerns that could only be answered (and believed) through a focused questionnaire. The Bhutanese government, at the highest levels, had become concerned about sustainability and the preservation of traditional agriculture. The specific worry was about fallow periods involving potatoes, a new commercial crop. A few years earlier a potato craze had hit many Bhutanese farmers in Bhumtang, who were attracted by the higher profits from this new cash crop. Farmers had been warned by extension workers about the need for proper crop rotation and about the dangers of disease build-up and of losing cropping diversity as a result of monocropping potatoes. There was no way to measure whether dangerous cropping patterns were emerging except to gather quantified data on each farm household and its cropping/rotation practices. The logical method to accomplish this was to develop a very specific one-page questionnaire, which allowed us to gain the information required within a few days. The one-page questionnaire was supplemented and improved by drawing farmers' fields with them on paper or on the ground and then discussing rotation patterns using the visual images. The results showed that farmers were indeed not rotating their fields properly and were probably headed toward fungal disease build-up and the collapse of the new-found potato economy. Based on the results of this focused questionnaire, corrective measures were taken by the Bhutanese government.

Keeping the questionnaire in perspective

The questionnaire survey is but one useful (but often overused and misused) tool for doing research among farmers. In fact, I also suggest that the "empirical, hard-data" questionnaire survey can be considered the easy method; the "informal, soft-data" approach the hard one.

I began this paper with reference to a conference in 1980. A decade later (1990), I moved to the Philippines and took up the challenge of establishing and coordinating an Asia-wide (and, we ultimately hope, world-wide) network on the "User's Perspective With Agricultural Research and Development" (called UPWARD for short). Just to get some projects started, I solicited proposals in several Asian countries and planted a bit of seed money. Within weeks, the questionnaire albatross came back to haunt me. Virtually every proposal I received contained a questionnaire of 15 pages or more (the tick-off kind). The more I pushed for participatory methods (informal interviewing, household observation, farmer diagrams, etc.), the more resistance I faced. Finally, in frustration, I aggressively cornered one project leader in the capital city and asked:

"Why do you insist on this tick-off questionnaire?"

"Because we have no time," was the reply, accompanied by nervous giggles.

"No time. What do you mean?" I shot back, thinking that my methods were the epitome of Rapid Rural Appraisal.

"What you are suggesting, Dr. Rhoades, is hard - informal surveys take time in the field talking to people, and it takes even more time to analyze the open-ended questions. What we want to do is give the questionnaire to our district staff in the field, let them get the data, and then deliver it to the computer person back here, who will analyze the data for us. And they cannot analyze anything but numbers."

So much for hard and soft science [4]! It is easy to set up scientific predetermined check boxes to be filled in and sent to someone in a computer lab who has no feel for the field reality. It's hard to go to the field and figure out the context and relationships that lead farm households to make decisions.

I close my case with the following question, which I recently found in a 15-page questionnaire appended to a report from a Himalayan country on a study of women's participation in agricultural extension activities6. It illustrates why questionnaires dreamed up in offices have little chance of succeeding.

What are the disadvantages of agricultural extension programs? (Please check the following:)

1. Disturbance in the social order.
2. Traditional varieties of crops may disappear.
3. Ecological balance may be disturbed due to use of pesticides, etc.
4. Disturbance in the soil structure.
5. Any other (specify).

The results were used to generate the following table (one of 135 tables generated in the study). The most striking feature of the table is that 55% of the men (sample of 48) and 38% of the women (sample of 55) felt that the major disadvantage of agricultural extension programmes was disturbance in the soil structure! "Any other" did not make it to the table, but who fills in the "other" line on questionnaires anyway? Why were the four listed answers chosen over others that would seem more logical, e.g., (1) agents sometimes know less than farmers; (2) extension programmes do not always have relevant technologies; (3) agents have no funds; (4) agents work only with wealthy farmers; (5) any other? Most of the other 135 tables provide similarly dubious information, while the conclusions at the front of the report are quite interesting. This leads me to wonder whether the conclusions were reached largely independently of the questionnaire itself. If so, this is not the first study to make a leap of faith and rely on good common sense derived from observation and informal discussions with farmers.

This example also illustrates what Chambers (personal communication) calls "investigator bias," which, once cross-checked, destroys all credibility of the questionnaire. When the limited responses a farmer can give are coupled with the well-known fact that rural people often give prudent and deferential replies, wild distortions of reality end up being reported as scientific fact. Chambers cites a survey by his colleagues at the Administrative Staff College in Hyderabad, India, in which only one farmer out of 272 admitted to cross ploughing. The research team, however, observed much cross ploughing while walking through the fields. In group interviews with farmers, the team learned that 29 percent said they cross ploughed.
