


The utility of clinic-based anthropometric data for early/timely warning systems: A case study from Niger - Nancy Mock, Mahmud Khan, David Mercer, Robert Magnani, Shawn Baker, and William Bertrand


Abstract

The purpose of this study was to investigate the utility of clinic-based anthropometric data for early/timely warning purposes as part of nutritional surveillance systems. The study combines the unique circumstances of the 1984-1985 Sahelian famine with retrospective time series of cereal prices, morbidity, and anthropometric data. Thus, the behaviour of clinic-based anthropometric data is compared with that of the other series and with the known evolution of the Sahelian famine. A conceptual framework for the use of clinic-based anthropometric data is developed, and time-series and regression techniques are used to explore the hypothesis that routinely collected anthropometric data can provide early indications of impending food emergencies. The clinic-based data performed well, identifying the famine several months in advance of official recognition of a problem and donor mobilization of resources. This suggests that properly analysed clinic data can be useful for the early detection of impending food crises, though further research will be needed to determine the general applicability of this finding. Several methodological issues related to the use of this type of data are explored.

Introduction

In recent years, development policy makers and planners have increasingly looked to food and nutrition information systems as important sources of information in the design and monitoring of interventions aimed at preventing and alleviating malnutrition [1]. An important function of such systems is the identification of rapid, short-term deteriorations in nutritional conditions. Nutritional stress brought about by disruptions in agricultural production, sharp rises in commodity prices, and/or sociopolitical instability may have a relatively rapid onset, resulting in widespread malnutrition and mortality in a period of just a few months. Information systems that focus on detecting these types of short-term problems are referred to as early or timely warning systems.

Despite the wide availability of clinic-based anthropometric data in virtually all developing countries, relatively little use has been made of such data in early/timely warning systems. For a variety of reasons reviewed in the next section of this paper, routine anthropometric data are widely believed to be inappropriate for these purposes.

This paper re-examines the debate over the utility of clinic-based anthropometric data for early/timely warning system purposes. Using time series of relevant indicators gathered in Niamey, Niger, during 1981-1986, we critically evaluate the hypothesis that simple, clinic-based nutrition status indicators, when properly summarized and interpreted, are sufficiently sensitive and specific to food stress situations to be useful as early warning indicators. While the data analysed are not unique, the historical period covered by the data is significant. This period was marked by a major disruption in food availability, the famine experienced by Niger and other Sahelian countries in 1984-1985, and thus provides a unique natural laboratory for the investigation of the responsiveness of anthropometric indicators to food stress situations.

We first review the major issues surrounding the use of routine anthropometric data for early warning purposes. The rationale for the use of routinely available data is proposed, and a set of methods for the use of this type of data are posited. Next, using clinic data collected in Niamey over a five-year period, the responsiveness of various nutrition status indicators to changes in other indicators of food access and health status is evaluated by multiple regression and time-series techniques. The potential utility of anthropometric indicators to predict food stress situations is assessed by comparing the indicator series developed with the known history and management of the 1984-1985 famine. We conclude with a discussion of the applicability and limitations of the methodology proposed, along with suggestions for further methodological development.

Conceptual and methodological issues

While anthropometric data are routinely collected and used by health facilities for the nutritional management of individual children, there is a lack of consensus in the food and nutrition technical community as to the utility of such data for early/timely warning system applications.

One argument commonly advanced against using clinic-based data for any population-monitoring purpose is that the patients/clients utilizing health clinic services are not representative of the general population. Resultant clinic-based estimates of community nutrition status are therefore biased, perhaps seriously so. While this is no doubt true in many contexts, it should be noted that the primary purpose of early/timely warning systems is not to monitor nutrition status levels in the general population, but rather to detect and facilitate the prevention of near-term food crises. Thus, an efficient strategy for information gathering would focus on those subpopulations that are the most sensitive or vulnerable to changes in food-system variables. To the extent that clinics serve these "at-risk" or "vulnerable" populations, clinic-based data are likely to detect changing conditions more quickly and efficiently than population-based approaches. In fact, a disproportionately high share of "at-risk" groups among clinic users makes the clinic data more, not less, useful for early warning purposes. Clinic data therefore need not be representative of community nutritional problems to serve early/timely warning purposes, although information on patterns of facility utilization would clearly be useful in interpreting the data. This concept is frequently used in epidemiological practice as part of a "sentinel surveillance" methodology.

A related criticism is that nutrition status, per se, is a final outcome rather than a predictive indicator of nutrient stress. While nutrition status is indeed the outcome of food deprivation at the individual level, food and nutrition monitoring systems are concerned with detecting changes in population groups. The inequality in assets and income distribution among households in a typical community renders some segments of the population more vulnerable to changes in food-system variables than others. Thus, these more vulnerable groups experience a deterioration in nutrition status sooner, perhaps months or even years earlier, than the general population. In Ethiopia, for example, Webb and Braun [2] found that food-crisis-related adjustments started more than three months earlier for the poorer population subgroups than for the middle-income groups.

The question of whether nutrition status indicators, even among the most disadvantaged segments of the population, are sufficiently sensitive to changes in food-system variables to serve as meaningful early warning indicators is, of course, germane to this debate. Very little empirical research has addressed this question. One study of Wollo Province, Ethiopia, found that correlations between cereal prices and nutrition status indicators were highest at a time-lag of only three months [3].

A related criticism is that clinic-based anthropometric data are typically limited to weight for age, a measure that does not differentiate soft-tissue growth from skeletal growth (that is, acute versus chronic nutritional stress). While this may pose a problem for cross-sectional nutritional assessments undertaken for intervention targeting and planning purposes, it is less relevant in the context of food and nutrition monitoring. Here the objective is to detect changes in status over a short period of time. Unless there is a marked change in the nature of the user population of sufficient magnitude to distort trends in the data series, weight for age will reflect short-term changes in nutrition status, as it is highly correlated with weight for height and arm circumference [4], two commonly used measures of acute nutritional stress.

A review of the literature suggests that inadequate methods of summarizing nutritional data might also partially explain why clinic data have heretofore not been viewed as useful. Conventional methods of summarization include the mean nutrition status and the prevalence of malnutrition defined by various commonly accepted cut-off points [5]. Members of poorer households are expected to cluster near the lower tail of the distribution of nutritional indicators during a normal year. Thus, the monitoring of distributional trends with respect to a low cut-off point should logically provide earlier signals of food shortages than would the monitoring of the mean of the nutrition status distribution for the general population.

Where to set the cut-off point or threshold would logically depend on the expected distribution of nutritional measures during a "normal" year. For example, in communities where childhood nutritional measures have a distribution similar to that of one or more of the commonly used reference standards (e.g., the NCHS/CDC standard), one standard deviation below the mean (-1 SD) might be a reasonable cut-off point. In communities normally exhibiting the higher levels of malnutrition typical of many developing countries, a lower cut-off point of -2 SD might be more appropriate.
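The distributional argument above can be illustrated with a small sketch. All numbers below are hypothetical, not study data: when nutritional stress is concentrated in the lower tail, a cut-off-based prevalence measure shifts far more than the distribution mean.

```python
# Illustrative sketch (hypothetical Z scores, not study data): a food
# stress that hits only the poorest households moves the prevalence
# below a cut-off far more than it moves the mean of the distribution.

def prevalence_below(z_scores, cutoff=-2.0):
    """Proportion of observations falling below the chosen cut-off."""
    return sum(1 for z in z_scores if z < cutoff) / len(z_scores)

def mean_z(z_scores):
    return sum(z_scores) / len(z_scores)

# A hypothetical "normal" month versus a stress month in which only the
# lower tail of the distribution deteriorates.
normal_month = [-0.5, -1.0, -1.2, -1.8, -2.1, -0.3, -1.5, -0.9]
stress_month = [-0.5, -1.0, -1.2, -2.3, -2.6, -0.3, -2.2, -0.9]

mean_shift = mean_z(stress_month) - mean_z(normal_month)      # about -0.21
prevalence_shift = (prevalence_below(stress_month)
                    - prevalence_below(normal_month))         # 0.375 - 0.125 = 0.25
```

The mean moves by roughly a fifth of a standard deviation, while the prevalence below -2 SD triples, which is the sensitivity advantage claimed for the cut-off approach.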

A final criticism of the use of anthropometric measures is that they are influenced by a number of factors other than food availability, primarily morbidity experience. Nutritional status as measured by anthropometric data, then, may not be a specific indicator of food stress situations. This potential problem can, however, be both assessed and corrected for in the context of routine monitoring systems.

Materials and methods

Three series of data are used in the analysis presented.

Food availability is measured by the price of the major staple food of the diet of Niger, millet. It can be argued that in a largely subsistence agricultural economy such as that of Niger, prices of staple foods represent a reasonable proxy for food access [6]. Monthly millet prices over the course of the study period were obtained from the government of Niger. The reported monthly prices are the mean prices of millet observed in different markets in Niamey.

The measure of morbidity used is the frequency of all diseases among children under the age of five years reported by government health facilities in the city of Niamey during the study period. Aggregate disease frequencies were used in lieu of more specific diseases or conditions in order to capture the underlying seasonal and medium-term patterns and trends of morbidity among children in the population studied. Limiting the diseases or conditions considered to those known to have more direct effects on childhood nutrition levels might, in fact, improve the precision of the results and should be investigated in future analyses. The morbidity data were obtained from the Ministry of Health's Reportable Diseases Information System. These data were available only through 1985.

Anthropometric data were collected from one of the largest maternal and child health centres in Niamey. Despite its size, the clinic is typical of MCH clinics in Niamey in that it serves a fairly heterogeneous population.

Clinic folders for 1,546 children seen in the clinic between 1981 and 1986 were selected retrospectively using a systematic-random selection procedure. Folders with a date of first visit that fell within the defined study period were eligible for inclusion. Information on the birth date was abstracted for each study child, as was the child's weight (in grams) and the visit date (used to calculate age in months) for the first and all subsequent visits during the study. A total of 8,815 clinic records representing the 1,546 children were abstracted.

Three summary measures of the nutrition status data (weight for age) were initially considered: mean Z scores, the proportion of visits by children with weights below -2 SD of the reference mean, and the proportion with weights below 80% of the reference median. The first two of these were used for comparative purposes. Z scores and the percentage of the reference median were computed using software incorporating the NCHS/CDC standard reference population [7]. The mean Z scores, which were all negative, were squared to yield positive values. For the regression analyses, the proportion of visits by children below -2 SD was exponentiated to base 100.
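As a sketch of how these three summary measures are computed: the reference values below are invented placeholders standing in for the actual NCHS/CDC tables, and the visit data are hypothetical.

```python
# Sketch of the three summary measures of weight for age. REF holds
# invented placeholder values, not the actual NCHS/CDC reference tables.

REF = {  # age in months -> (reference mean weight in kg, SD, median)
    6: (7.9, 0.9, 7.9),
    12: (10.2, 1.0, 10.2),
}

def z_score(weight_kg, age_months):
    mean, sd, _ = REF[age_months]
    return (weight_kg - mean) / sd

def pct_of_median(weight_kg, age_months):
    _, _, median = REF[age_months]
    return 100.0 * weight_kg / median

visits = [(6.0, 6), (7.5, 6), (9.0, 12), (8.0, 12)]  # (weight kg, age months)
zs = [z_score(w, a) for w, a in visits]

squared_mean_z = (sum(zs) / len(zs)) ** 2                   # measure 1
prev_below_2sd = sum(z < -2 for z in zs) / len(zs)          # measure 2
prev_below_80pct = sum(pct_of_median(w, a) < 80
                       for w, a in visits) / len(visits)    # measure 3
```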

Time-series techniques were used to smooth and deseasonalize the indicator series. Data were deseasonalized by taking the difference between a given month's value and the five-year mean for the month. Morbidity frequencies were converted to a natural log scale before being deseasonalized.
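The deseasonalization step described above can be sketched as follows; the series values are illustrative, not the study's data.

```python
# Sketch of the deseasonalization step: subtract the multi-year mean
# for each calendar month from that month's observed value. The input
# series here is illustrative.
from collections import defaultdict

def deseasonalize(series):
    """series: list of (year, month, value); returns (year, month, anomaly)."""
    by_month = defaultdict(list)
    for _, month, value in series:
        by_month[month].append(value)
    month_mean = {m: sum(v) / len(v) for m, v in by_month.items()}
    return [(y, m, v - month_mean[m]) for y, m, v in series]

series = [(1982, 1, 10.0), (1983, 1, 12.0), (1984, 1, 14.0),
          (1982, 7, 20.0), (1983, 7, 21.0), (1984, 7, 25.0)]
anomalies = deseasonalize(series)  # January mean 12 and July mean 22 subtracted
```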

The degree of covariation of nutritional, price, and morbidity indicators was examined using ordinary least-squares regression techniques for both the raw and the deseasonalized data. Morbidity and millet prices were regressed on nutrition status using a lagged-response model, the underlying assumption being that changes in nutrition status follow (rather than precede) changes in the other two series. Price and morbidity indicators were lagged for between one and six periods for inclusion in the stepwise regression procedure. This procedure allows for the inclusion or exclusion of an independent variable, depending on its contribution to the coefficient of determination. The best-fitting models for the nutrition status measures, including the appropriate response-time lag, were selected for presentation.
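The lagged-response idea can be sketched in simplified form. This uses a single predictor and plain R², rather than the stepwise multiple regression the study actually used, and the series are illustrative.

```python
# Simplified sketch of the lagged-response model: regress nutrition
# status on a price series shifted back by k months and compare fits
# across candidate lags (one predictor; the study used stepwise
# multiple regression with both price and morbidity terms).

def ols_r2(x, y):
    """Coefficient of determination for a simple one-predictor OLS fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

def best_lag(price, nutrition, max_lag=6):
    """Return the lag (1..max_lag) maximizing R^2 of nutrition ~ lagged price."""
    fits = {}
    for k in range(1, max_lag + 1):
        fits[k] = ols_r2(price[:-k], nutrition[k:])
    return max(fits, key=fits.get), fits

# Illustrative series in which nutrition responds to price with a 3-month lag.
price = [1, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13]
nutrition = [0, 0, 0] + [2 * p + 1 for p in price[:-3]]
lag, fits = best_lag(price, nutrition)  # lag 3 gives the best fit here
```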

The utility of any nutrition status measure as an early indicator of famine will depend on its ability to distinguish famine from non-famine periods, taking into account seasonality and trends. That is, for a measure to be useful, it must be able to distinguish a famine-related increase in malnutrition from an increase that is not famine-related. Logistic regression analysis was used to generate famine prediction models using values of each indicator lagged 3-6 months, raw and deseasonalized, as independent variables. These models describe the probability of a famine month occurring as a function of some combination of nutrition status indicators for one or more of the months preceding it by at least three months. The models have the following form:

P(famine) = {1 + exp[-(a + Σ b_i M_i)]}^-1

where i = 1 ... 4 and Σ b_i M_i is the sum of the regression-derived coefficients times their respective lagged monthly malnutrition indicators. For example, for August of a given year, it would be

b_1 × M_May + b_2 × M_April + b_3 × M_March + b_4 × M_February

The degree to which a predicted probability correctly identifies a famine month is a measure of the model's sensitivity. The degree to which it correctly identifies a nonfamine month is a measure of its specificity.
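A minimal sketch of the model's functional form and of sensitivity/specificity scoring follows; the coefficients and the 0.5 classification threshold are illustrative, not the values estimated in the study.

```python
# Sketch of the famine-prediction model's logistic form and of
# sensitivity/specificity scoring. All coefficients and the threshold
# are illustrative placeholders, not the study's estimates.
import math

def p_famine(a, b, lagged_m):
    """Logistic probability: {1 + exp[-(a + sum(b_i * M_i))]}^-1."""
    s = a + sum(bi * mi for bi, mi in zip(b, lagged_m))
    return 1.0 / (1.0 + math.exp(-s))

def sensitivity_specificity(probs, famine_flags, threshold=0.5):
    """Fraction of famine months flagged, and of non-famine months cleared."""
    tp = sum(p >= threshold and f for p, f in zip(probs, famine_flags))
    fn = sum(p < threshold and f for p, f in zip(probs, famine_flags))
    tn = sum(p < threshold and not f for p, f in zip(probs, famine_flags))
    fp = sum(p >= threshold and not f for p, f in zip(probs, famine_flags))
    return tp / (tp + fn), tn / (tn + fp)
```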

Although it is not possible to pinpoint the beginning of the famine, for the purposes of this analysis a conservative estimate of July 1984 was used. According to anecdotal reports, some areas of the country were already reporting imminent exhaustion of food supplies at this time. It is of interest to note that governmental and international intervention efforts did not materialize until February 1985.

Findings

Table 1 shows the distribution of sample clinic records and children by year. Almost all of the clinic records (99%) were for children under two years of age, and most of the children (97%) made multiple visits to the clinic. Because the inclusion criteria required that the first visit occur between 1981 and 1986, the sampling method resulted in a bias toward younger children during the first year of the study. For this reason, records from 1981 were dropped, and all subsequent analyses were carried out on records from children first visiting the clinic between 1982 and 1986. The final sample comprised 1,249 children.

TABLE 1. Distribution of clinic records and children sampled, by year

 

Year      Records             Patients
          N        %          N        %
1981        999    11.3       287     18.6
1982      1,664    18.9       221     14.3
1983      1,370    15.5       249     16.1
1984      1,448    16.4       248     16.0
1985      1,636    18.6       273     17.7
1986      1,698    19.3       268     17.3
Total     8,815             1,546

Before proceeding further, it was necessary to examine the possibility that the sampling procedure used, in which children rather than clinic visits were the sampling units, might introduce a bias into the findings that would not occur if visits had been sampled independently each month. Such a bias could arise if children who made multiple visits to the clinic were over-represented in the sample studied and if these children were positively selected from among the study population with respect to health status, and more specifically with respect to nutrition status.

The data in table 2 support the latter possibility, specifically that children who made multiple clinic visits during the study period had higher mean Z scores, a finding that perhaps reflects greater concern about or knowledge of infant health and nutrition among the mothers of these children, better access to clinic services, or both. This would cause a bias, however, only if the sampling probabilities favoured children with multiple visits, or if a greater proportion of children had been sampled in earlier years than later. In either case, children making multiple visits would then be over-represented among the clinic records for subsequent years. The first concern is not supported by the sampling method, which was systematic and independent of a child's clinic history, and table 1 shows that there was no over-sampling in the early years. Therefore, in this case, the sampling design does not introduce bias.

Figure 1 summarizes basic characteristics of the principal data series. The three lines represent the three alternative summary measures of nutrition status: squared mean Z scores, the proportion of clinic visits made by children with weights below -2 SD of the reference mean, and the proportion by children with weights below 80% of the reference median. It is evident from the figure that malnutrition as defined by all three indices shows marked seasonality, with peak malnutrition occurring towards the end of each year. The figure also suggests excess malnutrition during two segments of the study period: the first beginning in late 1982 and the second in early 1984. Because the values for the latter two nutrition status indicators so closely parallel each other, further analysis is limited to the squared mean Z scores and the proportion of visits by children below -2 SD.

TABLE 2. Mean weight-for-age Z scores by number of clinic visits and age

Number        Age (months)
of visits     0-5               6-11              12-17             ≥18
              Z score     N     Z score     N     Z score     N     Z score     N
1-6           -0.21   1,188     -1.22     603     -1.98     115     -1.06      15
7-12          -0.06   1,418     -1.20   1,339     -1.81     597     -1.68     109
≥13            0.04     270     -1.10     372     -1.50     288     -1.54     104

FIG. 1. Weight-for-age measures of malnutrition, Niamey clinic data, 1982-1986 (smoothed)

FIG. 2. Millet prices, 1981-1985

Figure 2 shows the trends in raw millet prices for the period 1981-1986. Note that the prices were highest in late 1981. They remained high until late 1982 and fell to their lowest level in early 1984, after which they rose dramatically, peaking again in mid-1985. Although the prices were high in the early 1980s and there was a slight increase in malnutrition among the clinic population, there was no formally recognized food emergency during this period. It is possible, and indeed national economic statistics suggest, that the high prices were a reflection of a more prosperous economy due to mineral-related export earnings. The second period of high prices, on the other hand, was almost certainly related to the massive crop failures of 1984 associated with the famine.

The principal features of the childhood morbidity data shown in figure 3 are strong seasonality and an unusual spike in 1985. The strong seasonal pattern, in which morbidity peaks systematically during the first half of each year, is likely related to strong seasonality of the various infectious diseases that are endemic in Niger, including diarrhoea, respiratory illnesses, and measles. The dramatic spike in morbidity in the spring of 1985 is most likely attributable to the severe measles epidemic that ravaged many of the Sahelian countries during this period. Measles epidemics often exhibit a cyclical pattern in non-immunized populations due to trends in population immunity [8]. In addition, the social and economic dislocation resulting from the famine and the living conditions of displaced persons in all likelihood contributed to the transmission of the disease.

FIG. 3. Child morbidity, 1981-1985

FIG. 4. Deseasonalized measures of malnutrition, 1982-1986 (smoothed)

Figure 4 shows the distribution of the deseasonalized nutrition status indicators. Note that these deseasonalized data distinguish more clearly between the two indicators, the prevalence of malnutrition (below -2 SD) and the squared mean Z scores, and contrast their potential utility as early/timely warning indicators. The prevalence measure increases consistently beginning in early 1984, while the squared mean score does not increase as dramatically. Indeed, the prevalence measure signals a departure from "normal" (mean) values at least six months ahead of the mean score, suggesting greater sensitivity along the lines hypothesized earlier. The prevalence measure also appears to be more specific than the mean score, which, unlike the prevalence measure, shows an excess over that expected during a non-famine period in 1982. Thus, the early and consistent excess in the prevalence measure during the early famine period, and the lack of excess during non-famine periods, support its utility as a famine warning indicator.

The next issue to be addressed is the extent to which nutrition status indicators respond to changes in food availability and morbidity regimes. A summary of the findings examining prices and morbidity as predictors of nutrition status is given in table 3, with separate results for raw and deseasonalized data. These indicate that price and, to a lesser extent, morbidity are significant predictors of nutrition status. The deseasonalized data suggest that both measures of nutrition status lag behind grain prices by two to four months and behind morbidity by three to six months. The overall statistical fit of these models is moderate, with R² values ranging from 0.24 to 0.50 and somewhat better fits in the raw series.


