Output: Nutritional outcomes
We now turn our attention to the second set of hunger indicators, those based on measuring outcomes of malnutrition. This section focuses almost exclusively on anthropometric measurements, both because they are widely used and because they are the subject of considerable controversy. However, it is important to note that micronutrient deficiencies also matter, both because of their significant functional consequences, described in the introductory chapter, and because they can signal other problems in dietary adequacy. Correctly diagnosing clinical signs of micronutrient deficiency requires well-trained staff and more extensive data, and comparisons between populations are only now becoming possible. If means of identifying mild-to-moderate deficiencies become available, these could provide increasingly important tools for measuring hunger.
Nutritional status is most commonly measured, especially in young children, by anthropometry - measurement of dimensions of physical size, such as height or weight, and comparison with distributions of the same measurement in a presumably healthy and well-nourished reference population. Children whose weight falls below the range of normal variation for children of the same age observed in a reference population are identified as underweight. Underweight may reflect small stature, excessive thinness, or both. These two dimensions are differentiated in two more refined anthropometric measures - weight for height and height for age. If the child's weight falls below the range of normal variation for children of the same height, it is considered wasted. If its height falls below the range of normal variation for children of the same age, it is considered stunted. Wasting is generally interpreted as an indicator of acute malnutrition - a current or recent crisis involving extreme weight loss. Stunting, in contrast, indicates early malnutrition. Either a past episode (or episodes) of acute malnutrition, or a routinely limited diet over an extended period, has resulted in growth impairment, even though current nutrition may be adequate.
Anthropometry is used to assess adults as well as young children, but this is done less widely. Shortness, although it may be nutritionally caused, provides clues only to the individual's experience during childhood, so that adult heights are uninformative regarding current or recent nutrition. Thinness, however, implies current undernutrition for adults as it does for children. Thus, weight for height is relevant across the range of ages, as are such other measures of fatness as mid-upper arm circumference (MUAC), skinfolds, or the body mass index. Although it is thus possible to measure people of all ages using the same anthropometric indicators, this rarely occurs in practice. As a result, evaluation of commonly stated conclusions regarding age patterns of variation in nutritional status is often quite difficult.
Anthropometry as an indicator of individual malnutrition
Anthropometry alone is not sufficient to diagnose nutritional problems in individuals, although it identifies children whose situation should be examined further in making such a diagnosis.4 If the lower bounds of normal variation were set so low as to exclude all cases of healthy small size, much actual malnutrition would not register. In order to obtain a reasonable degree of sensitivity, cut-offs are set high enough that a small proportion of individuals fall below them, despite good health and adequate nutrition. At the same time, some who are naturally larger may fall above the cut-offs, even when they are, in fact, malnourished.
The lower limit of the range of normal variation in anthropometry has been variously operationalized. One common practice has been to define a cut-off at some set percentage of the median of the reference population. For weight for age, 80 per cent has been the most widely used cut-off, but milder or more severe underweight has been defined in terms of higher or lower percentage cut-off points. For other anthropometric measures, different percentage cut-off points have been identified. Recent work has more consistently used a cut-off two standard deviations below the mean of the reference population. Standard deviations are preferred to percentage cut-offs because comparability across measures, and for the same measure at different ages, is not compromised by greater or lesser variability. Roughly 3 per cent of children in a healthy reference population will fall more than two standard deviations below its mean; in populations where malnutrition is common, however, most children this small are correctly identified as being at nutritional risk. Those more than two standard deviations below the mean are usually identified as moderately malnourished, while those falling three standard deviations or more below the mean are severely malnourished. Some researchers have suggested using a cut-off of one standard deviation in order to target nutritional supplementation programmes to those at risk of undernutrition (see comments in Popkin 1994). While it is desirable to enhance the marginal diets of children who are showing no clinical signs of growth faltering, it is preferable, if resources are limited, to target children at greater risk. It is worth repeating that none of these cut-off points has any necessary functional significance, despite their utility in helping to identify hungry individuals.
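The standard-deviation cut-off logic described above can be sketched in a few lines of code. The reference median and standard deviation used here are purely illustrative values, not figures from the NCHS or any other actual growth standard.

```python
def classify_weight_for_age(weight_kg, ref_median_kg, ref_sd_kg):
    """Classify a child's weight for age against a reference distribution,
    using the standard-deviation cut-offs described in the text: more than
    two standard deviations below the reference mean is moderate
    malnutrition, three or more below is severe."""
    z = (weight_kg - ref_median_kg) / ref_sd_kg
    if z <= -3:
        return "severe"
    if z < -2:
        return "moderate"
    return "within normal range"

# Hypothetical reference values: suppose the reference median for this
# age and sex is 12.0 kg with a standard deviation of 1.2 kg.
print(classify_weight_for_age(9.0, 12.0, 1.2))   # z = -2.5 -> "moderate"
print(classify_weight_for_age(8.0, 12.0, 1.2))   # z = -3.33 -> "severe"
```

The same structure applies to weight for height or height for age; only the reference table consulted changes.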
Meaningful individual diagnosis involves repeated measurement over time. Repeated measurement allows a child's growth trajectory to be compared with that of normal growth, rather than simply relying on a one-time measurement relative to a cut-off point. The repeated measurement process is referred to as "growth monitoring" and can be used to determine if small children are growing normally; it can also identify larger children whose growth has become compromised, even before their size drops below some cut-off. Weight loss, as opposed to unusually low weight, is a more reliable indicator of nutritional crisis. Similarly, a period during which no increase in height occurs tells us more than does a one-point observation of unusual shortness, which might have resulted either from such a crisis (or repeated crises) at any time up to the present or from a pattern of uninterrupted slow growth. Growth monitoring has been promoted as part of the UNICEF/WHO "GOBI" (growth monitoring, oral rehydration therapy, breast-feeding, and immunizations) initiative for child survival. Its utility in this context is in alerting the mother and the health practitioner to developmental problems at an earlier stage, and thus encouraging intervention before much damage has occurred.
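The distinction between one-time cut-offs and growth monitoring can be made concrete with a minimal sketch: the warning sign in repeated measurement is weight loss or stalled growth between visits, not low weight as such.

```python
def growth_faltering(weights_kg):
    """Flag growth faltering in a series of repeated weighings, as in
    growth monitoring: weight loss, or the absence of any gain, between
    successive visits is a warning sign even for a child whose current
    weight is still above any one-time cut-off."""
    pairs = zip(weights_kg, weights_kg[1:])
    return any(later <= earlier for earlier, later in pairs)

# A child gaining at every visit is not flagged; a stalled weight is,
# even though 9.4 kg may be well above the cut-off for the child's age.
print(growth_faltering([9.0, 9.4, 9.8]))  # False
print(growth_faltering([9.0, 9.4, 9.4]))  # True
```

In practice a practitioner would judge the trajectory against a growth chart rather than a simple no-gain rule, but the logic of comparing successive measurements is the same.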
Anthropometry as an indicator of malnourished groups
Cross-sectional anthropometric measures are less ambiguous indicators of nutritional problems for groups than they are for individuals. While the lesser growth potential of some individuals may cause them to register as underweight, wasted, or stunted, even when they are in good health and well fed, such individuals will be rare. If the proportion of individuals so identified in any group is high, we can be more confident that malnutrition is a significant problem for the group than that each individual identified is necessarily malnourished.
This assumes, however, that the reference population defining the range of normal variation is appropriate - that the target population would, in fact, show the same distributions of weights for age, weights for height, and heights for age if it, too, were healthy and well nourished. The most commonly used growth standards for international research are US based, and there is real controversy as to whether US growth standards are necessarily applicable everywhere. It has been argued that, in populations facing nutritional constraint over the long term, small size may, in fact, represent a healthy adaptation rather than indicating a problem (see review in Osmani 1992). Since nutritional requirements are, in part, a function of body size, stunted growth helps keep requirements low for individuals and populations and may permit good health and normal functioning at intake levels that would be too low if size were larger.
This controversy has clear implications for measuring hunger. If populations vary with respect to their proportion of healthy individuals whose anthropometric measurements fall far below the mean of the reference population, using any absolute cut-off would bias comparisons of hunger prevalence between populations. Such an argument, in fact, makes cross-country comparisons of hunger virtually impossible: any variation in achieved stature or even in caloric intake could be explained as adaptation rather than evidence of hunger. Although we argue that cross-country comparisons are both desirable and useful, we also consider the arguments against our position very carefully. The remainder of this section reviews three major arguments for why US-based growth standards might be inappropriate. We then address the question of how much difference the choice of a growth standard makes to our understanding of who is hungry.
Natural selection for small body size?
"Healthy adaptation" to nutritional constraint may occur at the population rather than the individual - level. Evolutionary pressures may have favoured small size in populations facing a very constrained diet. This variant of the "small but healthy" argument suggests that genetically determined potential size is actually less in those populations (such as most of South Asia) where average body size is small by Western standards.
However, numerous studies have demonstrated that elite groups in such populations, less constrained in terms of both diet and health care, show growth patterns (at least in early childhood) that are quite similar to those reflected in the US-based growth standards. Genetic potential for these elite groups within populations of small average size does not appear strikingly less than for their Western counterparts. Marked increases in stature from one generation to the next are commonly observed when individuals from populations of small average stature migrate into countries with larger average stature. These increases, which are generally associated with changes in diet and health, are difficult to reconcile with the notion that small statures in the areas of origin result from genetically determined limited growth potential. Similar very rapid increases in average stature within populations undergoing dietary enrichment (Japan, China), often associated with modernization or increasing affluence, also suggest that growth potential is comparable in populations between which actual attained body sizes vary widely.
Individual adaptation to nutritional constraint?
Even if the small body sizes observed in nutritionally constrained populations are not genetically determined, an adaptive mechanism rooted in individual experience remains a possibility. We know both that nutritional constraint during childhood is a cause of permanently small body size and that small body size in turn reduces lifetime nutritional requirements. The question then becomes, at what cost is this lifelong economy won? We can separate the answer to this question into two parts - costs of being, and of becoming, small.
To be small in stature means, ceteris paribus, to be less powerful physically than if one were taller. Since the poorest, who are likeliest to suffer growth impairment, are also probably likelier than others to have to earn their living through hard physical labour, this cost may be significant in terms of lost productivity and earnings. In contrast, small size itself may not be a problem for those whose work is mental rather than physical.
Small mothers are also at higher relative risk of bearing low birth-weight babies. The physical mechanisms through which maternal stunting is linked to birth weight are not completely understood. It has also not been conclusively demonstrated that stunting in the absence of other nutritional problems (e.g. underweight, anaemia) or deprivation during pregnancy adversely affects birth weight (Osmani 1992). Nevertheless, the correlation between maternal height and birth weight is strong across a large number of populations. Even if maternal stunting serves only as a proxy for the conditions that place children at high risk, it is still a useful indicator of who is likely to be hungry.
Some nutritionists (e.g. Beaton 1989) argue that, while there is nothing wrong with being small, the process of becoming small is damaging. Increased risks of morbidity and mortality are among the major concerns. As Martorell (1995) argued:
Although growth retardation does not cause depressed immunocompetence, the factors that cause growth faltering, such as infection and inadequate intakes of specific nutrients, also result in immunodepression... children may get infected for reasons largely determined by their environment, but, once they are infected, the course of the infection will be influenced strongly by nutritional status, reflected by the degree of growth retardation.
Even if increased risks of morbidity and mortality could be ruled out, the advantages of small size should not be counted as cost free if the constraints resulting in small size also cause significant functional impairment. Associations of growth impairment with poor intellectual and social development are well documented, and there are plausible mechanisms by which the same processes of malnutrition that cause impaired growth may also cause developmental impairments.
The small size associated with nutritional constraint has often been equated with increased risks of mortality - so strongly so, in the minds of some, that the infant mortality rate has been suggested as "one of the best tools available for measuring the extent of hunger in a society" (THP 1983), and growth has been called "the most important single indicator of health" for a child (Grant 1990). Others (Mosley and Chen 1984) have recommended that analysts combine growth impairment and mortality into a single variable, with survivors classified according to the Gomez scale of weight for age (relative to the median from an applicable growth chart, 75-89 per cent is grade I malnutrition, 60-74 per cent is grade II, and less than 60 per cent is grade III) and non-survivors placed in a fourth (grade IV) category. An assumption implicit in each of these suggestions is not only that underweight is strongly associated with elevated risks of mortality but also that the form of this relationship is relatively invariant.
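The Gomez classification described above, extended with Mosley and Chen's fourth category for non-survivors, can be sketched directly. The reference median used in the example is hypothetical.

```python
def gomez_grade(weight_kg, ref_median_kg, survived=True):
    """Classify weight for age on the Gomez scale as described in the
    text: 90 per cent or more of the reference median is normal,
    75-89 per cent grade I, 60-74 per cent grade II, and below 60 per
    cent grade III. Following Mosley and Chen's suggestion,
    non-survivors form a fourth category, grade IV."""
    if not survived:
        return "grade IV"
    pct = 100.0 * weight_kg / ref_median_kg
    if pct >= 90:
        return "normal"
    if pct >= 75:
        return "grade I"
    if pct >= 60:
        return "grade II"
    return "grade III"

# With a hypothetical reference median of 12.0 kg:
print(gomez_grade(8.4, 12.0))                   # 70% of median -> "grade II"
print(gomez_grade(10.0, 12.0, survived=False))  # "grade IV"
```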
There is certainly reason to expect malnutrition to increase risks of morbidity and mortality, as discussed in chapter 1. Malnutrition does reduce resistance to some infectious diseases, with different aspects of the immune system affected by deficiencies of varying degree with respect to specific nutrients. Many studies demonstrate cross-sectional associations between malnutrition and morbidity (for a useful review, see Tomkins and Watson 1989) or mortality (Puffer and Serrano 1973). In cross-sectional studies, however, effects of illness on nutritional status confound the effect of nutritional status on illness, and effects of nutritional status itself on survival chances are confused with effects of illness on both. Longitudinal research linking anthropometry to subsequent morbidity or mortality is relatively scarce.5 When this is done, reverse causation can be ruled out, although surrounding circumstances (such as crowding or poor sanitation) that cause frequent illness could be responsible for both malnutrition and other adverse outcomes. At the extremes of malnutrition, there is no doubt that a range of adverse outcomes become increasingly likely.
However, some poor areas considered to have extraordinarily low infant and child mortality for their level of economic development (e.g. Sri Lanka, the state of Kerala in India) also have very high prevalences of underweight in small children. In contrast, some countries with the very highest rates of mortality in infancy and early childhood (including much of sub-Saharan Africa) exhibit relatively moderate prevalences of underweight. Since disease is a major cause of both malnutrition and death, a positive association between aggregate indicators of underweight and mortality would be expected even if malnutrition did nothing to increase risks of death. Deviations from the expected pattern at the aggregate level require further investigation. Further research might show that factors such as access to health care or maternal education mediate the relationship between body size and mortality; small stature may be a significant but surmountable mortality risk factor. If other variables, in fact, play an important mediating role in the body size/survival relationship, high proportions of underweight, wasting, or stunting might accurately reflect important variations in nutritional adequacy rather than simply measurement problems.
In summary, two additional conclusions about the desirability of individual adaptation to low food availability deserve emphasis. First, as outlined in chapter 1, limitations on physical activity due to lethargy during childhood have both physical and intellectual consequences. Second, stronger manual labourers may be able to earn more than enough to compensate for the increased caloric needs associated with their larger size. Although it seems likely that individual adaptation to low intake occurs regularly, anthropometric measurements are still useful hunger indicators.
Distinctive growth patterns of breast-fed and formula-fed infants
Applying US-based anthropometric standards to infants is another controversial area, since the US standard reflects the experience of mostly formula-fed infants who were supplemented with solid food fairly early in life (Akin and MacLean 1980). Furthermore, at the time the data for the standards were collected, more nutrient-dense infant formulas were used than is now the case. Under circumstances where adequate amounts of formula can be given and hygiene maintained, the use of modern infant formula leads to more rapid weight gain, at least after the first few months, than does breast-feeding (Ritchie and Naismith 1975; Stuff and Nichols 1989). This divergence in growth trajectories has been conventionally interpreted as showing that unsupplemented breastmilk is sufficient only for the infant's first four to six months, the period before the growth paths diverge. It is therefore recommended that other foods should be added to the infant's diet even if breast-feeding continues beyond the first four or six months (see, e.g., Underwood and Hofvander 1982).
Nevertheless, a number of researchers have argued that the standard reflects a less-than-optimal growth pattern (Akin and MacLean 1980; Huffman 1991; Whitehead and Paul 1984). Whitehead and Paul argued that there was no reason for concern when children fall behind standards based on "inappropriately constituted and administered formulae." This conclusion seems especially appropriate in view of medical evidence that breast-fed children who show growth faltering relative to the standard can be shown to be just as healthy or healthier, according to other measures. They have fewer respiratory infections and less incidence of diarrhoea (Chandra 1982), and their energy requirements are lower because of their smaller body size, lower heart rates, and lower metabolic rates - not because of lower levels of physical activity (Garza and Butte 1990). They are also unlikely to be undernourished since their caloric intakes do not increase when their diets are supplemented with solids (Garza and Butte 1990; Stuff and Nichols 1989).
Such findings lead researchers to question whether all infants ought to follow the growth patterns that can be achieved with infant formula. If more rapid growth is not advantageous, application of standards based on the experience of bottle-fed infants may overstate the prevalence of underweight in populations where most children are breast-fed and may also lead to a perception that supplementation is needed at ages at which breastmilk still fully meets the infant's needs. Where surrounding conditions make it difficult to maintain good hygiene, unnecessarily early introduction of supplementary foods that are likely to be contaminated may increase rather than lessen risks of malnutrition.
Growth standards still need to be developed that reflect the experience of exclusively breast-fed children and children receiving non-formula supplements. Until that time, cross-country comparisons using anthropometric measurements should be interpreted with special caution for the youngest children. The main difference in growth patterns between breast-fed and formula-fed children is faster weight gain in the formula-fed group (Garza and Butte 1990), and the greatest difference in weight gain is associated specifically with formula feeding, not other methods of artificial feeding. Therefore, comparisons of stunting (height for age) are less likely to be affected by use of the US-based standards than are comparisons of underweight. The greatest caution needs to be applied when comparing weight gain in populations where breast-feeding is common versus those where commercial infant formula is widely used.
How much difference does choice of growth standard make?
As the discussion above suggests, selection of an appropriate reference standard for anthropometric measurements is a contentious issue. For international purposes, use of the National Center for Health Statistics (NCHS) standard (based on the experience of children in the United States) has been recommended by the World Health Organization and is now widely accepted. Some countries, however, have chosen to develop and use local standards instead. Typical growth patterns do vary across populations, and the issue is which deviations ought to be viewed as problematic and avoidable.
Implications of this choice for our understanding of hunger are not trivial (Millman et al. 1991). Different anthropometric standards can yield very different estimates of the prevalence of malnutrition in the same population. For India, for example, estimates of the prevalence of underweight among children based on the NCHS or the local (Hyderabad) standard differ by 25.7 percentage points for 1989 (NNMB 1989). If local growth standards reflect the experience of less-than-healthy adaptation, their use will define real nutritional problems out of existence (Messer 1986).
Less obvious, but equally problematic, is that the choice of standard can also affect the analyst's understanding of which groups within a population are worse off. In particular, the contrast between the prevalence of malnutrition observed for males and females can be very much affected by the standard employed. Within each standard, separate reference values are defined for males and females, a complication necessary to capture typical healthy growth patterns for boys and girls. Any standard embodies the pattern of gender contrast that typifies the population on which it is based. A standard based on a population in which treatment of boys and girls differs in nutritionally consequential ways essentially defines the resulting differentiation of developmental paths as the norm. For example, the Hyderabad standard used in India, which is based on the experience of a population of urban middle-class children in southern India, incorporates a pattern of male advantage as compared with the US-based and internationally used NCHS standard. Table 2.2 contrasts median weights by age and sex in the two standards. While the Hyderabad standard in general defines lower weights as normal than does the NCHS one, the point here is that the downward shift in median weight associated with the use of the Hyderabad standard is greater for females than for males.
Table 2.2 Median weights (kg) for age and sex compared between Hyderabad and NCHS standards
Source: for the Hyderabad standard, NNMB (1989); for the NCHS standard, Dibley et al. (1987).
Distributions of weight for age that imply the same prevalence of malnutrition for boys and for girls as compared with the NCHS standard would inevitably show higher rates of malnutrition for boys than for girls if the Hyderabad standard were employed. Conversely, a situation that appears to be one of gender equality in malnutrition relative to the Hyderabad standard would show a female disadvantage if the presumably non-gender-biased NCHS standard were employed.
Table 2.3 shows the sharply different patterns of gender contrast that are observed when the same situation is viewed through the lens of one or the other weight-for-age standard. Tabulations of 1989 data for seven states of India (NNMB 1989) based on the Hyderabad standard show an apparent nutritional advantage for girls, startling in view of the frequency with which one hears that boys are favoured in that country. This surprising result is at least partly due to the fact that individual data are being measured against a standard that has a male advantage built into it. When the same data are measured against the presumably non-gender-biased NCHS standard, the apparent female advantage tends to disappear, although the expected male advantage still fails to become visible. We will return to the question of gender differences in nutrition in chapter 5. For present purposes, the important point is that the choice of reference population itself strongly affects the gender contrast we witness in anthropometry.
Table 2.3 Gender comparisons: underweight Indian children according to the Hyderabad and NCHS standards, 1989
Standard | Percentage of boys underweight | Percentage of girls underweight | Female advantage (boys - girls)
Source: NNMB (1989).
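A toy calculation illustrates the mechanism at work in Table 2.3. All numbers here are invented for illustration: boys and girls are given identical weight distributions, which are then compared against one hypothetical standard that builds in a male advantage and one that treats the sexes alike.

```python
def prevalence_underweight(weights_kg, cutoff_kg):
    """Share of children whose weight falls below the cut-off."""
    return sum(w < cutoff_kg for w in weights_kg) / len(weights_kg)

# Identical (invented) weight distributions for boys and girls.
boys = [8.5, 9.0, 9.5, 10.0, 10.5, 11.0]
girls = [8.5, 9.0, 9.5, 10.0, 10.5, 11.0]

# Hypothetical cut-offs (e.g. 80 per cent of each standard's median):
# standard A embodies a male advantage; standard B treats sexes alike.
standard_a = {"boys": 9.6, "girls": 8.8}
standard_b = {"boys": 9.2, "girls": 9.2}

for name, cuts in [("A", standard_a), ("B", standard_b)]:
    pb = prevalence_underweight(boys, cuts["boys"])
    pg = prevalence_underweight(girls, cuts["girls"])
    print(f"standard {name}: boys {pb:.0%} underweight, girls {pg:.0%}")
```

Against standard A the identical distributions register as a large apparent female advantage (50 per cent of boys underweight against 17 per cent of girls); against standard B the two groups register identically. This is exactly the pattern by which a gender-biased reference standard can manufacture an apparent nutritional advantage for one sex.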
Relations among the hunger indicators
The assumption is sometimes made that patterns of variation or change observable in child anthropometry indicate variation or change in nutritional status for the population of all ages. The prevalence of underweight among small children is used as a leading indicator of malnutrition in famine early warning systems, and cross-sectional variations in underweight among small children are taken as indicators of likely concentrations of malnutrition at other ages as well. Little attempt seems to have been made to validate this wider application of the findings on child anthropometry by exploring its association with hunger indicators pertaining directly to other age groups. Given the crucial importance, for malnutrition among small children, of processes such as weaning and childhood diseases that are irrelevant to others, the use of childhood anthropometry as an indicator of nutritional conditions for adolescents and adults seems questionable.
As Heyer (1991) observed in her analysis of Kenyan data:
Child malnutrition is not at all closely linked with... poverty (whether measured in terms of income or expenditure)... or even food intake estimates. This is consistent with micro-level evidence on the role of health and other factors.
Similarly, for a low-income sample in the Philippines, Pinstrup-Andersen (1990) found only low correlations between child anthropometry and a wide range of household-level indicators - per capita household income, per capita food acquisition, per capita calorie consumption, household calorie adequacy, total household food acquisition, and total household calorie consumption. The highest correlation was only .22. The very weak relationships between child anthropometry and other hunger indicators suggest that children's growth impairment is not a useful indicator of household food security. The authors were actually asking the opposite question - whether household food-security indicators would serve to identify households in which malnourished children were located. The answer to this question was also negative. In contrast, the caloric adequacy of pregnant and lactating women was reasonably strongly related to that of their households, suggesting that nutritional problems for this group are more a function of household food insecurity; this also discounts the interpretation that poor measurement of household data could account for the lack of relationship with child anthropometry.
To identify linkages between individual and household hunger, which is essential for diagnosing nutrition problems and setting priorities for interventions, empirical work using data covering a broad range of ages and including multiple indicators of hunger needs to be given a high priority. Such work might explore, for example, the extent to which underweight children are concentrated in households with low access to food, and the covariation of anthropometry for children and adults within the same households. If it turns out that underweight among small children acts as a reliable proxy for hunger of others in the same household, location, or social group, the wide availability of childhood anthropometry could be exploited more systematically to enhance our overall understanding of hunger in entire populations. If, on the other hand, variations in childhood anthropometry diverge sharply from those reflected by other hunger indicators, the temptation to generalize widely from data pertaining directly only to small children should be resisted. In the meantime, it is safer to interpret changes in child anthropometry within a region or social class as indicative of changes in the hunger status of entire families, but not without first considering whether changes in infant feeding practices or in the disease environment might provide an adequate explanation for the trends.
1. The World Bank most commonly used a cut-off set at 90 per cent of the calorie requirements estimated by the FAO/WHO/UNU committee in 1971; the FAO defined its cut-off as 1.4 times the basal metabolic rate (BMR). In neither case do the cut-offs employed allow for more than minimal physical activity for adults, and the newer common cut-off of 1.54 BMR still allows only for light activity (Uvin 1994).
2. Counting breast-feeding bouts over shorter periods is problematic because daytime consumption may or may not be reflective of night-time consumption: when children sleep with their mothers, nursing may follow a similar pattern around the clock; where they do not, there may be little or no night-time nursing.
3. Sukhatme and Margen (1982) interpret interindividual variation observed cross-sectionally as reflecting intra-individual variability; they also interpret the autocorrelation of daily individual intakes as evidence of a homoeostatic, self-regulating process. Although neither of these interpretations is implausible, interindividual variation could reflect stable differences across individuals and autocorrelation of intakes could result from external influences that vary cyclically over a span of days (such as different eating patterns on weekends and weekdays). Even if the evidence for energy intake and efficiency of use as a self-regulating process were definitive, the conclusion that intake levels observed only as the low point in a fluctuating series could be maintained indefinitely without damage seems questionable.
4. Smallness on any of the anthropometric indicators may result from illness rather than from compromised nutrition, though in most cases it is likely to be a combination of the two. Smallness may also result from normal variation or genetic potential, and one of the challenges this presents is to set cut-off points for anthropometric measurements that identify nutritional problems without also including children who are simply small.
5. Mid-upper arm circumference has been shown to predict risk of death better than either weight for height or height for age (Briend et al. 1987).