Surveillance of micronutrient deficiency
Frederick L. Trowbridge
Implementing practical and effective surveillance will be essential to control micronutrient deficiency successfully. A number of issues must be resolved, including selecting optimum target groups and appropriate indicators for surveillance, refining interpretive criteria, and defining technically sound and practical methods for data collection and analysis. Consensus on these issues is urgently required to establish effective surveillance that can both support current programmes and document progress toward the global goal of eliminating micronutrient deficiencies.
A key issue in mounting country-level programmes to control micronutrient deficiency will be to establish effective assessment and surveillance, so as to identify populations at risk and monitor progress over time. To develop effective surveillance, programme managers must define the target groups, the indicators to be used to assess micronutrient status, and the strategies by which surveillance will be organized and coordinated.
Who to measure: Target groups for surveillance
Potential target groups for surveillance of micronutrient deficiency are infants and preschool children, school-age children, and women of child-bearing age. Selection of the most appropriate target group should be based on specific criteria including level of risk or vulnerability, accessibility of the target group for assessment, and degree of representation, that is, the ability of measurements made in the target group to reflect the extent of the problem in the overall population. Other criteria may be the availability of normative data and the potential usefulness of the target population for surveillance of other micronutrient deficiencies.
Infants and preschool children are a highly useful target group because they are vulnerable to micronutrient deficiencies, are accessible for assessment in child health clinics and community surveys, and are an indicator of risk in the general population. In particular, surveillance of vitamin A deficiency focuses on this group because of its vulnerability.
Schoolchildren may also serve well for micronutrient surveillance, particularly for iodine deficiency disorders (IDD). They are at risk because of the nutritional needs imposed by their rapid growth and maturation. They are also accessible for assessment in schools. Moreover, school-based surveillance can provide a stimulus for teachers to develop health education activities, including efforts to help children improve their own micronutrient intake and that of their families. Intervention efforts such as treatment of helminthic infections and the provision of micronutrient supplements are also feasible in the school setting. Surveillance data from schoolchildren, however, have to be interpreted with caution because they may not be representative of the overall community. Children from the poorest families with the highest risk of micronutrient deficiency and other health problems may be less likely to attend school and to be included in school-based surveillance.
Women of child-bearing age are frequently included in surveillance because they are vulnerable to micronutrient deficiencies, especially during pregnancy and lactation. They are also relatively accessible in maternal and child health clinics and in community surveys. The increased vulnerability of pregnant women to conditions such as iron deficiency may make them highly useful as target groups.
What to measure: Indicators of micronutrient status
The most critical technical challenge in implementing effective micronutrient surveillance is to identify appropriate and practical indicators. Currently, available indicators vary widely in their practicality when applied under field conditions. The following discussion considers some of the strengths and limitations of specific indicators for each of the major micronutrient deficiencies.
Indicators of iodine deficiency
Cretinism
Documenting the occurrence of cretins in a population may serve as a valuable signal of serious iodine deficiency. Cretinism is such a striking and tragic manifestation of iodine deficiency that it can provide a powerful stimulus for action. However, the related neurological deficits range from the obvious abnormalities of cretinism to much more subtle, subclinical deficits that are difficult to quantify. Also, neurological deficits among older children and adults reflect previous iodine deficiency and may not give a reliable indication of the current iodine status of the population. Because of these limitations, cretinism does not serve well as an indicator of iodine status for surveillance purposes, but it can provide a powerful initial indication of serious deficiency.
Goitre
For many years thyroid size has been used as a measure of iodine deficiency, particularly among schoolchildren, in initial assessments of populations at risk. Assessing thyroid size by palpation is non-invasive and is generally well tolerated. However, grading goitre by palpation presents a number of serious limitations when used as a surveillance indicator. Compared with results using ultrasound techniques, it shows a high level of variability among observers. In areas where iodine deficiency is highly prevalent, small goitres may be considered normal, leading to an underestimation of the problem. In areas where only mild deficiency exists, the prevalence of detectable goitre may be too low to permit reliable evaluation by palpation. Thus, assessing thyroid size by this method may be useful for initial evaluations in areas of moderate to high prevalence or for monitoring substantial changes in prevalence in response to intervention. For detecting or monitoring mild levels of iodine deficiency, it is unreliable.
Recent experience using ultrasound suggests that this methodology can greatly improve accuracy and reliability in assessing thyroid size [1]. With the introduction of portable equipment, this approach has become increasingly feasible for field use by properly trained personnel. As with other microcomputer hardware, the cost of the equipment is decreasing while its capability increases. Because the accuracy and precision of ultrasound are so much better than those of palpation, it is clearly the preferred approach for assessing thyroid size. However, because special equipment and trained personnel are required, it will probably be more feasible for focused surveys than for broad-based, continuing surveillance.
Urinary iodine
Measurement of urinary iodine excretion provides a direct measure of iodine intake. Thus it is a useful indicator of the iodine status of a population; however, it has significant limitations [2]. For example, measurement of urinary iodine can be difficult to standardize because of interfering substances and because the laboratory environment may easily become contaminated with iodine, leading to inaccurate results. Moreover, urinary iodine values can be difficult to interpret. Expressing results in terms of iodine concentration in the urine is the preferred approach, but results expressed in this way vary with the degree of urine dilution [3]. Because results may be highly variable, it is best to look at the distribution of values in a population rather than just interpreting iodine status on the basis of the mean or median level of excretion.
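As an illustration of this distribution-based approach, the short sketch below (in Python) tabulates the proportion of survey values falling into broad urinary iodine bands rather than reporting only the median; the band boundaries and the sample data are illustrative assumptions and should be checked against current epidemiological criteria.

# Sketch: summarize the distribution of urinary iodine concentrations (ug/L)
# rather than relying on the median alone. The band boundaries are
# illustrative assumptions and should be confirmed against current criteria.

def summarize_urinary_iodine(values_ug_per_l):
    bands = [
        ("severe deficiency (<20 ug/L)", lambda v: v < 20),
        ("moderate deficiency (20-49 ug/L)", lambda v: 20 <= v < 50),
        ("mild deficiency (50-99 ug/L)", lambda v: 50 <= v < 100),
        ("adequate (>=100 ug/L)", lambda v: v >= 100),
    ]
    n = len(values_ug_per_l)
    return {label: 100.0 * sum(1 for v in values_ug_per_l if test(v)) / n
            for label, test in bands}

# Hypothetical survey values (ug/L)
sample = [15, 35, 60, 80, 95, 110, 150, 45, 20, 200]
for band, pct in summarize_urinary_iodine(sample).items():
    print(f"{band}: {pct:.0f}%")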
Thyroid-stimulating hormone
Measurement of thyroid-stimulating hormone (TSH) is a promising indicator of iodine deficiency that is finding increasing application in IDD surveillance. It directly reflects the adequacy of thyroid hormone, a substance that is critical to normal neurological development. The level is easily measured with a highly sensitive and specific immunoassay using a small blood sample [4]. Even though a sophisticated laboratory is required, the samples are stable without refrigeration and thus are easily transported to a single, centralized laboratory for processing [5]. The distribution of TSH values in a population can be used to detect even mild levels of iodine deficiency [6].
Surveillance for iodine deficiency can employ existing neonatal screening programmes that analyse TSH from blood spot samples in newborns, as long as a sensitive assay is used. In addition, TSH can be assessed in the general population or in specific target groups such as preschool children and women of child-bearing age. Experience in interpreting the distribution of TSH values in populations is still limited. Further studies are necessary to describe the distribution of TSH values in populations with sufficient iodine intake and with various degrees of iodine deficiency. However, given the technical soundness of this indicator and the facility with which it can be collected and analysed, it is likely to see increasing use in surveillance programmes.
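A minimal sketch of how neonatal screening data might be summarized for this purpose follows; the 5 mIU/L cut-off and the sample values are illustrative assumptions rather than established criteria, and programmes should apply the cut-off appropriate to their own assay.

# Sketch: estimate the proportion of neonatal TSH values above a screening
# cut-off. The 5 mIU/L threshold and the data are illustrative assumptions.

def proportion_elevated_tsh(tsh_values_miu_per_l, cutoff=5.0):
    elevated = sum(1 for v in tsh_values_miu_per_l if v > cutoff)
    return 100.0 * elevated / len(tsh_values_miu_per_l)

# Hypothetical blood-spot results from a neonatal screening programme
tsh_results = [1.2, 2.8, 6.1, 3.4, 0.9, 7.5, 2.2, 4.8, 5.6, 1.7]
print(f"TSH above 5 mIU/L: {proportion_elevated_tsh(tsh_results):.0f}% of newborns")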
Consumption of iodized salt
Assessing the use of iodized salt at the community level may serve as a practical index of the effectiveness of intervention programmes. Obviously, this must be coupled with close monitoring to ensure that salt labelled as iodized actually contains an adequate concentration. Surveillance of iodized salt consumption among schoolchildren can be effective in identifying communities in which iodized salt use is low. Once identified, these communities can be targeted for interventions that promote the use of iodized salt.
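As a sketch of how school-based salt testing might be used to flag communities for follow-up, the example below applies an assumed adequacy level of 15 ppm and a 90% coverage target to hypothetical data; both figures and the community names are illustrative rather than prescriptive.

# Sketch: flag communities where the proportion of adequately iodized salt
# samples falls below a coverage target. The 15 ppm adequacy level and the
# 90% coverage target are illustrative assumptions.

def low_coverage_communities(samples_by_community, min_ppm=15.0, target=0.90):
    flagged = []
    for community, ppm_values in samples_by_community.items():
        adequate = sum(1 for ppm in ppm_values if ppm >= min_ppm)
        coverage = adequate / len(ppm_values)
        if coverage < target:
            flagged.append((community, coverage))
    return flagged

# Hypothetical salt iodine results (ppm) collected through schoolchildren
results = {
    "Community A": [25, 30, 18, 22, 27],
    "Community B": [5, 12, 0, 20, 8],
}
for name, coverage in low_coverage_communities(results):
    print(f"{name}: only {coverage:.0%} of samples adequately iodized")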
Indicators of vitamin A deficiency
Night blindness
The prevalence of individuals with a history of night blindness, and the existence of words in the local language that describe this condition, may initially serve as a useful indicator of vitamin A deficiency. Specialists can make quantitative measurements of dark adaptation, but these measurements have been difficult to standardize under field conditions. Also, the low prevalence of positive findings limits the effectiveness of night blindness in detecting or monitoring populations with mild to moderate levels of vitamin A deficiency.
Xerophthalmia
Eye examinations may reveal signs such as conjunctival xerosis and Bitot's spots that can indicate the existence of vitamin A deficiency [7]. However, these signs are somewhat variable and subjective, so health workers must be carefully trained for accurate diagnosis. Moreover, the low prevalence of these findings means that very large samples are required to establish the disorder with any certainty, especially if only mild or moderate levels of deficiency are present. For these reasons, the quantitative assessment of xerophthalmia is not likely to be a practical measure in populations with mild or moderate deficiency.
Conjunctival impression cytology
In this technique, filter paper is applied to the conjunctiva to remove epithelial cells, which are then classified histologically as positive or negative [8]. The method is simple and minimally invasive, and may provide an indication of the likelihood of deficiency for surveillance purposes. However, the results indicate only the presence of positive findings and do not provide a continuous scale of deficiency. Moreover, interpreting the samples requires careful training and standardization. For these reasons, conjunctival impression cytology falls short of providing a reliable, quantitative index of vitamin A status.
Serum retinol
Measurement of retinol has been widely used to determine vitamin A status at the population level. The proportion of individuals with low retinol levels reflects the risk of deficiency. However, retinol does not reflect liver stores of vitamin A and may be affected by other factors, such as infection and protein-energy malnutrition. Also, the method may not be practical under field conditions because of the need for venous blood samples, careful storage and transport of specimens, and sophisticated laboratory analysis. Retinol can be measured in a microsample of serum. The use of filter paper blood spot samples may prove to be feasible, but this method requires further development.
Relative dose response
Dose-response tests, based on measuring the retinol response to an administered dose of vitamin A, measure vitamin A status more accurately [9]. These tests have the significant advantage of reflecting liver stores of vitamin A; however, their acceptability is limited by the requirement that two blood samples be drawn. The modified relative dose-response test requires only a single venous sample, but it is complex for field application and its use as an indicator in continuing surveillance may be limited.
Retinol-binding protein
The potential for retinol-binding protein to indicate vitamin A status has not been fully explored. The test is highly correlated with serum retinol because retinol-binding protein is the carrier protein for retinol in the circulation. When used as an indicator, it has many of the limitations of serum retinol, including depression in response to infections and inability to reflect liver stores. However, it has the important advantage of being potentially measurable in a dried blood spot sample. This indicator must be developed further, but it appears to have substantial potential for field application.
Indicators of iron deficiency
Haemoglobin and haematocrit
These measures of anaemia are the most widely used indicators of iron deficiency at the population level. Haemoglobin measurements are easily made in the field with portable photometers such as the HemoCue system, which requires only a capillary blood sample [10]. Haematocrit testing is also relatively simple, requiring only a capillary sample, and is low in cost. It requires an electrically operated centrifuge, however, which may limit its field application for surveillance purposes.
Although widely used, these two tests for anaemia reflect more severe forms of iron deficiency and are relatively insensitive to milder forms in which only iron stores are reduced. Moreover, because haemoglobin is affected by several factors including acute infection, malaria, genetically based haemoglobinopathies, and protein-energy malnutrition, anaemia-based indicators may not be specific for iron deficiency. Despite these limitations, the prevalence of anaemia remains a highly useful indicator to assess iron status in populations. Because iron deficiency affects women and children preferentially, the finding of anaemia among women and children but not among men provides substantial evidence that iron deficiency is the main cause.
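The kind of group comparison described above can be illustrated with the sketch below, which applies group-specific haemoglobin cut-offs to hypothetical survey data; the cut-off values are commonly cited figures used here as assumptions and should be confirmed against current criteria.

# Sketch: compare anaemia prevalence across demographic groups using
# group-specific haemoglobin cut-offs (g/dL). The cut-offs below are
# commonly cited values, used here as illustrative assumptions.

CUTOFFS_G_PER_DL = {
    "preschool children": 11.0,
    "non-pregnant women": 12.0,
    "men": 13.0,
}

def anaemia_prevalence(haemoglobin_by_group):
    prevalence = {}
    for group, values in haemoglobin_by_group.items():
        cutoff = CUTOFFS_G_PER_DL[group]
        anaemic = sum(1 for hb in values if hb < cutoff)
        prevalence[group] = 100.0 * anaemic / len(values)
    return prevalence

# Hypothetical haemoglobin values: anaemia concentrated among women and
# children but not men would point toward iron deficiency as the main cause.
survey = {
    "preschool children": [10.2, 11.5, 9.8, 10.9, 12.0],
    "non-pregnant women": [11.0, 11.8, 12.5, 10.4, 13.0],
    "men": [14.5, 15.2, 13.8, 14.0, 15.0],
}
for group, pct in anaemia_prevalence(survey).items():
    print(f"{group}: {pct:.0f}% anaemic")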
Serum ferritin
This indicator of iron status is highly useful since ferritin levels reflect iron stores [11]. Thus, they can detect earlier stages of iron deficiency than may be indicated by anaemia. Moreover, ferritin can be measured using microsamples, potentially extracted from a dried blood spot. The greatest limitation is that infection and inflammation can cause elevated serum levels. Therefore, a low serum ferritin indicates iron deficiency, but a normal value does not exclude the disorder in populations where the prevalence of infection is high.
Transferrin receptors
Levels of circulating transferrin receptor proteins may serve as highly specific indicators of iron deficiency [12]. The levels become elevated in the later stages of iron deficiency and are not subject to interference by the presence of infection or chronic disease. Moreover, a test can potentially be developed that would measure transferrin receptor concentration in filter paper blood spot samples. This method holds promise as a useful index of iron deficiency for which samples could be readily collected in the field.
Developing surveillance strategies
One often hears of the need to create surveillance systems to monitor changes in nutrition status on a continuing basis. The word "system" implies building a fixed and permanent structure, however. Rather than systems, we might want to think about surveillance strategies. This implies that each setting requires its own strategy, and that the approach taken to provide the necessary surveillance information may change over time as different needs arise.
A surveillance strategy may involve a number of different data-collection mechanisms, including national, regional, or community surveys; clinic- or school-based data collection; sentinel sites; and other more informal methods. Each approach has its advantages, limitations, and related costs. A strategy may include any or all of these elements to meet the needs of a particular public health programme. The fundamental objective is to develop a flexible and affordable mechanism for collecting essential data that can help guide policy and programme actions, and that can evaluate the impact of interventions.
Another issue in the development of a practical surveillance strategy is the possibility of coordinating data collection for several micronutrients (iodine, vitamin A, iron). This may offer a number of potential advantages. A key factor is that risk groups for these micronutrient deficiencies tend to overlap, so surveillance can be targeted to the same groups. Moreover, developing surveillance that uses a single investment of personnel and financial resources to gather information on several micronutrients may be more cost-effective than separate activities aimed at a single micronutrient.
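As a sketch of what coordinated data collection might look like in practice, the record structure below carries indicators for iodine, vitamin A, and iron within a single survey contact; the field names and indicator choices are illustrative assumptions rather than a recommended standard.

# Sketch: a single survey record carrying indicators for several
# micronutrients, so that one field contact yields data on iodine,
# vitamin A, and iron. Field names and indicator choices are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MicronutrientRecord:
    subject_id: str
    target_group: str                  # e.g. "preschool child", "woman 15-49"
    community: str
    urinary_iodine_ug_per_l: Optional[float] = None   # iodine status
    salt_iodine_ppm: Optional[float] = None           # intervention coverage
    serum_retinol_umol_per_l: Optional[float] = None  # vitamin A status
    haemoglobin_g_per_dl: Optional[float] = None      # iron status (anaemia)
    serum_ferritin_ug_per_l: Optional[float] = None   # iron stores

# Hypothetical record collected in a single community survey contact
record = MicronutrientRecord(
    subject_id="HH12-03",
    target_group="preschool child",
    community="Community B",
    urinary_iodine_ug_per_l=45.0,
    haemoglobin_g_per_dl=10.4,
)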
Implementing coordinated approaches may be difficult. Often, programmes are set up as vertical or categorical structures that are directed at a specific micronutrient problem. In fact, donor agencies often inadvertently encourage vertical programmes by providing resources that are targeted to a specific micronutrient and can only be expended for that purpose. In some cases, an argument can be made for the efficiency of this approach in maintaining programmatic focus. Coordination should, however, at least be considered, and its potential advantages recognized.
The development of more effective surveillance approaches can only proceed if the effort is supported by appropriate applied research. Applied research needs include development of new indicators that can provide more sensitive and specific measures of micronutrient status; field testing of surveillance methods to evaluate their performance; refinement of microsample techniques for measuring biochemical indicators, particularly dried blood spot samples; examination of factors such as infection and protein-energy malnutrition that affect micronutrient levels; and assessment of interactions among micronutrients.