

Major nutrition-related components of NHANES II

The five major components of NHANES II were a household questionnaire, a medical history questionnaire, a dietary questionnaire, examination by a physician, and special procedures and tests. The household questionnaire consisted of questions about family relationships; the age, sex, and race of family members; housing; the occupation, income, and education level of each family member; and participation in the food stamp programme and the school breakfast and lunch programmes. Separate medical history questionnaires were used depending on the age of the sample person: one for children aged 6 months to 11 years and another for persons aged 12 to 74 years. Both the household questionnaire and the medical history questionnaire were administered in the respondent's home.

When individuals arrived at the mobile examination centre, they were scheduled through the dietary interview, the physician's examination, and the special procedures and tests. The procedures and tests included body measurements for all participants; allergy tests for persons aged 6 to 74; X-rays for persons aged 25 through 74, except pregnant women (cervical spine, lumbar spine except for women under 50 years, and chest); and urine and blood tests. From blood samples taken in the centre, a number of nutrition-related assays were done. These included serum albumin, serum vitamins A and C, serum lipids (cholesterol, triglycerides, and high-density lipoproteins), protoporphyrin, serum iron, total iron-binding capacity, serum zinc, and serum copper. Red cell folates, serum folates, serum ferritin, and serum vitamin B12 were determined on blood samples with an abnormal complete blood count, haemoglobin, haematocrit, or MCV, and on a subsample of normals.

The dietary questionnaires consisted of a 24-hour recall, a food-frequency questionnaire, a dietary-supplement questionnaire, and specific questions on medication, vitamin, and mineral supplement usage. All interviews were conducted by trained interviewers who had at least a bachelor's degree in home economics.

In the 24-hour recall, respondents were asked to report all foods and beverages consumed on the previous day. Respondents estimated the size of the portions consumed by referring to food models. In addition to foods and portion sizes, interviewers asked about what time of day the food was eaten and its source. The time of day was coded as one of five ingestion periods: morning, noon, between meals, evening, or total day. The source of the food was coded as home, school, restaurant, or other.
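To make the coding scheme concrete, the sketch below (in Python) shows one way a single coded recall entry might be represented. The field names and numeric code values are illustrative assumptions; the survey's actual record layout is not reproduced here.

```python
from dataclasses import dataclass

# Coded categories from the interview protocol described above.
# The integer code values assigned here are assumptions.
INGESTION_PERIODS = {1: "morning", 2: "noon", 3: "between meals",
                     4: "evening", 5: "total day"}
FOOD_SOURCES = {1: "home", 2: "school", 3: "restaurant", 4: "other"}

@dataclass
class RecallItem:
    food_code: str   # five-digit food code from the survey code book
    grams: float     # portion size estimated with food models
    period: int      # one of the five ingestion-period codes
    source: int      # one of the four source codes

item = RecallItem(food_code="11111", grams=244.0, period=1, source=1)
print(INGESTION_PERIODS[item.period], FOOD_SOURCES[item.source])
```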

Each food item was coded by the interviewer within 72 hours of the interview. The food code book developed for the survey contained five-digit food codes for approximately 2,500 food items. Each food item was identified by name (including brand names if appropriate), by whether it was raw, dry, or frozen, by how it was prepared, and, for mixed dishes without food codes, by major ingredients. A food composition data base updated from NHANES I was used to calculate the energy, vitamin, and mineral content of the reported foods. Modifications to the NHANES I data base included new data from USDA's revised Handbook No. 8, and food composition data from food companies on new products and brand-name products of unique formulation.
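The nutrient calculation itself amounts to matching each coded item against a per-100 g composition table and scaling by portion weight. The following is a minimal sketch under that assumption; the codes and composition values shown are invented, not taken from the survey data base.

```python
# Toy composition table keyed by five-digit food code (values per 100 g).
COMPOSITION = {
    "11111": {"energy_kcal": 61.0, "vitamin_c_mg": 0.9, "iron_mg": 0.03},
}

def nutrient_totals(recall_items):
    """Sum nutrients over all (food_code, grams) items in one 24-hour recall."""
    totals = {}
    for code, grams in recall_items:
        per_100g = COMPOSITION[code]
        for nutrient, amount in per_100g.items():
            # Scale the per-100 g value by the reported portion weight.
            totals[nutrient] = totals.get(nutrient, 0.0) + amount * grams / 100.0
    return totals

print(nutrient_totals([("11111", 244.0)]))
```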

The food-frequency questionnaire elicited information about the consumption of 18 food groups over the previous three months. Frequency was given one of four possible codes: a whole number, never, less than once a week, or unknown. The interval at which the food was usually eaten was also given one of four possible codes: never, daily, weekly, or less than weekly. One question was asked about use of vitamin and mineral supplements, and one about how often the salt shaker was used at the table. Responses to this last question could be assigned to one of three codes: rarely or never, occasionally or seldom, frequently or always.
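For analysis, such frequency responses are typically normalized to a common time base. The sketch below illustrates one possible conversion to approximate times per week; the string code values and the numeric treatment of the "less than" categories are assumptions, not the survey's actual rules.

```python
def weekly_frequency(code, interval):
    """Convert a (frequency code, interval code) pair to times per week.
    code: a whole number, "never", "<1/week", or "unknown";
    interval: "never", "daily", "weekly", or "less than weekly"."""
    if code == "never" or interval == "never":
        return 0.0
    if code == "unknown":
        return None                  # treat as missing
    if code == "<1/week":
        return 0.5                   # arbitrary midpoint (assumption)
    times = float(code)
    if interval == "daily":
        return times * 7.0           # reported times per day
    if interval == "weekly":
        return times                 # reported times per week
    return times / 4.0               # "less than weekly": read as per month

print(weekly_frequency(2, "daily"))  # -> 14.0
```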

The dietary-supplement questionnaire contained questions about whether the respondent was on a special diet, what type, and for how long. One question asked about the possible use of nine medications in the previous week. These were commonly prescribed medications that might interfere with test results or affect interpretation of results. Another question related to problems preventing the respondent from obtaining needed groceries. The final question asked about trouble swallowing, pain, nausea and vomiting following eating, and loss of appetite.

The medication, vitamin, and mineral usage questionnaire requested specific information about brand name, manufacturer's name, and reason for using vitamin or mineral supplements and medications.

The quality of the dietary component was controlled at several levels. Before the survey began, the dietary interviewers were trained in interview techniques and in how to code the 24-hour recall. A manual describing the procedures to be followed was issued to each interviewer. Periodically, the forms were reviewed and evaluated, and instructions were issued to the interviewers to promote consistency. Interviewers exchanged coded 24-hour recall forms to check each other's work, and forms were also reviewed by the field staff before being forwarded to headquarters. At every location, each interviewer tape-recorded two interviews with randomly selected subjects. The recordings were evaluated at headquarters for adherence to procedures. Mean values and frequency distributions were compared at headquarters by stand location and by interviewer to detect unusual results by location and systematic errors by interviewers. Foods for which no appropriate food codes existed were forwarded to headquarters for assignment of new code numbers.
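A comparison of mean values by interviewer of the kind described might look like the following sketch; the use of reported energy intake, the field names, and the cutoff are illustrative choices rather than the procedures actually used at headquarters.

```python
from statistics import mean, stdev

def flag_interviewers(records, z_cutoff=2.0):
    """records: iterable of (interviewer_id, energy_kcal) pairs.
    Returns interviewers whose mean reported intake lies far from the rest,
    a possible sign of systematic interviewing or coding error."""
    by_interviewer = {}
    for interviewer, kcal in records:
        by_interviewer.setdefault(interviewer, []).append(kcal)
    means = {i: mean(v) for i, v in by_interviewer.items()}
    centre, spread = mean(means.values()), stdev(means.values())
    return [i for i, m in means.items() if abs(m - centre) > z_cutoff * spread]

sample = [("A", 2100), ("A", 1900), ("B", 2000), ("C", 3500), ("C", 3600)]
print(flag_interviewers(sample, z_cutoff=1.0))  # -> ['C']
```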


Uses of dietary data

NHANES dietary data have been put to four types of uses: relating diet and demographic characteristics, relating diet and health characteristics, determining interactions of diet and nutritional status indicators, and tracking trends in diet and nutrient intakes over time.

In relating diet to the demographic characteristics of the population, the major question to be asked is: What are the food consumption patterns and nutrient intakes of subpopulations of the United States by such characteristics as age, race, sex, income, occupation, and education? The NHANES dietary data can answer questions such as: How do nutrient intakes and food consumption patterns of persons differ by level of education? What are the regional differences in consumption of certain food groups?

NHANES data have been used to relate the food consumption patterns and nutrient intakes of United States subpopulations to indicators of health status. Specific questions that have been addressed include: How do nutrient intakes compare with the Recommended Dietary Allowances and other dietary guidelines? What dietary patterns are associated with higher levels of tooth decay? What dietary and health variables are associated with iron-deficiency anaemia?

In examining interactions between nutrition-related variables, NHANES data can be used to compare dietary intake, biochemical status, anthropometry, and the presence or absence of health conditions. Questions that can be addressed by the data include: What are the relationships between dietary intake and biochemical status for persons who smoke, use vitamin/mineral supplements, or use oral contraceptives? Are those who take vitamins and other dietary supplements the ones who need them? Are subpopulations with high serum cholesterol and other evidence of cardiovascular disease consuming foods high in cholesterol and saturated fats?

Changes over time in food and nutrient intakes can be tracked and correlations made with health variables. Examples of questions that can be posed to the data include: What changes in obesity and diet have taken place in the last ten years? Are serum cholesterol values declining among men and women?


Plans for future NHANES

The next National Health and Nutrition Examination Survey (NHANES III) is scheduled to begin in 1988. We have already begun planning for the survey. Among the topics being considered are the content, sample design, data processing, co-ordination with other surveys, addition of a longitudinal component, and the possibility of continuous monitoring of special groups.

The needs of government agencies, including the Food and Drug Administration, the Environmental Protection Agency, and the National Institutes of Health, and researchers in industry and academia will be considered. As the content of the survey is being developed, consideration will also be given to which topics should be considered core components. The core components would be administered to all sample persons while the non-core components would be administered to a subsample. In addition, the core components would be repeated in future surveys.

Suggestions for content will be solicited from federal agencies, the legislative branch, the public health and nutrition communities, researchers, foundations, and associations. A variety of mechanisms are being considered to gather recommendations from these groups, including letters, meetings, and advertisements in journals.

Concurrent with decisions about survey content, preliminary decisions must be made about sample design, data processing, co-ordination with other surveys, and the addition of a longitudinal component or continuous monitoring of high-risk groups. Some of the questions to be answered include: whether it is feasible to include primary sampling units from the Health Interview Survey or the Nationwide Food Consumption Survey as primary sampling units in NHANES III; where automation can improve turn-around time, cut costs, and decrease errors; and whether NHANES III can use the same food composition data base that was used in the Nationwide Food Consumption Survey. It is conceivable that the dietary interview could be automated in NHANES III, with coding and edit checks accomplished during data entry while the interview is conducted. Changes could also be made to the current format, increasing the number of 24-hour recalls per person, for example.
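As an illustration of such automation, an interactive interview program might apply edit checks of the following kind at entry time. The stand-in code book and portion limits below are hypothetical.

```python
VALID_CODES = {"11111", "11112"}   # stand-in for the ~2,500-item code book
PORTION_LIMITS_G = (1.0, 2000.0)   # plausible-portion range (assumption)

def check_entry(food_code, grams):
    """Return a list of problems to show the interviewer immediately,
    while the respondent is still present to resolve them."""
    problems = []
    if food_code not in VALID_CODES:
        problems.append(f"unknown food code {food_code}: refer to headquarters")
    low, high = PORTION_LIMITS_G
    if not (low <= grams <= high):
        problems.append(f"portion {grams} g outside plausible range")
    return problems

print(check_entry("99999", 5000.0))
```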


Conclusion

The last 15 years have been a period of unparalleled interest in the relationship of diet to health. The dietary component of NHANES, together with the clinical and biochemical assessments, forms a unique data set on a nationally representative sample of people. NHANES data have been used to monitor changes in health, nutritional status, and dietary intake over time. Interrelationships among dietary and health variables in the general population have been studied. NHANES III will continue to build on this foundation of information.

A problem in planning the dietary component for NHANES III is that conflicting demands are being made. Regulatory agencies and researchers want more detail about the food people eat, how it is packaged and prepared, and what nutrients, additives, and toxic substances it contains. Demands for more rapid publication of data would lead us to simpler interviews with less detail about the foods consumed.

While this dilemma probably cannot be resolved immediately, we would like to hear discussion of the pros and cons of shortened, simplified interviews and data bases for use with NHANES. We would also like to hear recommendations on how to make our national surveys more compatible while extending their usefulness to policy makers and researchers.


Systems considerations in the design of INFOODS


Introduction
Staff turnover and system growth
Documentation
The choice of environmental and basic tools
Choices of operating systems
Choice of programming language
User interface
Data representations
System architecture and linkages
Stability
Primitive tool-based systems
Summary
References


JOHN C. KLENSIN

Laboratory of Architecture and Planning and INFOODS Secretariat, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA


Introduction

The International Network of Food Data Systems (INFOODS) was organized in 1982 as a global collaborative of people and organizations interested in working towards improving the amount, quality, and availability of food composition data. Currently it is focusing on the development of standards and guidelines for (a) the terminologies and nomenclatures used in describing foods and food components, (b) the gathering of food composition data, including sampling, assay, and reporting procedures, and (c) the storing and interchange of food composition data.

INFOODS is co-ordinated by a small secretariat, based at the Massachusetts Institute of Technology, which has responsibility for initiation, co-ordination, and administration of international task forces to work on specific problems. Additionally, this secretariat serves as a resource for, and clearing-house for information about, food composition activities around the world. INFOODS works with, and is organizing where necessary, regional groups throughout the world; these provide information and assistance for food composition work in their geographic areas. INFOODS presently is funded primarily by the United States government and administratively supported by the United Nations University.

It is generally assumed that the major product of INFOODS will be one or two integrated computer systems for nutrient and nutritional data. In terms of both technical problems and the requirements of different groups of users, that goal presents serious challenges. It is useful to review those challenges and the reasons why a different strategy may be in order.

Technical problems and user requirements may be seen as challenges because they involve questions for which we don't know the answers, as well as several for which, at this point, we probably do. The validity of our belief in the answers we have depends on whether certain analogies hold between systems for the management, recording, and analysis of nutrition data and those for other types of data - especially statistical and social measurement data - and the scientific application of them. In addition, a recurring theme in systems design is that large systems usually involve complex choices to which there are few "correct" answers. Instead, there are many trade-offs in which the needs and preferences of one group are optimized at the expense of others. Making these choices explicitly and with an understanding of their implications, and remembering, far into the future, the reasons for the options chosen, tends to promote systems that are both more internally consistent and more consistent with their avowed goals. Inevitably, even when choices are made explicitly and remembered, some of the decisions will turn out, as time passes, to have been wrong. As a result, one of the major challenges - almost a meta-challenge - is designing for damage containment, to ensure that a few wrong decisions do not result in the total uselessness of the system or the need to rebuild it from scratch. An understanding of how the wrong decisions were arrived at contributes to containing the damage.

One of the themes that is not important is the question of "personal" v. "large" computers as ends in themselves. There can be specific reasons for choosing smallish machines - cost, space, even the psychological advantage of being able to pull the thing's plug if it behaves offensively - and there are also some reasons for choosing large ones (or complexes of small ones) - economies of scale, the ability to retain large data bases of foods or consumption histories, and convenient sharing of information among scientists. But in discussing the reasons for choosing a machine we should not get involved in debate about the relative merits of small and large computers. It is especially important to avoid that debate because the use of mixed strategies, in which different equipment is used for different operations, may be the best overall strategy given the present state of the art.

Before discussing the issues, challenges, and problems involved in trying to construct integrated systems, we should look at the question of why such systems should be considered. Small non-integrated systems have several advantages. They are typically cheaper to build and easier to maintain, and do not require large team efforts over a long period of time. Perhaps as important is one of the major discoveries of the microcomputer revolution - that considerable "friendliness" is a characteristic of machines that are not very capable. When capability is limited, it becomes possible to list all the commands, to list all the options, and to provide clear error messages that identify all choices. In other words, a message such as "No, you cannot type that answer, you must use one of the following three" is a reasonable and possible option. It is neither reasonable nor possible if there are tens of options to a particular command. Nor is it feasible to respond to an inquiry about what a command is called by listing all commands when there are several hundred from which to choose. The limited environments of small and unintegrated systems also tend to make them comparatively easy to document.

In this paper, large-scale systems are assumed to be groups of programs that provide a more or less common face to users, that permit free movement of data and intermediate results between different commands or other program components and analyses, and that let the user determine the order and content of both analyses and display formats. Such assumptions make the large-scale system a different type of object, rather than just a larger one, from most traditional program packages or packaged programs.

If one can figure out what is to be done with the data and what analytic and accessing capabilities are needed, it is often possible to design a collection of several medium-sized programs or small-scale systems - serving quite different purposes and users, and having different interfaces - that operate from a single data base. In terms of the complexities of getting the data-base design right, that type of arrangement raises the same issues as the large-scale system, but it is much easier from a software design standpoint. Also, the individual programs may be much easier to get onto a small machine than a complete large-scale system would be. So that is one of the alternatives to be considered.

A potential advantage of large systems is that they should be able to provide a user with more flexibility. At their best, they contain a wider resource base - more tools that can be applied - for most situations. If designed well, they should have a longer life expectancy than smaller systems because they can be extended further and can be used in more innovative and creative ways, including ways not anticipated by their designers. Larger systems can support a wider variety of models and analyses, and consequently permit comparisons among techniques. Such additional analytic capabilities are usually supplemented by facilities for handling large or complex data sets that are beyond the capabilities of a small system.

Most of the issues raised in this paper apply to smaller systems as well as larger ones, but become much more important as systems become larger. The would-be developers of a large system must consider these issues in the early design stages to avoid severe problems later on. The major challenges are easily stated: planning and designing what the system is to do and how to implement it, and then testing those ideas. Also essential, although seemingly obvious, is that resources adequate to the task be available not only at the beginning but also over a long enough span to do the entire job. The best long-term strategies, which tend to focus on the building of tools and prototypes and the conduct of experiments before a final commitment is made to a strategy, tend to be poor short-term ones from the standpoint of sponsors or agencies looking for results. The fear of ending up with only a prototype when the resources run out has prevented many prototypes from being built, and as a result many problems have occurred in production that would have been easily identified and eliminated in prototype experiments. The resource issue will not be addressed here, except to note that tasks usually take much longer and cost much more than expected.

It is worth noting that a very large fraction of the time and cost overruns in computer system building and programming can be attributed to a lack of clarity about what the goals are, what users are to be served, and what facilities are to be incorporated. Clear thinking, careful analysis of requirements and constraints imposed by the users (as distinct from ones imposed by real or imagined technical considerations), and careful design consistent with that thinking and analysis are usually richly rewarded, and the failure to perform such thinking and analysis is equally richly punished.


Staff turnover and system growth

The planning of large systems requires consideration of a future in which most of the members of the development group will change by the time the system is in active and productive use. By the time the system is ready for demonstration, many of the development staff will have departed, although the designers may well still be around. This implies that careful attention must be paid to how additions and modifications to the system will be made and how the system will be extended in the future either by the users or the design group. With the typical system design, there are benefits from building special tools to aid in system construction, integration, and testing. It is often useful to expend some effort to define and delimit the framework of the proposed system - its boundaries, fundamental structure, and relationship to the outside world. How much time and effort can and should be spent in these areas becomes another critical choice. This choice is complicated by the knowledge that what is appropriate for a central staff to do in developing a system may not be appropriate for a staff later on (especially one that is administratively or geographically dispersed) and may not be appropriate when users try to create their own extensions.


Documentation

The question of how to document a large system is a key one that should be addressed early, as part of the planning of the code and user interfaces. One approach is to provide comprehensive documentation; but comprehensive documentation may run into thousands of pages as the system grows [17]. Such volume will almost certainly lead to complaints about size and bulk, comments about needing wagons rather than binders, and requests that everything be distilled onto cards that can be put into pockets and purses. Standards about the information to be included in documentation - algorithms [7, 12, 16], error messages, and the like, as well as sampling information and methods of analysis - make such volume inevitable; a mere four pages of description for each of 500 commands leads to 2,000 pages of documentation. On the other hand, documenting a large system as if it were a small one, adopting pocket cards or brief on-line files as the only form of documentation, or in some other way trying to keep the total under 100 pages will cause user frustration or worse. These are questions that do not have clear answers, but making choices early and clearly, and remembering the reasons for the decisions that are made, can help. At the same time, these decisions, like others discussed throughout this paper, should be made in a way that minimizes the damage if the world appears different at some time in the future.


The choice of environmental and basic tools

Almost any applications system that one might build today will exist in some environment over which the system developers do not have complete control. The days of writing code in absolute binary and keying it in from the front panels of machines have departed, some recent excesses in the microcomputer community notwithstanding. Potentially, this means that choices must be made about what environments will be established to develop and operate the system - choices about hardware, operating systems, and languages. In many cases, the possible choices are so constrained by circumstances as to be trivial or non-existent. Worse, the constraints often arise from circumstances that have nothing to do with the requirements or intentions of the new integrated system, and will often lead to choices that are pathological for it. The two sections that follow are provided for the reader who has the luxury of making choices; for the reader who does not, they may be helpful in anticipating problems where choices are more constrained.


Choices of operating systems

In an ideal world, the operating system chosen for any applications system is one that is smart, flexible, and state-of-the-art, and that operates on powerful, inexpensive, widely available hardware. In addition, the operating system must be utterly stable, so that applications development does not involve aiming at a moving target. These attributes almost never exist in combination. Advanced and state-of-the-art systems are typically kept that way by continual revision or frequent releases. Each revision will "improve" the environment in ways that more or less significantly undermine existing work. Applications system developers can gain control over such changes by developing and maintaining their own operating systems, but the price of doing so is usually too high. Systems should be selected to strike a reasonable balance between sophistication and modernness on the one hand and stability on the other. Once the selections are made, software design criteria should include the ability to keep the stable interface that end-users will insist upon; for once someone gets used to a system that is even moderately satisfactory, it is likely to be strongly preferred to any other, even those that are objectively better. It will be expected that this stability be preserved even when the supporting operating system is changed.


Choice of programming language

The choice of a programming language (or set of languages) usually follows that of an operating system. While there have been cases in which an operating system was chosen because it supported a particular language, such cases are rare. All things being equal, systems that can be built entirely in a single language are much easier to cope with than those that require two or three. If nothing else, use of a single language makes the management task easier, since few programmers are equally comfortable and efficient in several languages at the same time; just as with natural languages, it is difficult to "think" in more than one language concurrently, regardless of what one can manage at separate times or in different places.

Unfortunately, from the language standpoint, the requirements of such systems involve procedures for data entry and recording, for locating and aggregating data, and for doing statistical computations both on surveys and to construct food tables. Historically, almost all good software for screen management and data entry support has been created in or for COBOL or PL/I. By contrast, almost all of the research and development work in numerical algorithms has been done in FORTRAN and ALGOL 60. While FORTRAN and ALGOL are quite suitable for computational codes, they are not suitable for systems work unless machine dependencies, assembly-language subroutines, and other forms of idiosyncratic and incomprehensible code are introduced. COBOL is terrible for systems work, and not much better for numerical computation. The languages that are very good for systems work tend to be too poor or untested for serious computational work. While there are two possible exceptions, both are very large and complex as languages go, and there are allegations that they are very clumsy and hard to learn; further, they tend not to appear very often (at least in complete form) in microcomputer implementations. The alternative, writing in assembly languages rather than relatively high-level ones, usually leads to trouble, and should not be seriously considered in building a large system with a long life expectancy. There is a third alternative in languages like BCPL, BLISS, and C, which are really medium-level, nearly machine-independent assembly languages. For serious applications work, however, as distinct from systems work, they can be nearly as much trouble as assembly languages, and for much the same reasons.


User interface

The user interface for a system includes not only how users will communicate with the system, but also how the system will communicate with users in normal and error situations and how output will be formatted and presented. There are many opinions about each of these issues, and none is completely correct for all audiences. Every user-interface decision is problematic. This paper cannot hope to provide a complete discussion of the issues; the examples that follow are intended to convey the flavour and difficulty of the challenge.

If a single user-level language is chosen and embedded in the system, the choice must be correct for all present and future audiences. Interfaces that adapt automatically to the characteristics of individual users and their growing knowledge of the system are a major research area today. Where such adaptation is possible, it complicates documentation, both for the users themselves and for those who are expected to understand the processing and analysis activities of others. If one can design the language for the system around the particular needs of the users to be served, be they the builders of food tables, epidemiologists, or hospital dieticians, much convenience may be gained for the users, and their learning and use of the system may be expedited. At the same time, such language design may require a great deal of learning on the part of users with other backgrounds or interests.

If the needs of several different types of users are to be supported, the system will end up with multiple languages and interfaces and all of the inconsistency and unpredictability to which that leads. Since large, diverse systems attract people with diverse needs and backgrounds, there are no easy solutions - but this may be another argument for not building such systems at all.

One popular solution, at least hypothetically, is to try to use a natural language, such as English, as the means of instructing the system. With advances in the technology in the last few years, this is probably a feasible option, although it entails considerable difficulty in design and implementation. More significant difficulties lie with the degree to which natural languages lack compactness of form and absolute clarity, which is why mathematical and other symbolic notation is used in statistical work. One of the greatest difficulties in using natural languages is getting people to understand them unambiguously.

All of the preceding comments on the user interface apply to picture languages, pointing languages, and even shouting-at-the-computer languages just as much as they apply to typed commands. One can either optimize for a particular group of users and leave everyone else somewhat inconvenienced and unhappy, or one can try to find compromises somewhere.

The actual mode of communication between user and system is almost a separate issue from that of the language used in the communication process, although some choices in this area can constrain, or remove constraints from, language choices. User-oriented menus, help systems, command-completion systems, and question-answering are good choices for raw beginners; it has been argued elsewhere [15] that they have a tendency to grow pathological for regular and experienced users. Many of the most interesting and sophisticated of the less traditional approaches to human-computer interfaces rely on specialized hardware, which may prevent their application in many specific implementations.

One useful alternative to a single, firm choice of interface is to design "agent" facilities so that the system can easily support programs - both system- and user-provided - that run other programs on behalf of the user. Agents may be useful for supporting alternative command formats and presentations, default arrangements, and a choice of interfaces such as menus or question-asking. They are also a convenient framework for system-extension facilities for use by the relatively casual user [10], and they appear to be well suited to building expert or assistant interfaces. By contrast, while they can be used to provide conversational or menu-driven environments, or those that require screen inputs, they tend not to work well, or at least to be difficult to support, when the underlying environment has any of those characteristics. These drawbacks impose some major constraints on system organization.
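A minimal sketch of the agent idea follows, with invented names throughout: a thin, user-supplied layer maps a preferred command vocabulary and per-user defaults onto the system's underlying commands, so alternative interfaces can be layered on without changing the system itself.

```python
# Stand-ins for the system's underlying commands.
UNDERLYING = {
    "tabulate": lambda args: print("tabulating", args),
    "plot":     lambda args: print("plotting", args),
}

class Agent:
    """Runs underlying commands on the user's behalf, applying the
    aliases and default arguments chosen by that user."""
    def __init__(self, aliases=None, defaults=None):
        self.aliases = aliases or {}
        self.defaults = defaults or {}

    def run(self, command, args=()):
        name = self.aliases.get(command, command)   # translate vocabulary
        full_args = tuple(self.defaults.get(name, ())) + tuple(args)
        UNDERLYING[name](full_args)

dietician = Agent(aliases={"table": "tabulate"},
                  defaults={"tabulate": ["per-100g"]})
dietician.run("table", ["vitamin A"])   # -> tabulating ('per-100g', 'vitamin A')
```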

The difficulties and choices found in user communication with the system are paralleled in system communication with the user and with output presentation. The design of all formats around a 24 x 80 character display can be a severe limitation, especially for plots and graphics. Worse yet, in this day of stress on interaction, is the design of output for the line printer, with its headers and footers and its long and wide pages at low resolution. At the same time, there are still many analysts who prefer working with piles of paper to sitting at terminals; articles and food tables will probably be published and used on paper for a long time to come. Designs that depend on high-resolution graphics devices will exclude many users who will be unable to pay the price of entry to the system. Almost any solution will either make some class of potential users very unhappy or will limit the groups and types of users who can be served. This makes it very important to make decisions early and with an understanding of whose happiness is being considered and whose is being sacrificed.


Data representations

Most integrated environments make use of some kind of file system to retain information - a worksheet, a special system file, or even a full data-base management system. In addition to providing a compact way to save information during and between sessions, these files can be used as the mechanism by which all user commands, other than those intended to read raw data into files and display the contents of files, communicate with each other and with the users. In other words, computational commands do not read, clean, or process raw files, nor do they print results. Having commands work this way ensures (given adequate data representations) that any command can use the outputs of any other appropriate commands as inputs. That level of compatibility will apply to commands written in the future, as the system is extended, as well as to those designed initially. This type of strategy is also complementary to agent strategies and to primitive tool-based systems (discussed below).
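The following sketch illustrates the style, using an arbitrary file format: a computational command neither reads raw data nor prints results, but transforms one saved data set into another that any other command can then consume.

```python
import json

def read_dataset(path):
    """Load a saved data set, e.g. {"variables": [...], "rows": [...]}."""
    with open(path) as f:
        return json.load(f)

def write_dataset(path, dataset):
    with open(path, "w") as f:
        json.dump(dataset, f)

def select_rows(in_path, out_path, predicate):
    """A computational command in the file-mediated style: it reads one
    system file and writes another for later commands, printing nothing."""
    data = read_dataset(in_path)
    data["rows"] = [row for row in data["rows"] if predicate(row)]
    write_dataset(out_path, data)
```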

As a system-building approach, such strategy has a long tradition in statistical and social science computation [8, 9]. At the same time, many users find it inconvenient (unless it is hidden) for trivial sets of operations. It also leads to inconvenience and unpredictability when one discovers, late in the life of a system, that the data representation forms are inadequate, that there is no mechanism for cleanly extending them, and that the only practical solution is to have some commands that simply print results. For example, some statistical systems in the recent past have run into major difficulties as the requirements of new or proposed procedures forced a choice between moving from columns and data matrices to symmetric matrices and multi-dimensional arrays on the one hand, and, on the other, deciding that some routines should display results that could not be captured in the file system. We are aware of several situations in which systems have been reorganized in major ways internally, requiring users to convert data sets, in order to try to cope with these problems as they unfold. Naturally enough, the problems tend to be buried as much as possible, rather than being cited explicitly in the literature.

To a degree, the more heavily the system relies on a single fixed set of data structures, the more dependent it becomes on the correctness of those data structures and file representations; such dependence is the negative technical aspect of the approach. So once again there is a challenge in trying to make the right decision - in balancing the compatibility advantages against convenience for trivial tasks, and against the risks of having to adopt a mixed strategy or undertake a major redesign if the data representations prove inadequate to future developments. There are alternative methods for data conversion, such as globally changing all files, that would not exist in an integrated environment. However, a conversion of such broad scope threatens the integrity of the multiple dependent programs operating off a common data base or data representation, just as it would threaten a more highly structured system.

