Non-controversial issues: Population, resources, and technology

There has been, and still is, great controversy as regards the essentiality (non-substitutability) of certain environmental resources. However, the controversy is largely over definitions and details, not fundamentals. Possibly this confusion has arisen because the issues were not formulated sharply enough, until recently. I think there is a reasonable consensus among experts on the fact that some environmental services are essential to long-run human survival on this planet. The existence of "critical" environmental resources is not seriously doubted by most people. The doubters are mostly conservative libertarians with a deep faith in the ability of markets to allocate scarce resources and to call forth technological (or other) substitutes in response to any perceived scarcity.

The weakness of this position is that markets for environmental services are virtually non-existent. Markets must function through price signals. Clearly we need food, sunshine, clean air, and fresh water. We also need the waste disposal services of bacteria, fungi, and insects. All are, at bottom, gifts of nature. Because they are not "commodities" that can be owned and possessed or physically exchanged, they have no prices. Moreover, since these services are not produced by human activity, price signals could not induce an increase in the supply. What is still doubted by many scientists, on the other hand, is the answer to the second half of the question: whether or not these essential environmental resources are truly vulnerable to human interference and possibly subject to irreversible damage.

One example of an essential environmental resource that appears to be subject to irreversible damage is the ozone layer of the stratosphere. The cause of the damage, it is now agreed, is atomic chlorine, which originates from the inert chlorofluorocarbons (CFCs). These compounds do not break down in the lower atmosphere; they gradually diffuse into the stratosphere, where they are broken apart by high-energy ultraviolet radiation. The chlorine atoms, in turn, react with and destroy ozone molecules, thus depleting the protective ozone layer. This phenomenon was very controversial 20 years ago, but the controversy has largely subsided, thanks to the discovery of annual "ozone holes" in the polar stratosphere, first observed in the mid-1980s.

Another example of increasing consensus concerns climate change. The climate is certainly an environmental resource. Even a decade ago there were still a number of scientists expressing serious doubts about whether the problem was "real." The major source of doubt had to do with the reliability of the large-scale general circulation models of the atmosphere that had to be used to forecast the temperature effects of a build-up of greenhouse gases (e.g. carbon dioxide, methane, nitrous oxide, CFCs). Since then, the models have been improved significantly and it has been established fairly definitively that climate warming has been "masked" up to now by a parallel build-up in the atmosphere of sulphate aerosol particles (due to sulphur dioxide emissions), which reflect solar heat and cool the earth. The two effects have tended to compensate for each other. However, the greenhouse gases are accumulating (they have long lifetimes) whereas the sulphate aerosols are quickly washed out by rain. In other words, the greenhouse gas concentration will continue to increase geometrically, whereas the sulphate problem may increase only arithmetically or not at all (if sulphur dioxide emissions are controlled). In any case, the Intergovernmental Panel on Climate Change (IPCC) has now agreed that the greenhouse problem is indeed "real." The controversy continues, however, with regard to likely economic damage and optimal policy responses.
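
The contrast drawn here between geometric and arithmetic behaviour follows directly from the difference in atmospheric lifetimes. The sketch below is a toy one-box accumulation model intended only to illustrate that point; the emission rates and lifetimes are round-number assumptions, not values taken from the text or from the circulation models mentioned above.

```python
# Illustrative one-box model: why a long-lived greenhouse gas keeps
# accumulating while a short-lived aerosol quickly levels off.
# All numbers are assumed, chosen only to make the contrast visible.

def simulate(emission_rate, lifetime_years, years, dt=0.01):
    """Integrate dC/dt = E - C/tau and return the concentration
    (arbitrary units) after `years` years, starting from zero."""
    c = 0.0
    for _ in range(int(years / dt)):
        c += (emission_rate - c / lifetime_years) * dt
    return c

for years in (10, 50, 100):
    ghg = simulate(emission_rate=1.0, lifetime_years=100.0, years=years)
    aerosol = simulate(emission_rate=1.0, lifetime_years=0.02, years=years)  # ~1 week
    print(f"after {years:3d} yr: long-lived gas = {ghg:5.1f}, "
          f"short-lived aerosol = {aerosol:5.3f}")

# The aerosol settles almost immediately at emission_rate * lifetime,
# whereas the long-lived gas is still far below its (much higher)
# steady-state level after a century - it simply keeps building up.
```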

There is already a near-consensus among experts that continued human population growth is not consistent with long-run sustainability (question 2.1) and that some natural resources must eventually be depleted (question 2.2). On the other hand, there is less agreement about whether or not increasing waste and pollution would constitute a limit on growth (question 2.3) or whether or not the balanced natural systems such as the carbon, nitrogen, and sulphur cycles are at risk (question 2.4).

The unacceptability of continued population growth (question 2.1) is a matter on which there is reasonably wide consensus. Malthus foresaw that population growth would eventually outrun the carrying capacity of the earth. Colonization of new lands in the western hemisphere, together with dramatic improvements in agricultural technology, forestalled the crisis for two centuries. Some conservative economists regard this as sufficient evidence that "Malthus was wrong" and that today's neo-Malthusians are unnecessarily alarmist. Nevertheless, the alarm has been raised once again, perhaps on better grounds: there are no more "new lands" waiting for cultivation, and the potential increases in yield available from fertilizers and plant breeding have already been largely exhausted.

Technological optimists - notably Herman Kahn and his colleagues (Kahn et al. 1976) - have unhesitatingly projected that early twentieth-century rates of increase in agricultural productivity can and will continue into the indefinite future. However, agricultural experts are much less sanguine. The potential gains from further uses of chemicals by traditional methods are definitely limited. Ground water is already becoming seriously depleted and/or contaminated in many regions of the United States and Western Europe, where intensive irrigation cum chemical agriculture have been practiced for a few decades. Such problems are now also becoming acute in places such as northern China. Over a decade ago Bernard Gilland wrote:

Since the onset of the rapid rise in the world population growth rate over 30 years ago, there has been speculation on the human carrying capacity of the planet. Most writers on the problem either hold, on technological grounds, that the Earth can support several (or even many) times its present population, or warn, on ecological grounds, that the Earth is already overpopulated and that human numbers should be reduced. I shall try to show that neither of these views is realistic, and that a plausible assessment of carrying capacity leads to the view that the world is not yet overpopulated but will be so in the second decade of the twenty-first century, when the population will be 60 percent larger than at present. (Gilland 1983, p. 203)

Gilland went on to conclude:

Estimates for global carrying capacity and long-range demographic projections are admittedly subject to wide margins of error, but the consequences of relying on an excessively optimistic assessment of the future population food supply balance would be so serious that a conservative assessment is justified. (Ibid., p. 209)

Admittedly, Gilland's assessment was based on conventional agriculture using land now classified as "arable." Julian Simon argued that this is not a fixed quantity, and that so-called arable land had actually been increasing at a rate of about 0.7 per cent per annum from 1960 to 1974 (Simon 1980, p. 1432). This is one of the reasons why food shortages projected earlier did not occur. But most of the "new" cropland was formerly tropical forest (the rest was grassland - for instance, the vast and ill-conceived "new lands" projects of Soviet central Asia). Deforestation has now become an acute problem throughout the tropics, and most tropical forest soils are not very fertile to begin with; they are rapidly exhausted of their nutrients by cropping. There is no basis for supposing that the amount of arable land can continue to increase much longer, if indeed it has not already begun to decrease for the reasons noted above. In any case, erosion and salination are taking a constant toll of the lands already in production.
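
For scale, the compound arithmetic implied by Simon's 0.7 per cent figure is easy to check; the short calculation below uses only the rate and period quoted above.

```python
# Compound growth implied by arable land expanding at 0.7% per year.
rate, years = 0.007, 14                      # rate and 1960-74 period cited above
growth = (1 + rate) ** years
print(f"cumulative increase over 1960-74: {growth - 1:.1%}")    # roughly 10%
print(f"doubling time at this rate: {0.693 / rate:.0f} years")  # roughly a century
```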

With regard to the possibility of continuing to increase the productivity (yield) of existing arable land, there is a continuing push to develop improved varieties and higher photosynthetic efficiencies. Biotechnology is now beginning to be harnessed to increase food production, and there is optimistic talk of a "second green revolution." However, for some years past global grain production per capita has actually been declining, so incremental improvements will be needed just to keep up with population growth.

Gilland also did not take into account several theoretical possibilities, including such "high-tech" schemes as genetically engineered bacteria capable of digesting cellulose or crude oil, large-scale hydroponics, and massive irrigation of tropical deserts such as the Sahara using desalinated sea water. Certainly, these possibilities must be taken seriously, and some of them may play an important role before the end of the twenty-first century. On the other hand, there is no chance that any of them could make a difference within the next 20 or 30 years. In short, there are strong indications that agricultural technology cannot continue to outpace population growth in the third world for more than another few decades. For these reasons, the majority of demographers, and most economists, now take it for granted that population growth must be brought to an end as soon as possible if sustainability is to be achieved (e.g. Keyfitz 1990, 1991).

As regards concerns about resource exhaustion (question 2.2), the "neo-Malthusian" position was taken very seriously by some alarmists, such as Paul Ehrlich, in the 1970s. The argument was made that economic growth is inherently restricted by the limited availability of exhaustible natural resources (e.g. Meadows et al. 1972). However, it is now widely agreed among both economists and physical scientists that energy or mineral resource scarcity is not likely to be a growth limiting factor, at least for the next half-century or so. The Malthusian "limits to growth" position adopted by some environmentalists in the 1960s and 1970s has been largely discredited, both by empirical research (e.g. Barnett and Morse 1962; Barnett 1979) and by many theorists. The main reason for the change of perspective is that the neo-Malthusian view was naive in two respects.

First, the neo-Malthusians neglected the fact (well known to the fuel and mineral industries) that there is no incentive for a mining or drilling enterprise to search for new resources as long as it has reserves for 30 years or so. This is a simple consequence of discounting behaviour. It explains why "known reserves" of many resources tend to hover around 20-30 years of current demand, despite continuously rising demand. Secondly, they gave too little credit to the power of market-driven economies to call forth technological alternatives to emergent scarcities (e.g. Cole et al. 1973; Goeller and Weinberg 1976). However, as it turns out, it is overused "renewable" resources, such as arable land, fish, fresh water, forests, biodiversity, and climate, that are more likely to be limiting factors.
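
A minimal numerical sketch of the discounting argument is given below; the 8 per cent discount rate is an assumption chosen for illustration, since the text does not specify one.

```python
# Why exploration beyond a 20-30 year horizon is hard to justify:
# revenue that far in the future is worth very little today.
# The 8% discount rate is an illustrative assumption.

discount_rate = 0.08

def present_value_factor(years, rate=discount_rate):
    """Value today of one unit of revenue received `years` years from now."""
    return 1.0 / (1.0 + rate) ** years

for horizon in (10, 20, 30, 40, 50):
    pv = present_value_factor(horizon)
    print(f"revenue {horizon:2d} years out is worth {pv:.1%} of face value today")

# At 8%, revenue 30 years out is worth about 10 cents on the dollar and
# revenue 50 years out about 2 cents - which is why "known reserves"
# tend to hover around two or three decades of current demand.
```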

The existence of feasible strategies to achieve population stability (question 3.1) is now generally accepted, although the subsidiary question of the most appropriate means remains murky. This optimism is based partly on evidence of a slow-down in global population growth in recent decades. However, it is admittedly unclear whether the observed slow-down (mostly in China, so far) can be extrapolated to other countries, particularly in the Muslim world. Still, the majority of experts seem to believe that the required "demographic transition" is economically and institutionally feasible, in principle. Here the central problem is seen as achieving near-universal literacy, equal rights and legal standing for women, a social security net for the poor, and real economic growth to finance all of this, at a rate fast enough to reach that "middle-class" standard within a few generations.6

Demographers and social scientists generally agree that these are the preconditions for radically reduced birth rates. A few years ago Jessica Matthews of the World Resources Institute (WRI) had the following comment:

The answer is emphatically that there is a realistic path to global population control. The demographers measure what they call unmet needs for contraceptives. That's the place where action has to begin - with women and couples who express a desire to use contraceptives, but who currently have no access to them. The cost is about $10 per couple per year. The need is several times world spending on contraception - a trivial, almost infinitesimal sum, compared to defense spending. I dare say it would probably be covered by the cost of one B-2 bomber, about $500 million.... The second most important realistic, feasible, direct intervention is through women's education. For reasons that are not entirely understood, even primary school education makes a huge difference in women's fertility rates. Therefore education of women in developing countries, and expansion of that opportunity, will have a huge effect. (Matthews 1990, pp. 27-28)

The 1994 Cairo Conference on Population and the Status of Women echoed most of Matthews' themes. Although there were passionate objections to the conference itself, and to the manifesto signed by most attendees, they scarcely challenged the cause-effect relationships set forth by Matthews. On these issues, there is a wide consensus among experts. The most passionate debates with regard to population policy centre on moral (and, of course, political) questions of methods of birth control and, especially, the legitimacy of abortion. These arguments are not within the realm of science or scientific debate.

Controversial issues: Pollution, productivity, and biospheric stability


The existence of plausible threats to biospheric stability, even to survival (questions 2.3 and 2.4 above), is by no means obvious. The question of whether or not pollution constitutes a possible limiting factor for economic growth (question 2.3) is perhaps the most debated at present, and it remains highly controversial. If there is any consensus on this issue, it is merely that "toxification" - in the sense of "toxic wastes anywhere near my neighbourhood" - is unacceptable (i.e. must be prohibited regardless of cost). The next section discusses this further. But the extent to which pollution constitutes a limitation on growth itself, or on the welfare generated by economic activity, remains an open question.

The problem of climate warming has been extensively studied and debated (as mentioned above), though there are still significant areas of disagreement among experts with regard to economic damage and the appropriate response strategies. On the other hand, the issue of environmental acidification and/or toxification has never been considered seriously as a global threat to human survival. However, concerns are beginning to arise, especially in regard to cancer and human reproductive capacity. The link between various chemical agents and biological impacts none the less remains largely speculative, and is likely to remain so for many years. Damage mechanisms and thresholds are known in some cases, but not in others. However, it is fairly easy to construct a simple catalogue of measures of materials flux and consequent waste generation that self-evidently cannot continue to increase indefinitely.

The issue of whether or not there is a threat to biospheric stability itself (question 2.4) is rather deep. There are two aspects: the first has to do with phenomenology; the second has to do with the essential indeterminacy of the risk. Not only is there no consensus on either of these points, there has been almost no discussion up to now. I return to this question at greater length below.

As regards the third main question, concerning the least-cost (or "least-pain") transition to a sustainable trajectory, the problem of slowing population growth (question 3.1) has already been mentioned. The existence of plausible technological "fixes" (question 3.2) and the possible existence of large numbers of "win-win" opportunities or "free lunches" (question 3.3) are much more controversial. There is also no consensus as yet. These questions are discussed after the two digressions - on toxicity and on biospheric stability.

On toxicity

There is no doubt that widespread fear of exposure to toxic chemicals is one of the major driving forces behind the environmental movement. The near-hysterical media coverage of the "Love Canal" episode and the proliferation of "Superfund" sites certainly support this contention. Yet, as a basis for discussing environmental threats half a century hence, one needs a different kind of evidence. Unfortunately, methodological problems proliferate even faster than Superfund sites.

First, the number of industrial chemicals produced in annual quantities greater than 1 metric ton is estimated at 60,000. The number grows by thousands each year. Only a tiny percentage of these has been tested for the whole range of toxic effects. In fact, it could be argued that none has, since new effects are being discovered all the time, often by accident or from epidemiological evidence long after the fact. For instance, mercury was not known to be harmful in the environment until the mysterious outbreak of "Minamata disease," a severe and sometimes lethal neurological disorder among cats, seabirds, and fishermen living near Minamata Bay, in Japan. It took several years before public health workers were able to trace the problem to organic mercury compounds (mainly methyl mercury) in fish from the bay. The ultimate source turned out to be inorganic mercury from spent catalysts discharged by a nearby chemical plant. The toxic effects of cadmium ("itai-itai disease") were discovered in a similar way.

Second, quantitative production and consumption data for chemicals are not published consistently even on a national basis, still less on a worldwide basis. Data can be obtained only with great difficulty, from indirect sources (such as market studies), and for only the top 200 or so chemicals. In the United States and virtually all countries with a central statistical office (or census), production and shipments data are collected, but the data are withheld for "proprietary" reasons if the number of producers is three or fewer. In Europe, the largest producer of most chemicals, all quantitative production and trade data are suppressed. Data are published in terms only of "ranges" so wide (e.g. 100-10,000 tonnes) that the official published numbers are useless for analysis.

Data on toxic chemical emissions are extremely scarce. The US Environmental Protection Agency's Office of Toxic Wastes is the only official primary source of such data in the world, and its major tool is the so-called Toxic Release Inventory (TRI), an annual survey that has been in effect since 1987. The survey must be filled out by US manufacturing firms (Standard Industrial Classification 20-39) with 10 or more employees that produce, import, process, or use more than a threshold amount of any of some 300 listed chemicals. The reporting threshold for production or processing of each chemical was initially (1987) 75,000 lb; since 1989 it has been 25,000 lb (roughly 11 metric tons), while for use the reporting threshold is now set at 10,000 lb (roughly 4.5 metric tons) per year. Releases are reported by medium (air, water, land), and transfers to other sites for disposal purposes are also reported. There is serious doubt about both the completeness and the accuracy of the TRI reports, because the published data are very difficult to reconcile with materials balance estimates, as discussed elsewhere (Ayres and Ayres 1996).
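
Purely as an aid to reading the rule just described, the reporting thresholds can be encoded in a few lines. The function and the example facility below are hypothetical constructs for illustration; they are not EPA software or the actual regulatory text, which contains many qualifications omitted here.

```python
# Toy encoding of the TRI reporting thresholds described in the text.
# Hypothetical and simplified; not the actual EPA rule.

LB_PER_METRIC_TON = 2204.62

def tri_report_required(employees, sic_code, produced_lb=0.0, used_lb=0.0):
    """Rough check against the post-1989 thresholds quoted above:
    25,000 lb/yr for production or processing, 10,000 lb/yr for use,
    for manufacturing firms (SIC 20-39) with 10 or more employees."""
    if employees < 10 or not (20 <= sic_code <= 39):
        return False
    return produced_lb > 25_000 or used_lb > 10_000

# Hypothetical facility: 40 employees, SIC 28 (chemicals).
print(tri_report_required(40, 28, produced_lb=30_000))  # True  (above 25,000 lb)
print(tri_report_required(40, 28, used_lb=5_000))       # False (below 10,000 lb)
print(f"25,000 lb is about {25_000 / LB_PER_METRIC_TON:.1f} metric tons")
```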

Third, a large number of manufactured chemicals - probably the vast majority in terms of numbers, if not tonnage - are produced not because they are really needed as such, but because they are available as by-products of other chemical processes. This is particularly true of the products of chlorination and ammonylation reactions, which require repeated separation (e.g. distillation) and recycling stages to obtain reasonably pure final products. For instance, it has been estimated that 400 chlorinated compounds are used for their own sake, but at least 4,000 are listed in the directories (Braungart, personal communication, 1992), simply because it is easier to treat them as "products" than as wastes. Many such chemicals are found in products such as pesticides, paint thinners and paint removers, dry-cleaning agents, and plasticizing agents.

Fourth, many of the most dangerous toxic chemicals are known to be produced by side reactions in the manufacturing process, or by "downstream" reactions in the environment. Perhaps the most infamous toxic/carcinogenic chemicals are the so-called "dioxins," which are not produced for their own sake but appear as minor contaminants of some chlorinated benzene compounds used in herbicide manufacturing. Thus dioxins were accidental contaminants of the herbicide 2,4,5-T, a component of the defoliant mixture that became known as "Agent Orange" during the Viet Nam war. Dioxins are also probably produced by incinerators and other non-industrial combustion processes, depending on what is burned. As regards downstream processes, the example of methyl mercury - produced by anaerobic bacteria in sediments - was mentioned earlier. Exactly the same problem arises in the case of dimethyl and trimethyl arsine, extremely toxic volatile compounds that are generated by bacterial action on arsenical pesticide (or other) residues left in the soil. Still other examples would be dangerous carcinogens such as benzo(a)pyrene (BaP) and peroxyacetyl nitrate (PAN), produced by reactions between unburned hydrocarbons, especially aromatics, nitrogen oxides (NOx), and ozone. (This occurs in Los Angeles "smog", for instance.) In fact, oxides of nitrogen are themselves toxic. NOx is produced not only by most high-temperature combustion processes but also by atmospheric electrical discharges.

An even more indirect downstream effect is exemplified by the Waldsterben (forest die-back) in central Europe. The conifer trees of the Black Forest and much of the Alps are now being weakened and many are dying. This appears to be the result of a complex sequence of effects starting with increased acidity of the soil. As the pH drops below 6 there is a sharply increased mobilization of aluminium ions, which are toxic to plants. There is also an increased mobilization of heavy metals hitherto fixed in insoluble complexes with clay particles. Many toxic heavy metals - from pesticides, or from deposition of fly ash from coal burning - have long been immobilized by adherence to clay particles at relatively high pH levels (thanks, in part, to liming of agricultural soils). However, as the topsoil erodes as a result of intensive agriculture, it is being washed into streams and rivers and, eventually, into estuaries, bays such as the Chesapeake, or enclosed seas (such as the Baltic, the Adriatic, the Aegean, or the Black Sea), where it accumulates.

This sedimentary material is "relatively" harmless as long as the local environment is anaerobic, except for the localized risk of bacterial methylation of mercury, arsenic, and cadmium mentioned earlier. But this accumulated sedimentary stock of heavy metals (and other persistent toxic chemicals too) would become much more dangerous in the event of a sudden exposure to oxygen. For instance, sediments dredged from rivers and harbours may be rapidly acidified and could become "toxic time bombs" (Stigliani 1988).

Fifth, many toxic compounds are produced naturally by plants and animals, largely as protection against predators or as a means of immobilizing prey. Nicotine, rotenone, pyrethrins (from pyrethrum), morphine (from opium), cocaine, curare, digitalis, belladonna, and other plant-derived toxins are well-known examples. Recent research suggests that natural (i.e. biologically produced) compounds have about the same probability of being toxic or carcinogenic as synthetic compounds. The widespread idea that "natural" products are ipso facto safer than synthetic ones is apparently false. In fact, Bruce Ames (inventor of the "Ames test") has argued with considerable force that the use of synthetic pesticides is less dangerous to humans than reliance on "non-chemical" methods of agricultural production, because plants produce greater quantities of natural toxins when they are under stress. However, probably even less is known about the range of toxic effects of natural chemicals than of industrial chemicals.

In fact, there is no general theory of toxicity. It comes in many colours and varieties. The notion includes mutagenic effects visible only after generations, effects on the reproductive cycle, carcinogenic effects (e.g. from asbestos, dioxins, vinyl chloride), and chronic but minor degradation of physiological function. At the other extreme are acute effects resulting in rapid or even instantaneous death. Methyl isocyanate (MIC), the cause of the Bhopal disaster, is an example of the latter. Chlorinated pesticides and polychlorinated biphenyls (PCBs) were not even thought to be dangerous to humans until long after they had come into widespread use. It was belatedly discovered that these chemicals tend to accumulate in fatty animal tissues and to become more concentrated as they move higher up the food chain. Eagles, falcons, and ospreys were nearly wiped out in some areas by DDT because their eggshells were weakened to the point of non-viability.

As noted above, soil acidification resulting from anthropogenic emissions of SO2 and NOx to the atmosphere is also releasing toxic metals (and other compounds) that were formerly immobilized in the soil. Large accumulations of toxic metals reside in the soils and sediments in some areas. For many decades lead arsenate was used as an insecticide, especially in apple orchards. Copper sulphate and mercury compounds (among others) were widely used to control fungal diseases of plants. Mercury was also used to prevent felt hats from being attacked by decay organisms. Chromium was, and still is, used for the same purpose to protect leather from decay. Copper, lead, nickel, and zinc ores were roasted in air to drive off the sulphur (and the arsenic and cadmium). Lead paint was used for more than a century, for both exterior and interior surfaces. For half a century tetraethyl lead and tetramethyl lead were used as gasoline octane additives (they still are so used in much of the world). Soft coal has been burned profusely in urban areas; usually the bottom ash was used as landfill for airports and roads. Coal ash contains trace quantities of virtually every toxic metal, from arsenic to mercury to vanadium. For decades, phosphate fertilizers have been spread on farmland without removing the cadmium contaminants. In all of these cases, increasing acidity means increased mobilization of toxic metals. These metals eventually enter the human food chain, via crops or cows' milk.

It is clear that toxicity is not simply a problem associated with the production and use of industrial chemicals or heavy metals. It is intimately linked to a number of other anthropogenic processes, not least of which is global acidification. To take another example, it is well known that CFCs emitted to the atmosphere are responsible for depleting the ozone layer in the stratosphere. The major consequence is an increase in the intensity of harmful UV radiation reaching the earth's surface. Spawning zooplankton and fish in shallow surface waters are likely to be adversely affected. This is, in effect, a form of eco-toxicity.

Is there any common factor among all these types of toxicity? It can be argued that all human toxins are, in effect, causes of physiological disturbance. All interfere with some biological process. Mutagens interfere with the replication of the DNA molecule itself. Carcinogens interfere with the immune system; neurotoxins (e.g. cyanide) interfere with the ability of the nerves to convey messages. Many toxins cause problems for the organism because they closely resemble other compounds that perform an essential function. Thus carbon monoxide causes suffocation because it binds to the haemoglobin in the blood, as oxygen does, but more strongly; the haemoglobin is then unable to take up and deliver oxygen to the cells that need it.

The point of this example is that toxicity, to an organism, is just another word for imbalance or disturbance. A toxin is an agent that causes some metabolic or biological process to go awry. Every organism has a metabolism. Metabolic processes are cyclic self-organizing systems far away from thermodynamic equilibrium. The same statement can be made of the metabolic processes - the "grand nutrient cycles" such as the carbon and nitrogen cycles - that regulate the whole biosphere. Any disturbance to the biosphere is "toxic," in principle.

The stability of the biosphere: The impossibility of computing the odds

The fundamental question of whether or not the stability of the biosphere itself is at risk was deferred above. It is a very deep question indeed.

First, a quick review of the case for believing there may be a real threat to survival. Most people who have never thought deeply about the matter tend to assume that life is a passive "free rider" on the earth. In other words, most people suppose (or were taught) that life exists on earth simply because earth happened to offer a suitable environment for life to evolve. They imagine that earth was much like it is now (except for more volcanic activity) before life came along, and that if life were to be snuffed out by some cosmic accident - say a massive solar flare - the animals and plants would disappear but the inanimate rivers, lakes, oceans, and oxygen-nitrogen atmosphere would remain much as they are today.

The above quasi-biblical vision is not in accord with the scientific evidence. It is true that life probably originated on earth (though some scientists speculate that the basic chemical components of all living systems may actually have originated in a cold interstellar cloud - Hoyle and Wickramasinghe 1978). Life certainly evolved on earth. The earliest living organisms appear to have been capable of metabolizing organic compounds (such as sugars) by fermentation, to yield energy and waste products such as alcohols. The organic (but non-living) "food" for these simple organisms was created by still unknown processes in a reducing environment. The composition of the atmosphere of the early earth cannot be reconstructed with great accuracy, but it undoubtedly contained ammonia, hydrogen sulphide, and carbon dioxide, plus water vapour. There was certainly no free oxygen. It is less certain, but possible, that no free nitrogen was present. Life would have disappeared as soon as the supply of "food" was exhausted, if it had not been for the evolutionary "invention" of photosynthesis.7

The first photosynthetic organisms converted carbon dioxide and water vapour into sugars, thus replenishing the food supply. But they also generated free oxygen as a waste product. For a billion years or so, the free oxygen produced by photosynthesis was immediately combined with soluble ferrous iron ions dissolved in the oceans, yielding insoluble ferric iron. Similarly, hydrogen sulphide and soluble sulphites were oxidized to insoluble sulphates. These were deposited on the ocean floors. Thanks to tectonic activity, some of them eventually rose above sea level and became land. (Virtually all commercial iron ores and gypsum now being mined by humans are of biological origin.) When the dissolved oxygen acceptors were used up, oxygen began to build up in the atmosphere. As a metabolic waste product, oxygen was toxic to the anaerobic organisms that produced it. Again, there was a threat of self-extinction.

Once again, an evolutionary "invention" came to the rescue. This was the advent of aerobic respiration, which utilized the former waste product (oxygen) and also increased the efficiency of energy production sevenfold over the earlier fermentation process. Aerobic photosynthesis followed, thus closing the carbon cycle (more or less) for the first time. This occurred less than 1 billion years ago, though life has existed on the earth for at least 3.5 billion years. But the carbon cycle and the earth's atmosphere did not stabilize for several hundred million more years. The free oxygen in the atmosphere exists only because large quantities of carbon, with which it was originally combined, have been sequestered in two forms: (1) as calcium carbonate, in the shells of tiny marine organisms (which later reappear as chalk, diatomaceous earth, or limestone), or (2) as coal or shale. The carbon-sequestering process took place over several hundred million years, a period culminating in the so-called Carboniferous era, during which the carbon dioxide content of the atmosphere declined to its present very low level. In addition, sulphur has been sequestered, primarily as sulphates. Similarly, though somewhat less certainly, the free nitrogen in the earth's atmosphere was probably originally combined with hydrogen, in the form of ammonia of volcanic origin. Whereas the carbon has mostly been buried, the missing hydrogen has probably recombined with oxygen as water vapour.

The early atmosphere and hydrosphere of the earth were quite alkaline compared with the present, because of the ammonia. The hydrogen-rich reducing atmosphere of the early earth has been replaced by an oxygenating atmosphere; the hydrosphere is correspondingly more acidic than it was before life appeared. The biosphere has stabilized the atmosphere (and the climate), at least for the last several hundred million years. If all life disappeared suddenly today, the oxygen in the atmosphere would gradually but inexorably recombine with atmospheric nitrogen and buried hydrocarbons and sulphides (converting them eventually to carbon dioxide, nitric acid, nitrates, sulphuric acid, and sulphates). Water would be mostly bound into solid minerals, such as gypsum (hydrated calcium sulphate). This oxygenation process would also further increase the acidity of the environment.

Suppose all possible chemical reactions among carbon, nitrogen, and sulphur compounds - including those currently sequestered in sediments and sedimentary rocks - proceed to thermodynamic equilibrium. The atmosphere would consist mainly of carbon dioxide. The final state of thermodynamic equilibrium would be totally inhospitable to life. (For one thing, the temperature would rise to around 300°C.) Once dead, the planet could never be revived (Lovelock 1979). Table 1.2 displays some of these "ideal" effects.

The point of the capsule history of the earth shown in table 1.2 is that our planet is, in reality, an extraordinarily complex interactive system in which the biosphere is not just a passive passenger but an active element. It is important to establish that the earth (atmosphere, hydrosphere, geosphere, biosphere) is a self-organizing system (in the sense popularized by Prigogine and his colleagues, e.g. Prigogine and Stengers 1984) in a stable state far from thermodynamic equilibrium. This system maintains its orderly character by capturing and utilizing a stream of high-quality radiant energy from the sun. Living organisms perform this function, along with other essential functions such as the closure of the carbon cycle and the nitrogen cycle (Schlesinger 1991).

Table 1.2 The stabilizing influence of the biosphere (Gaia)

Reservoir      Substance      Actual world    Ideal world I    Ideal world II
Atmosphere     Nitrogen       78%             0%               1.9%
               Oxygen         21%             0%               trace
               CO2            0.03%           99%              98%
               Argon          1%              1%               0.1%
Hydrosphere    Water          96%             85%?             not much water
               NaCl           3.4%            13%              ?
               NaNO3          -               1.7%             ?
Temperature    °C             13              290 ± 50         290 ± 50
Pressure       Atmospheres    1               60               60

Source: Lovelock (1972).
Note: Life is impossible if the average temperature is too high for liquid water, or if salinity exceeds 6 per cent.

Complex systems stabilized by feedback loops are essentially non-linear. An important characteristic of the dynamic behaviour of some non-linear systems is the phenomenon known as chaos. Such systems are characterized by trajectories that move unpredictably around regions of phase-space known as strange attractors. "Stability" for such a system means that the trajectory tends to remain within a relatively well-defined envelope. However, a further characteristic of non-linear multi-stable dynamic systems is that they can "jump" - also unpredictably - from one attractor to another. (Such jumps have been called "catastrophes" by the French mathematician René Thom, who has classified the various theoretical possibilities for continuous systems.) The resilience of a non-linear dynamic system - its tendency to remain within the domain of its original attractor - is not determinable by any known scientific theory or measurement. In fact, since the motion of a non-linear system along its trajectory is inherently unpredictable (though deterministic), the resilience of the earth system is probably unknowable with any degree of confidence. It is like a rubber band whose strength and elasticity we have no way of measuring.
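
The behaviour described in this paragraph can be illustrated with a deliberately simple toy model: gradient dynamics in a double-well potential under a slowly increasing forcing. The model and its parameters are assumptions for illustration only - it is not a climate model - but it shows how a multi-stable non-linear system can track one attractor almost imperceptibly and then jump abruptly to another once a fold ("catastrophe") point is passed.

```python
# Toy multi-stable system: dx/dt = x - x**3 + f (a double-well example).
# The slowly ramped forcing f and all parameters are illustrative assumptions.

def run(f_max=0.6, years=600.0, dt=0.01, report_every=60.0):
    x = -1.0                        # start in the lower stable state
    next_report = 0.0
    for i in range(int(years / dt)):
        t = i * dt
        f = f_max * t / years       # forcing increases slowly and smoothly
        if t >= next_report:
            print(f"t = {t:6.1f}  forcing = {f:5.3f}  state = {x:6.3f}")
            next_report += report_every
        x += (x - x**3 + f) * dt    # gradient dynamics in a double-well potential
    return x

run()

# The state drifts gradually while the forcing is small, then flips abruptly
# to the other equilibrium (near x = +1) once the forcing passes a critical
# value (about 0.385 here) and the lower basin of attraction disappears -
# the kind of sudden attractor-to-attractor jump described above.
```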

The climate of the earth, with its feedback linkages to the biosphere, is a non-linear complex system. It has been stable for a long time. However, there is no scientific way to predict just how far the system can be driven away from its stable quasi-equilibrium by anthropogenic perturbing forces before it will jump suddenly to another stable quasi-equilibrium. Nor is there any way to predict how far the equilibrium will move if it does jump. The earth's climate, and the environment as a whole, may indeed be very resilient and capable of absorbing a lot of punishment. Then again, they may not.

What can be gained by more research? Probably we can learn a lot about the nature of the earth-climate-biosphere interaction. We will learn a lot about the specific mechanisms. We will learn how to model the behaviour of the system, at least in simplified form. We will learn something about the stability of the models. We may, or may not, learn something definitive about the stability of the real system. The real system is too complex, and too nonlinear, for exact calculations. There is no prospect at all of "knowing the odds" and making a rational calculation of risk. The problem we face is that the odds cannot be calculated, even in principle. In the circumstances, prudence would seem to dictate buying some insurance. The question on which reasonable people can still differ is: how much insurance is it worthwhile to buy? The answer depends, in part, on the technological alternatives.

