Rosenberg and Birdzell [17] have shown how the industrial development of the West depended on social and technological experimentation and social learning through strong feedback mechanisms from the market. In the earliest period of the Industrial Revolution, the potential rewards for technical and managerial innovation were high and the penalties for failing to innovate were correspondingly severe. However, the risks and costs arising from the unforeseen consequences of innovation were generally treated as acts of God and were expected to be borne by society at large. Thus the substantial risks of technical or market failure borne by the innovator were not augmented by the additional risk of having to compensate society or third parties for unforeseen adverse effects, such as environmental pollution, industrial accidents, failure of companies, or displacement of labour, unless gross negligence could be shown, and the burden of proof was on plaintiffs to show both injury and fault in the case of health and environmental effects [19]. There was a general expectation that the totality of innovations elicited by the high potential rewards would have an aggregate net benefit to society far exceeding the aggregate net cost to society associated with particular adversely affected groups or individuals. This was just the "price of progress." In this situation technology assessment was important, but it focused on technical feasibility and market potential, not on possible adverse effects on people who were not direct parties in the production and consumption of the goods or services resulting from innovation. Technology assessment was the responsibility of the entrepreneur and was primarily directed at reducing entrepreneurial risk arising from technical or market failure through the exercise of better technical and economic foresight.
Even when there were adverse effects on the environment, health, safety, or social welfare, they were expected to be short term, local, and reversible. It was thought that oversights could almost always be remedied by a technical fix at modest cost, so there was little point in searching for dangers that, more likely than not, could be easily overcome by human ingenuity as they were encountered. The development economist Albert Hirschman pointed out in a famous article [9] that planners of development projects in developing countries often fail to recognize many difficulties and obstacles to success, but equally underestimate the power of human ingenuity to surmount such obstacles, so that many projects that would never have been undertaken had all the difficulties and side-effects been foreseen nevertheless end by succeeding through human ingenuity and adaptive innovation. A similar belief underlay the attitude toward technological innovation in most developed countries until the mid-1960s. Indeed, in the nineteenth century, innovators were virtually free from the risk of being held accountable to society for the unforeseen social costs entailed by the widespread application of their innovations.
Thus the dominant attitude in most industrial societies of the North until very recently has been that technology should be deemed innocent until proved guilty beyond a reasonable doubt. Among most élites of the developing world, technology is generally seen as the only realistic avenue of escape from the growing disparity in power and living standards between the developed and developing countries. Indeed, if they have had any concern with the social costs of technical progress, it was more with the social costs of unemployment and displacement of labour than with environmental and health "externalities."
The rise of global problems
Throughout most of human history, societies that despoiled their environment or exhausted their mineral resources eventually declined and were lost to sight, while civilization and power reappeared in new locations where nature had been less exploited. Today, however, human society exists on a truly planetary scale, and we are in the midst of trying to sort out what this means for the sustainability of civilization as we have known it. There are fewer and fewer unexploited resources or undegraded environments in which civilizations can be reborn, allowing humanity to progress simply by abandoning its past mistakes. Thus the experimentation and social selection described by Rosenberg and Birdzell have depended historically on the geographical isolation of human communities, which tended to prevent the mistakes of individual institutions or nations from dragging the whole world down together.
Since the 1960s there has been a sea change in consciousness with respect to technical and material progress. For the first time in the history of humankind on the planet, human activities are becoming a major natural phenomenon in their own right, comparable in potential to the natural phenomena that have altered the face of the planet in geological history. In absolute terms, population, economic activity, and technical change are still increasing, and there is a widespread recognition, at least among élites, that something has to give. The only debate is over how soon and in what way. This new consciousness has spread from the developed world to many élites in the developing world in less than 20 years.
In fact the shift towards a more cautious attitude toward technical progress arises from a combination of two factors: growing evidence of real environmental deterioration, and rising expectations for environmental quality among those societies or parts of societies where basic material needs have been met and that have become increasingly aware of "externalities" associated with the process of economic growth. Paradoxically, much of the evidence of environmental deterioration has come from progress in science, from better understanding of natural processes and from advances in the ability to detect ever smaller traces of pollutants in the environment. For example, it was only in the early 1950s that the photochemical effects of sunlight on the gases emitted in automobile exhausts were first shown to be the main cause of atmospheric smog in the Los Angeles Basin. It was not until the 1970s that the upper atmospheric chemical reactions leading to the depletion of the life-protecting stratospheric ozone layer began to be unravelled, implicating a class of man-made chemicals originally introduced widely because of their chemical inertness and complete lack of toxicity to humans and animals - namely the chlorofluorocarbons (CFCs). The man-made chemical compound DDT was discovered during the Second World War to be a powerful insecticide, apparently harmless to humans. Introduced for the purpose of eliminating disease-bearing insects, such as the lice that carry typhus, it was hailed as a miracle technology that was credited with saving nearly half a billion lives, mainly in the developing world. Yet, when applied on an even larger scale in agriculture, it was eventually found to cause such ecological damage that its use was largely banned in developed countries [23, 6].
Other chlorinated hydrocarbons were introduced that were believed to be less ecologically damaging because less persistent in the environment, but they also proved more hazardous to human health, especially in the developing world, and began to turn up in drinking water supplies and soils. Today, the use of chemicals in agriculture is in increasingly bad repute because of both its cumulative ecological effects and its declining effectiveness due to adaptive evolution of the target pests. Most of these hazards were not fully anticipated, in part because, when first introduced in small amounts, the substances appeared to be relatively harmless and their benefits were immediate, obvious, and demonstrable. At the same time, nobody foresaw the rapid growth in their scale of use that their benefits to the producers would elicit, nor the new secondary ecological effects that large-scale use would generate.
Not only did scale of use produce unanticipated effects, but many chemicals introduced in small amounts constituted a "time bomb," with latent health effects that did not appear until many decades after the initial exposure of people to them. One of the most dramatic examples was occupational exposure to asbestos, much of which occurred during the Second World War, with fatal health effects appearing only many decades later. It became conceivable that a large population could breathe or otherwise ingest large quantities of a substance for years with no adverse effects, only to show up with delayed cancers or other diseases in near epidemic proportions many decades later. There were also many examples of systemic effects occurring only at the end of a long causal chain, in which the original triggering event was obscure. This also led to increased separation in both space and time between the people who enjoyed the benefits of a new technology and those who were exposed to the risks or bore the ultimate costs, even extending across generations. In all these instances, dramatic examples or anecdotes were far more influential on public opinion than statistical averages, and thus became an important factor in shifting the burden of proof against the introduction of new technology.
The response
The 1960s in most of the Western industrialized world were a period of dramatic technological and social change accompanied by a shift towards the more sceptical view of technical progress already mentioned [20]. According to one author writing during this period [2], the adverse second-order consequences of well-intended enterprises had long been recognized by historians and social critics but had tended to be regarded as inevitable and uncontrollable. Indeed, the beneficial technological by-products of even such undesirable activities as wars had frequently been commented on. However, it was said that "it is a mark of our times that we no longer accept a conclusion that we can do nothing about the unwanted consequences of our actions, or that a criterion such as 'progress' in technology is per se justification for imposing unpleasantness on ourselves." In fact, "particularly in the past decade, the American public seems less willing that either of these sorts of second-order consequences [i.e., beneficial or adverse] should be left to chance" [2, p. 1]. This, in essence, is the underlying motivation for the technology assessment movement as it grew up in the United States in the 1960s, and is still its main justification today. In the context of a developing country, the justification would be similar, but not identical, because a developing country would be following a different technological trajectory in a different social and cultural milieu. Therefore, technology assessments performed in industrialized countries are not necessarily readily applicable in developing countries, even when the purely technical features of the technology being assessed are similar.
Institutionalization of technology assessment
The decade of the 1970s saw a proliferation of laws and regulations throughout the industrialized world, but particularly in the United States, designed to control the social costs of economic growth and of the introduction and diffusion of technology. The objectives of these laws, as expressed in their preambles, were often extraordinarily ambitious, presuming a scientific knowledge base and an analytical capability, both inside and outside government, to monitor, anticipate, and assess the effects of technology, although these did not yet exist. Because tight deadlines were often imposed, regulation tended to favour "end-of-pipe" curative technologies rather than potentially more cost-effective but longer-term, higher-risk preventive ones. Slow economic growth in the industrialized world reinforced this preference, since such technologies would not require replacement of incompletely depreciated plant and equipment that could not meet the new standards.
In developing countries, there would be a greater incentive to invest in preventive technology, since new production capacity would have to be built anyway and it probably would be cheaper overall than buying obsolete plant with end-of-pipe retrofits - perhaps even cheaper in its own right in some instances.
Two pieces of US legislation have influenced the social and environmental assessment of technology elsewhere: the National Environmental Policy Act of 1969 (NEPA), which introduced environmental impact statements (EIS), and the Technology Assessment Act of 1972, setting up the Office of Technology Assessment.
ENVIRONMENTAL IMPACT STATEMENTS. An Environmental Impact Statement is required from every agency of the Federal government to accompany recommendations or reports on proposals for legislation and other major Federal actions significantly affecting the quality of the human environment, with an analysis of the effect of any alternatives to the proposed action, including inaction. The draft EIS is then subject to comment by other Federal agencies and from the public where practicable. The question of public hearings and comments is often contentious because there can be a very fine line between representation of legitimate substantive concerns and the use of the EIS procedure and lead time for comments as a blocking tactic by groups objecting to the action on other grounds. The judgement as to whether this line has been crossed is subjective, depending on the values of the observer.
Many aspects of the EIS procedure are peculiar to the American political and legal system, in particular the uniquely important role that judicial review of administrative actions plays in the US regulatory system, severely limiting administrative discretion in the interpretation and enforcement of the laws passed by Congress. Nevertheless, some procedure of this type would be desirable for most developing countries. For one thing, donors of international aid - including both multilateral agencies such as the World Bank and regional development banks - are increasingly imposing some sort of environmental review on investment projects, and it would be desirable for recipient countries to develop their own procedures that help ensure that local cultural and social priorities are properly taken into account in such reviews. To the extent that recipient countries demonstrate their own capabilities for technology assessment or environmental impact assessment, they are less likely to be second-guessed by donors and more likely to maintain control over their own development priorities.
A particular problem with the EIS procedure as implemented in the United States is that Congress has provided very little guidance to the agencies as to the relative weights to be given to various factors in arriving at a decision concerning the intended agency action; this difficulty is intensified by the fact that the responsible agency decision maker is often a proponent of the project being assessed. While the procedure has seemed to imply a kind of overall cost-benefit analysis - a balancing of aggregate costs against aggregate benefits for all affected interests - this has never been explicit, nor is there any guidance as to how distributional considerations are to be taken into account when benefits and costs or risks are experienced by different groups - a situation increasingly frequent as the scale and time span of effects have expanded [5, p. 146]. This lack of guidance on the relative weighting of factors - including short-term vs. long-term, intangible vs. economic, and winners vs. losers - often means in practice that environmental impact statements become catalogues of every conceivable effect with little evaluation, let alone quantification, of their relative importance, particularly because legal challenge is most likely to be based on omissions rather than commissions.
THE OFFICE OF TECHNOLOGY ASSESSMENT. The Office of Technology Assessment was set up in 1972 in response to the felt need in the Congress for independent, non-partisan technical advice drawing on the best available scientific, engineering, and other expert knowledge inside and outside government. The focus was on assessing alternative courses of action that Congress might take where scientific and technological considerations were heavily involved. Although the techniques were developed in the US context, with its sharp separation of legislative and executive powers, I believe they are widely generalizable and could be used effectively in many other settings, including some appropriate to developing countries.
In fact, the American pattern of an OTA attached to the legislative branch has been followed in several countries, especially in Europe, with various institutional differences related to the specific historical and constitutional context. For instance, in France the Office parlementaire d'évaluation des choix scientifiques et technologiques has also been designed to be a bridge between the National Assembly and the Senate, and is a non-partisan body made up of equal numbers of representatives from the parliamentary majority and the opposition. In this case, each report is prepared and signed by a representative of either the Senate or the National Assembly, with the assistance of the secretariat (for example, on nuclear waste, biotechnology, high-definition TV, etc.), and is discussed before publication by the parliamentary committee covering the field.
In the United States, the research agenda is usually formulated through a fairly extended negotiation involving several Congressional committees, the Technology Assessment Board (TAB) of the Congress, and the Director of the OTA. Since many more studies are requested than the OTA can carry out, efforts are made to consolidate and reconcile requests from more than one committee, thus ensuring that several committees feel a sense of "ownership" of the study and will be ready to take its conclusions and recommendations seriously. The staff of the OTA may take a good deal of initiative in recasting and sharpening the questions and in formulating problems in a way that enhances the non-partisan stance of the Office, so as to increase the credibility of the results to members of Congress with widely differing policy perspectives and constituency interests.
Once the final definition of a study has been formally approved by the TAB, a project advisory committee is appointed - usually 20-30 outside experts and members of the public representing a wide range of expertise and political views, chaired by an experienced person. The members of the committee bear no responsibility for the content of the final report, which is prepared by the OTA staff, but they have frequent opportunities to review and comment on drafts of sections relevant to their individual interests and expertise. A great effort is made to understand and explain the reasons for any differences of opinion, whether they be technical or derived from differing value-perspectives. This is done even though some points of view may be rejected, with reasons explained, in the final report.
On average, about half the substantive research going into a report is done "in-house" by staff and half is commissioned from outside consultants. In fact, the staff usually does not do original research, apart from reading the literature and interviewing experts; rather the purpose of the OTA studies is to synthesize, distil, and interpret existing knowledge and recast it in a form so as to fit a specific policy context. The commissioned reports are used as raw material for the final report, but the OTA staff is under no obligation to use them or to adopt their conclusions.
As a report nears completion, drafts of all or parts of it are sent to outside reviewers - as many as 100 or more - for comment and criticism. Although confidentiality is requested of reviewers, it is not always respected, but this has generally not caused problems. Frequently leaks have served to alert the Director and the TAB to political sensitivities, or errors of fact or interpretation, before the report is approved for public release by the TAB.
The non-partisan nature of an OTA report is stressed by casting the final recommendations in the form of a list of options for possible Congressional action, usually with options to satisfy both the cautious and the activists. This studied neutrality is sometimes frustrating to members of Congress who would prefer stronger or more definitive policy recommendations, but it probably serves to enhance the credibility of the OTA in the long run. The OTA operates less by picking the best policy options than by screening out those that cannot be justified by the evidence at hand, while still leaving open for debate a number of conclusions and recommendations that do not fly in the face of the evidence or the best professional opinion that is available, although they may express different value judgements.
OTA reports are published in essentially three forms: a one-page summary of conclusions and recommendations intelligible to laymen; an executive summary of about 50 pages synthesizing the principal conclusions, recommendations, and arguments; and a full report - sometimes in many volumes - collecting essentially all the work that has been done in a form that can be used as an authoritative reference by Congressional professional staff and by outside specialists concerned with the problem or issue. In addition, the Directorate of the OTA and the staff who prepared the report are available for testimony to Congressional committees and for one-on-one meetings with Congressional staff or members of Congress or other government officials. Thus there is a rather extensive effort to disseminate the results of a study beyond the mere publication of a report.
The methodology and its critics
Technology assessment started by examining the technical characteristics of a given technology, such as the automobile or nuclear power, and then attempting to explore all the possible social, economic, environmental, health, and ecological effects of its application. This simple definition runs into difficulties, however, since it implies a notion of technological determinism, a unidirectional causality from technology to society. The social impact of a technology - indeed its environmental impact as well - depends on the social supporting systems and ancillary or supporting technologies that accompany its large-scale deployment. These ancillary systems may well be different in different societies and political systems, as in the case of television broadcasting. There is therefore the question of whether the term "technology" in TA refers just to a single artefact or whether it refers to the whole system of ancillary technologies and social supporting systems actually used in connection with the widespread deployment of the dominant artefact.
A great deal of effort and debate have gone into the methodology of TA. The field has been criticized as "non-paradigmatic" and, by inference, therefore not cumulative [22, p. 7]. In a survey of actual TA projects in the United States, Rossini et al. [18] found that TA practitioners seldom used any of the quantitative techniques that had been widely advocated in the theoretical literature. Wad and Radnor point out that in fact specialists "have a disdain for [quantitative techniques], preferring to rely on their own judgments and intuition in the selection of approaches and in the design of the TA" [22, p. 38]. This has implications for the use of TA in developing countries. As observed by Wad, "if the techniques of TA receive scant attention from practitioners within the very society from which TA evolved as a body of knowledge, it is very questionable whether they would have much relevance in other, quite different societies" [22, p. 39]. This lack of reliance on formal technique could be an advantage if it makes TA more accessible to societies in which sophisticated knowledge is in short supply. On the other hand, a well-defined paradigm would offer a common language, making TA easier to transfer across cultures than intuitive practices are.
Another criticism is that the evaluation of technology in a society, and indeed all so-called "objective knowledge," is primarily a reflection of the power interests of various social groups and the resulting "imperatives for the reproduction and legitimation of existing social structures and [power] relationships" [7, 25, 26, p. 108]. Thus seemingly technical debates about the choice or regulation of technology are nothing but political power struggles in which science is just another instrument. There is some element of truth in this view, in that political and cultural biases, often heavily weighted with perceived self-interest, can never be completely expunged from discussion of "science for policy." However, Laudan [10] argues that the role played by such social and political factors varies inversely with the uncertainty and immaturity of a scientific field. As evidence accumulates and a field matures, rationality becomes more and more of a constraint on social construction. The assessment depends on both the scientific uncertainty and the political stakes and power of the various players. The strongest critics nevertheless maintain that science is largely if not wholly irrelevant to actual policy outcomes. They may be correct in the implication that policy assessment and implementation, no matter how technical in nature, must be sensitive to the distributional implications of the conclusions reached, and that analysts must try to anticipate the effect of this on the implementability of their recommendations. However, this must not be taken to mean that either science or policy analyses are valueless.
A typology of technology assessment and policy analysis
Since in practice it is so difficult to separate a technology from the socio-technical system in which it is embedded, it is convenient to develop a typology of TA classified along several dimensions.
Dimensions of technology assessment
Types of technology assessment
Taking into account primarily the first three dimensions listed above, one can distinguish five types of technology assessment.
PROJECT ASSESSMENT. Here we are concerned with a concrete project such as a highway, a shopping centre, an oil pipeline, or the actual plan for construction and testing of a prototype of a new aircraft or power plant. The environmental impact statement process most often deals with such specific projects. Project assessments may be further subclassified according to the novelty or extent of previous experience with the technologies to be employed, or the degree of previous experience with the particular type of environment in which they are to be deployed. To the extent that a project presents special challenges either because of the novelty of the technology used or of the unprecedented problems of a new environment (such as was the case with the Alaska pipeline), project assessment may spill over into the next category, generic technology assessment, which must rely more on theoretical insights derived from science and less on cumulative practical experience with the technology in operation.
GENERIC TECHNOLOGY ASSESSMENT. Here the focus is on a general class of technologies without reference to a particular project or a particular site, environment, or social setting. An example of an especially important and common class of generic TAs is the assessment of medical therapies or prescription drugs for the treatment of particular medical conditions [11]. This sort of TA has come into prominence, particularly in the United States, as health care costs have continued to rise faster than the cost of living index in most of the industrialized countries, directing more and more attention to the proliferation of new medical technologies and their cost-effectiveness. In this case the system boundary is well defined, and the generic nature of the technology is obvious because it is applied repetitively under similar circumstances in many places to many different patients with similar characteristics.
Another example might be a new generation of "inherently safe" nuclear reactor designs being proposed for the next generation of nuclear power plants. Here the boundaries of the system to be considered are much less clear because nuclear power is a systemic "global technology" in which adverse experience anywhere in the world has repercussions throughout the entire system due to political reactions, and the safety and reliability of the system as a whole is very sensitive to non-technical human factors of management and regulation as well as to essential supporting technologies such as waste storage and disposal, and support of the nuclear fuel cycle. Generic TA is what one mostly has in mind when one uses the term TA, yet only a relatively small proportion of the reports produced by the OTA or other similar agencies could be said to conform to this description of generic TA.
PROBLEM ASSESSMENT. Here the approach is to examine a broad problem area such as commercial air transport and assess a variety of technologies as well as non-technical measures that might be used to cope with the problem. For example, instead of assessing a supersonic transport programme such as was proposed by the Nixon administration in the early 1970s, one might have posed the problem of future air transportation needs and considered a variety of aircraft types as well as air traffic control systems for meeting a defined social need. Indeed, a more sensible approach might be to extend the boundaries of the problem, defining it as enhancing the mobility of goods and people and including various ground transportation technologies as well. Even in project assessment, the EIS procedure requires that "alternatives to the proposed action" be fully assessed, implying consideration of alternative technologies or social actions that would achieve the same objectives as the particular project being proposed.
POLICY ASSESSMENT. Policy assessment is very similar to problem assessment, except that it takes greater account of non-technological alternatives to achieving social goals for whose realization new technology is only one of many options. A good example might be the use of various kinds of economic incentives to both electricity consumers and public utilities to reduce peak or total demand for electricity as an alternative to constructing additional power plants, or the development of more efficient or environmentally benign generation or transmission technology. Policy assessment blends imperceptibly into policy analysis, where the emphasis shifts more completely away from technology towards broader social and political measures that require less prescriptive design. An important advantage of policy assessment over generic TA or problem assessment is that it tends to be more even-handed as between technical and non-technical solutions to the problems being addressed. At the same time it is more likely to recognize that the overly conservative regulation or suppression of new technology is just as likely to have unforeseen and undesirable side-effects as the introduction of technology - once again leading to a more even-handed balance between technical and non-technical approaches [8, 24, pp. 91-94].
GLOBAL PROBLÉMATIQUE. When a number of closely interrelated social, political, economic, and technical problems coexist and are difficult to attack piecemeal, and the resulting cluster of problems affects the world as a whole considered as a single system, we call the assessment required a "global problématique." What makes it more challenging than other forms of technology assessment is the close interconnection among many of the component problems, and particularly the interaction between the technical and political dimensions of the environmental risks concerned. What makes the "problématique" different from other forms of TA or problem assessment is that no single scientific report, no single decision, and no single nation will have the last word, or even a very important word, on how humanity ultimately comes to terms with those risks. The management of the problématique has to be a cumulative process of "social learning" with, ultimately, very wide participation of virtually all the stakeholders.
In the 1970s a fashion arose for developing computer models of the entire world designed to assess trends in and effects of various combinations of policies on food, energy, environment, population, natural resources, and even human relations. Models add no new raw information and are no better than the data and assumptions that go into them; they are nevertheless a valuable accounting device for keeping track of many more variables than can be embraced by the human mind. Such models have attempted to assess socio-technical systems of increasingly comprehensive scope until in some cases, the whole world is treated as a single coupled system.
This interest in modelling as an aid to policy-making was part of the motivation for the creation of the International Institute for Applied Systems Analysis (IIASA) as a joint East-West research institute in Laxenburg, Austria, in 1972. Among its major projects, IIASA created computer models of the world energy system [1] and the world system of agricultural production and trade. One aim of such models was to explore the consequences of various national and world policies in these sectors on a global basis, e.g. the potential role of various energy sources, or the effects of more open world trade in agricultural products on the world hunger problem. There was a great deal of debate as to whether the many simplifying assumptions and subjective judgements that had to be made for the models to be manageable would largely vitiate their usefulness as policy tools.
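The flavour of such coupled world models can be conveyed by a deliberately toy sketch, in the spirit of the system-dynamics models of the 1970s. Everything here is invented for illustration - the two variables, the functional forms, and all parameter values are hypothetical and do not come from IIASA's or any other actual model. The point it demonstrates is the bookkeeping: because each variable's next value depends on the others, the system can only be projected as a whole, and the outcome is entirely driven by the assumptions built in.

```python
# Hypothetical two-variable "world model": population growth depends on
# remaining resources, while resource depletion depends on population.
# All parameters are illustrative assumptions, not empirical estimates.

def simulate(steps=300, population=1.0, resources=10.0,
             birth_rate=0.05, death_rate=0.05, depletion_rate=0.05):
    """Advance the coupled system one period at a time.

    'abundance' runs from ~1 (resources plentiful) down to 0 (exhausted);
    population grows when abundance is high and declines when it is low,
    while resources are drawn down in proportion to population.
    """
    history = []
    for _ in range(steps):
        history.append((population, resources))
        abundance = resources / (resources + 1.0)
        net_rate = birth_rate * abundance - death_rate * (1.0 - abundance)
        population = max(population * (1.0 + net_rate), 0.0)
        resources = max(resources - depletion_rate * population, 0.0)
    return history

# A typical run shows the classic overshoot-and-decline trajectory:
# population first grows on abundant resources, then falls as they run out.
trajectory = simulate()
```

Even this caricature illustrates why such models were both attractive and controversial as policy tools: the qualitative overshoot behaviour emerges from the coupling itself, yet the timing and severity of the decline depend entirely on parameter choices that the modeller must supply by assumption.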