The expansion of modern science and technology

As we have seen, the link with political power was present from the beginning of modern science, but that link remained loosely institutionalized and unsystematic, both because science had little influence on economic, military, and technical development and because the state intervened little in its affairs. The age of institutionalized science policy really began only when scientific activities started to have a direct effect on the course of world affairs, thereby causing the state to become aware of a field of responsibility that it could not neglect [20, 52, 4]. To give an idea of the resulting change of scale, we need only point out that the entire Federal R&D budget of the United States was less than $1 billion in 1939 (agriculture and health accounted for the lion's share); the Manhattan Project alone, which produced the first three atomic bombs by 1945, cost $2 billion over three years, while the Apollo Program to put a man on the moon cost $5 billion per year over 10 years. In 1989, total American gross domestic expenditure on R&D had risen to $135,150 million, of which a little more than 50 per cent was financed from public sources. Even the countries that are the most vociferous upholders of free-market principles and abhor state intervention, from the United States to Germany, have seen public support for R&D, both direct and indirect, increase and expand considerably.

By science policy we mean the collective measures taken by a government in order, on the one hand, to encourage the development of scientific and technical research and, on the other, to exploit the results of this research for general political objectives. Today these two aspects are complementary: policy for science (the provision of an environment fostering research activities) and policy through science (the exploitation of discoveries and innovations in various sectors of governmental concern) are on a par in the sense that scientific and technological factors affect political decisions and at the same time condition the development of various fields (defence, the economy, social life, etc.). The historian of science will find it easy to show that neither the idea nor the thing itself was really absent from the development of science as an institution before the Second World War. However, if these two aspects did exist beforehand, they rarely did so simultaneously, and in any case only for short periods marked by the interest of the state in military exploitation of the results of scientific research, for instance during the French Revolution, the American Civil War, or the First World War [53].

The rise of science policy

In the West, the examples of a closer link between science and the state provided by the First World War and the post-war period were only a rough sketch of a process that was to be accelerated and firmly established by the time of the Second World War. In particular, even though the Depression of the 1930s made some people aware of the role that science policy might play in economic and social development, this awareness did not go so far as to provide the state with the means to guide the direction of scientific research, or even to organize it in a more coherent manner [9]. France alone among the market economies endeavoured to recognize the jurisdiction of politics over scientific affairs by setting up, under the Popular Front, the post of Under-Secretary of State, which was given first to Irène Joliot-Curie, then to Jean Perrin. The fact that these two Nobel prizewinners held a ministerial position in 1936, together with the establishment of the Centre National de la Recherche Scientifique (an institution mainly concerned with the promotion of basic research), constitutes the first sign in the West of the state's recognition of both the role played by science in economic and social affairs and the political concern that it should be integrated into the general fabric of government decisions [40].

This case, unique in the West, was inspired in part by the Soviet experience. For it was indeed in Russia that the closest link ever to be forged between science and politics was established by the triumph of the Revolution. The progress from ideology to action provides a model of organization inasmuch as it attempted to integrate science into the social system as a "productive factor" among other productive forces. Certainly, scientific activities enjoyed a status and a support at that time that had no equivalent in other countries before the Second World War; research was considered inseparable from the political system of which it was both the means and the end. Nevertheless, as heavily as political factors may have weighed on the development of science as an institution, the model presented by the Soviet regime did not give rise then to a real science policy [14, 62].

It was at any rate that model that served as a reference for Bernal when, just before the war, he wrote The Social Function of Science, a pioneering work heralding the enormous changes that were soon to affect the relations between science and the state [3]. No other work has done more to ensure the recognition of scientific activities as a social institution that both affects and is affected by the development of the social system as a whole. In many respects, Bernal's analysis still shows a utopian approach directly inspired by the hopes that the Enlightenment and nineteenth-century positivism had placed in the politically liberating and inevitably beneficial character of science. He was nevertheless the first to have perceived and analysed (even though with the Marxist bias of that time) all the aspects that could make scientific and technological research activities themselves into objects of social research. As such, Bernal appears as the founding father of the new field that is, in relation to development issues as well as to the industrialized countries, the subject of this whole volume: science policy, or science, technology, and society "studies." Bernal deplored the lack of public interest in science at that time and the scarcity of resources, but he had no doubt as to the immense progress science would accomplish and the great service that, associated with technology, it would render to society. In his view, at least two conditions needed to be met if these promises were to be fulfilled: far greater resources allocated to research activities, and the implementation of deliberate science policies.

It is now commonplace to point out that the Manhattan District Project, the name given to the programme that developed the first atomic bombs, marked an irreversible turning-point in the relations between science and the state: the establishment of science as a "national asset," the direct intervention of governments in the direction and range of research activities, the recruiting of researchers for large-scale programmes [21]. The change in scale of research activities goes hand in hand with the major technological developments that had a direct effect on the relations between countries: there were 100,000 researchers (scientists, engineers, and technicians) in the world in 1940, and 10 times this number 20 years later [10]. In the OECD area alone, the total R&D personnel was estimated at 1,754,430 in 1983, of which the United States accounted for a little more than 700,000 [38].

Indeed, the nature and the scale of the scientific research undertaken during the Second World War and, above all, the strategic importance of its results, have had consequences beyond anything Bernal had foreseen. According to his own words in the preface to the new edition of his book, "the scientific revolution entered a new phase - it became aware of itself" [3]. During and after the Second World War, scientific and technical research, conceived with military strategic ends in mind, became the source of newly discovered forms of technology that were to be applied on a vast scale in civil life: nuclear energy, radar, jet planes, DDT, computers, missiles, etc. From then on it became impossible for political power to leave science to its own devices, and at the end of the war, the demobilization of researchers, far from signalling the end of "mobilized" science as such, gave rise to systematic efforts to take advantage of research activities in the context of "national and international" objectives [18].

The perfecting of nuclear weapons, missiles, and computers altered the most traditional law of the balance of power: it was no longer enough to avoid being at the mercy of the enemy; one now had to forestall him. In this new kind of international competition, caught between the "balance of terror," the arms race, and the fear of "technological gaps," scientific and technical research constituted a powerful strategic, diplomatic, and economic resource. Science policy developed in this context of strategic competition as a consequence of the impossibility of establishing real peace at the end of the Second World War. In this sense it is obviously one feature of an overall policy determined by rivalry, struggles, and clashes between nations, ideologies, and the will to power. But in another sense the growing influence exerted by technological and scientific affairs on politics in general could be regarded as a cause as well as an effect of the international climate of insecurity. No doubt the "tyranny" of the arms race and escalation operated through a "scientific-military-industrial complex" that is very real, and the irony (or wisdom) of history is that it was a senior army officer and president of the United States who uttered the first and gravest warning against this complex. In his farewell speech as president, Eisenhower referred to the risks of public policy becoming the captive of a scientific and technological elite and of the military-industrial complex to which this elite owes its existence (New York Times, 22 January 1961).

Actually, it was only from 1957 - the date of the first sputnik - that institutions really concerned with science policy were set up. Even in 1963, when the first Ministerial Meeting on Science took place at the OECD, the ministers specifically in charge of scientific affairs could be counted on the fingers of one hand [28]. In the space of only three years, they came to make up the majority. As a field of government competence, science and technology were no longer intended merely to follow in the wake of educational or cultural policies. Whatever the institutional arrangements, the organizations concerned with science policy, wherever they were, all fulfilled at least three functions: information, consultation, and coordination. Science policy of any kind had to be prepared by administrative services, clarified by the advice of experts, coordinated between the various ministries and agencies concerned with research activities, and finally, of course, decided upon and implemented in conjunction with the private industrial sectors. National traditions and structures provided a framework for these functions and, within that framework, specific bodies (e.g. the Office of Science and Technology in the United States, the Délégation générale à la recherche scientifique et technique in France). According to whether the political system was centralized, decentralized, or pluralistic, science policy was developed in different institutions, linked more or less closely with bodies concerned with economic and strategic planning. Everywhere these bodies started by collecting statistics on R&D activities, drawing up an inventory of researchers and laboratories, and allocating resources to sectors considered to have priority [5].

From the 1950s to the 1970s, science policy in the industrialized countries moved from an age of pragmatism to a general awareness of the role played by scientific and technological research in the "wealth of nations" and in international competition. However, there were important changes not only in the aims but also in the political and cultural contexts. The first period, which corresponded to a climate of high tension - the Cold War, strategic competition, and economic development impervious to the social and environmental costs it engendered - came to an end in 1968-1969. In the aftermath of détente, the campus revolts, the growing awareness of the limits to economic growth, and the American fiasco in Vietnam, the positivism induced by the methods and achievements of science was questioned not only by movements outside the scientific community but also by scientists themselves [49]. An American walked on the moon, but the very success of the Apollo Program marked a turning-point: the great options that had fed science policy for two decades ceased to be taken as articles of faith. The previous priorities were re-examined critically and reordered in a manner that, it was felt, would be more concerned with social well-being than with technological progress as such.

It is instructive to underline some of the conceptual changes that have taken place in the field of science and technology policy research and that show how this area of policy-making, although defined and nurtured by science, is heavily dependent on social structures and pressures. The OECD has been one of the leading institutions in highlighting the importance of science and technology policy; the first report prepared by the Secretariat in 1963, Science, Economic Growth and Government Policy, was quite optimistic and focused on the formulation of government policies, the building of scientific and technological infrastructures, and on the need to expand science and technology education as a lever for increasing economic growth. Nearly a decade later, in 1971, another report on the subject, Science, Growth and Society: A New Perspective, stressed the social impact of scientific and technological advances, paid attention to the American challenge in technology, and focused on both the role of innovation as an engine of growth and the need to anticipate and assess the negative aspects of technical change. The OECD reports published in 1980, Technical Change and Economic Policy, and in 1981, Science and Technology Policy for the 1980s, put greater emphasis on the economic and social changes that characterized the industrialized nations during this period and acknowledged that after three decades of unprecedented growth in the world economy, the situation was likely to be different. The oil crises led to focusing research priorities on possible energy alternatives, but issues such as the interaction between technology and employment, the dominant role played by micro-electronics and informatics, the growing importance of biotechnology and new materials, the restructuring of world industry and international competitiveness became central concerns of science and technology policy makers.

Thus in less than 20 years, a new perception of the interactions between science, technology, and society emerged in the industrialized countries, one in which optimistic views were replaced by increased concern regarding the impact of advances in science and technology on society. The scientific crisis simply reflected the crisis taking place in society. As the Brooks Report pointed out, "science policy is in disarray because society itself is in disarray, partly because the power of modern science has enabled society to reach goals that formerly were only vague aspirations, but whose achievements had revealed their shallowness or has created expectations that outrun even the possibilities of modern technology or the economic resources available from growth" [33]. The problems posed by the deterioration in living standards, the chaotic state of urban development, the difficulties of transportation, pollution, the threat to the environment, and the growing inequalities within most of the industrialized countries and between them and the developing countries - all of this called for some control over the course of technical progress and the building of new paths that would reconcile technical progress with a more harmonious type of development. The notion emerged that the solution to these problems does not lie solely in the technocratic application of instruments that would reduce history to its physical constraints. Even in the case of strategic weapons and arms control, some scientists became aware that the "dilemma of steadily increasing military power and steadily decreasing national security has no technical solution" [61].

It is in this context of challenge and disenchantment that technology assessment was launched: a new function that would enable possible undesirable effects to be foreseen or the costs of the introduction of new technologies to be considered in relation to obvious or disregarded social needs. Subsequently, following the example of the United States, most of the industrialized countries created special bodies, within or outside their parliaments, whose function was not only to anticipate and regulate the effects of technological change but also to involve the public more closely, if not make it participate in the decision-making process relating to science and technology activities. However, this period of questioning and reappraisal did not see any reduction (rather the contrary) in the predominant strategic and prestige objectives concentrated in the most important industrialized countries on defence, nuclear, space, and computer research. And the malaise felt in relation to social issues was soon to be superseded by the economic difficulties precipitated by the oil crisis of 1973. The barely attempted efforts to redirect research activities toward the solution of social problems were limited, if not stopped, by the economic crisis, growing unemployment, and more intense international economic competition in relation to the "new technologies."

The defence-related R&D endeavour

Science policies were the consequence of the Second World War and the absence of peace that followed it. For the most industrialized countries, and in particular those with nuclear weapons, the Cold War was a period of full-scale mobilization of scientific resources, with huge investments in R&D in three key sectors: nuclear, space, and information and communications technologies. For the United States, Britain, and France, these investments accounted for two-thirds of their total R&D expenditure, public plus private. For the USSR, the defence budget was an even greater drain on resources, with the statistics for the 1980s indicating that military expenditures varied between 20 and 28 per cent of GDP - an enormous proportion when compared to that of the United States, where military spending equalled 6.5 per cent of GDP in the same period, even if the American GDP was much higher [62].

The arms race was one of the most spectacular features of the Cold War, but there was also fierce competition for world renown, ranging from the first sputnik to the first men on the moon. These struggles forced the state to intervene in research and innovation, even in countries claiming to be unshakeable upholders of free-market capitalism. Questions may indeed be raised about the cost of the exaggerated level of armaments and the links between economic and strategic reasoning; it may be argued that the arms race diverted scarce resources (capital and skills) that could have been used for more socially and economically constructive purposes. The debate over the cost-benefit analysis of the "spin-offs" from military R&D for the civilian economy is not over, but it is impossible to overstate the importance of the innumerable innovations generated by military R&D during this period, and especially the role they played in the conception and development of the new technologies that characterize the "new technical system" just now beginning to flourish [58, 26, 55].

On the Soviet side, it is clear that the priority given to the military-industrial complex in R&D expenditure and production made a decisive contribution to the collapse of the economic system. It cannot be ruled out that Reagan's challenge via the Strategic Defense Initiative (Star Wars) helped Gorbachev to realize that the centrally planned Soviet system had reached its limits, with a civilian economy in a desperate state and a military sector unable to keep up with the rapid progress of American technology. For the capitalist democracies, the costs in terms of economic growth were far smaller, but still not zero. One has only to compare the rates of productivity growth in countries with high levels of defence-related R&D to those with low levels. Germany and Japan, forbidden to invest in military activities after 1945, have had far higher productivity growth and much greater technological success in commercial terms than the United States, Britain, and France. Furthermore, in the 1970s, the innovations generated by the defence sector seemed increasingly remote from the needs of ordinary consumers. The military demands for technical excellence in terms of reliability, miniaturization, resistance to extreme conditions, etc., have created products that are harder and harder to adapt for civilian purposes. At the same time, in certain high technology areas (especially "chips," components), commercial users have tended to overtake military orders in stimulating innovation. It is likely that the spin-offs from military R&D will be far less useful for the civilian economy in future, so that the economic growth rates of the countries most committed to such programmes will suffer accordingly.

Military R&D efforts have not been monopolized by the most advanced, industrialized countries. Among the developing countries, nations such as Brazil, China, and India have strengthened their manufacturing potential at the same time as their ambitions to build up an independent armaments industry, and even their own nuclear and space facilities. The growth in the arms trade in developing countries and the appearance of new producing countries are a sign of both the relative success of some industrialization policies and the feelings of insecurity that rightly or wrongly beset the purchaser nations. Military ambitions have been able to stimulate industrial modernization in a context of policies of economic nationalism; yet, it is obvious that this choice of manufacturing and exporting weapons has diverted scarce resources that could have contributed to a more balanced economic and social development.

The Cold War justified everywhere the growth of a vast public sector and increasing state intervention in the private sector. Business interests were able to cash in on the arms race precisely because both sides felt insecure. "A war with no fighting neatly avoids the risk of fighting coming to an end. Obsolescence in a technological competition is a nearly perfect substitute for battlefield attrition" [12]. As long as the Cold War lasted, stopping the race was deemed more dangerous than the race itself. The post-war period has ended with the collapse of the communist system, the abolition of the Warsaw Pact, and the fragmentation of the Soviet empire. The signing of the START agreements means a 30 per cent reduction in long-range nuclear weapons. The end of the confrontation between the two systems and the collapse of the communist economies lead to the end of the arms race, and hence mean facing the problem of how to convert some (if not most) of the arms industries to civil purposes - a very difficult issue, which will take many years to resolve and which will quickly generate large-scale redundancies to add to the economic crisis in the republics of the new Commonwealth of Independent States.

There are already signs of a new race beginning, this time either to attract the best scientists from these countries to work in the West or else to "anchor" them in their laboratories, helping them to destroy the existing weapons systems or to redirect their research towards peaceful ends. Either way, the aim is to hold onto them and discourage them from selling their services to developing countries that would like to build up their own nuclear weapons and space capability. The OECD ministerial conference on science and technology in March 1992, attended for the first time by representatives of Russia, Hungary, Poland, and Czechoslovakia, was almost entirely devoted to this problem. And the sole purpose of the International Centre for Science and Technology established in Moscow with funding from the European Community and the United States is to prevent the growth of "mercenary science," where nuclear scientists rather than hired soldiers offer themselves to the highest bidder.

The reduction in nuclear weapons is not the same thing as disarmament, and the scaling down of the arms race by cutting the number of weapons does not necessarily mean scaling down military R&D programmes - even if there is now less urgency to perfect some of them. For one thing, the agreements deliberately leave open the possibility of increasing the number of cruise missiles, and the removal of some intercontinental missiles will in fact lead to even greater R&D efforts to improve the "quality" of conventional arms. For another, although the end of the Cold War undermines the traditional basis for the legitimacy of the military-industrial complex, the upheavals that are likely to follow in central Europe and above all in the former Soviet republics will hardly encourage the West to "lower its guard." It is clear, after the experience of the Gulf War, that research into electronic warfare, in particular anti-missile systems, is likely to expand rather than diminish, because of the threat of nuclear proliferation from peripheral countries.

Although the spectre of global nuclear war is fading for the first time, local conflicts are far from over. Military R&D efforts will continue to concentrate on miniaturization and on improving the precision of conventional weapons, as well as on perfecting the systems of surveillance, monitoring, and response peculiar to electronic warfare. As General Poirier [42] has stressed, nuclear weapons paradoxically restrained the level of violence, because potential enemies knew that they must act, and stop each other from acting, in a haze of shared uncertainties, which led to political moderation and strategic prudence. In the "balance of terror," uncertainty brought a degree of order to relations between the superpowers, as deterrence only works when the enemy acknowledges the same rules. Nuclear proliferation may lead to an "imbalance of terror," where uncertainty generates disorder and where disorder on the periphery in fact adds to general uncertainty. The death of communism and the collapse of the Soviet system have removed the basis for the whole post-war strategic confrontation, and it is hard to imagine the biggest nations relying upon their nuclear deterrence in the event of hostilities initiated by "non-rational" smaller countries without atomic weapons. However, given that the sources of conflict throughout the world have not been eliminated, the "watch" will continue to mobilize substantial scientific resources. The heyday of the military-industrial complex is not yet over; that of defence-related R&D even less so.

The era of innovation policy

Whatever the country, and whatever its political ambitions or strategic commitments, the primary objective of the industrialized nations is now to achieve and, if possible, improve economic growth, without which nothing else is possible, in the economic as in all other spheres. Economic growth depends more than ever on firms' competitiveness, which in turn is very closely linked to the capacity for innovation, not only of firms but of the entire system of social and economic organization (especially in relation to education and technical training). The research effort of these countries is today increasingly oriented towards this goal, complemented by a set of measures aimed at increasing the diffusion and application of technology across a large array of traditional industries and activities as much as in industries with a high R&D intensity.

This is the most important and revealing change: innovation policy appears as an extension of (or an alternative to) what was previously called science and technology policy. The concept emerged in the course of the 1970s as a result of three developments: first, economic and sociological analysis of the factors responsible for the performances of firms and especially of the roles played therein by technical innovation; second, the economic problems starting with the oil crisis that stopped the post-war period of rapid growth and full employment; and third, the upsurge of the "new technologies," particularly the information technologies, which brought about great changes in products and services throughout the economy. During the 1980s, the "structural policies" followed by the industrialized countries reshaped the continuum of their research systems to adjust to and overcome the consequences of the crisis (industrial restructuring, competition from the "newly industrializing" countries, unemployment, etc.) and the changes in the system of production and consumption introduced by the "Information Revolution." To these should be added the recent concerns about the environment, which are generating more and more public and private R&D efforts to bring products, processes, and industrial waste into line with new regulations. These changes in standards reflect changes in attitudes and values that oblige industry to innovate so as to satisfy the new consumer demands as well as the new legislative requirements regarding safety and pollution.

In brief, while state intervention in R&D activities has evolved in a context of privatization and deregulation, the American model has been replaced by the Japanese model: a package of long-term measures with a common target, covering education, research, industry, foreign trade, and the environment, aimed at ensuring and sustaining the dynamism of firms in a global context [34, 35, 37]. The idea that innovation and entrepreneurship were among the basic factors underlying industrial expansion was certainly not new, since it dates back to the writings of Schumpeter. But the period of expansion after the war caused it to be overlooked. Although many studies were undertaken, notably those of the OECD on the "technology gap," the "Charpie Report" in the United States, and the research of economists like Edwin Mansfield, Richard Nelson, and Christopher Freeman, governments did not pursue them beyond affirming the importance of a well-thought-out policy for scientific and technological research activities: with their gaze fixed on the input, they barely concerned themselves with ways of ensuring a better diffusion of the output [11, 8].

All these efforts nevertheless arrived at the same conclusion: the problems of innovation depend less on the size of investments in R&D than on basing the management of university and industrial resources on the entrepreneurial model. By emphasizing the importance for the innovation process of these factors, which are not strictly scientific or even technical, all these studies recommended concentrating on policies that at first sight appear to have little in common with science policy as such. They stressed that it is not enough for a country to have excellent universities and research teams, to turn out increasing numbers of Ph.D.'s, to devote vast resources to R&D activities, or even to pile up Nobel Prizes in order to be one of the leading innovators. Winning the productivity battle, capturing and keeping new markets, and developing the full potential for innovation does indeed require a well-run research system, but that is just one prerequisite among many. For innovation to be successful, the diffusion process is much more critical than that of either discovery or invention.

This period of introspection and research led to a better understanding of the sources, determinants, and nature of innovation [22]. In particular, it came to be realized that commercial viability depends as much, if not more, on the social and institutional factors that shape the management of innovation as on the technical sophistication of the new products or services it generates. To a large extent, the success of the "American model" could be attributed to the combination of two factors: the capacity of the universities to adapt very rapidly to the new needs generated by advances in knowledge, and the ability of industry to exploit the results of research more efficiently. And yet most European policy makers paid less attention to these factors and their combination than to the magnitude of the United States' expenditures on R&D (the "magic" target of 3 per cent of GNP) and the role played by the Federal government in stimulating the national research endeavour in the name of strategic and defence-related challenges.

In fact, even before the crisis of the 1970s, the example of the United States itself, where a few people had begun to be concerned with the falling rate of productivity growth, gave food for thought. Clearly, there was no direct link between the amount invested in R&D and the performance of the economy: the champion in most categories of science and technology, the United States still had a productivity growth rate below that of Europe and, most important, of Japan. The question has been debated for more than a decade, and the Americans are still pondering the answer [30]. The fascination with the success of the "American model" made observers overlook the take-off conditions of a very different model, one which more than ever confirmed that innovation should not be confused with scientific research: the model adopted by Japan, soon followed by the "little dragons" of South-East Asia. This raises at least the question of how much basic research really contributes to growth and development at large. The modernization of Japan and its recent success in industrialization, like that of the newly industrialized countries, was until recently unaccompanied by major contributions to scientific progress as such. The situation started to change in Japan because the very nature of its industrial development now requires a greater input of theoretical research. But this change is connected as much with the greater economic prosperity of the country as with the new prerequisites for producing technical innovations that are increasingly "sophisticated" and linked to laboratory research [56].

In Europe, it was not until the crisis of the 1970s that the significance of these limits to science policy began to be appreciated. By shifting from science in the strict sense to the broader field of innovation, governmental concern demonstrated an awareness of the fact that economic development was increasingly dependent upon constraints affecting industrial competitiveness and international trade. In the preceding period, the main concern had been to make basic research an integral part of the research system and to rely for technological innovation on "major programmes" supported, if not directly managed, by the state. Henceforth, there was debate about the extent to which the state should provide support for basic research and these "major programmes" that were financed (or subsidized) by public resources. Now, in the new context of privatization and deregulation, the question is how far the state should go, and under what institutional conditions, in intervening in the market in order to stimulate technological innovation.

Thus the criteria, as well as the instruments, involved in science policy have been profoundly altered. Science policy as such concerns the individuals, institutions, and issues involved in measures related to scientific training, higher education, and academic research. As illustrated by the recent OECD report, Technology and the Economy: The Key Relationship [39], which is entirely devoted to an analysis of technological innovation in the context of increasing international competitiveness, innovation depends on a much wider range of actors, institutions, and issues - from industry, the banking system, and the overall economic environment to vocational training and even the general level of technical and scientific literacy. What is at stake is the need to "integrate" science and technology policies with all other government efforts, especially economic, industrial, energy, and social policies, as well as policies on education and employment. This became all the more obvious because of the need to cope not only with the consequences of the economic crisis but also with the changes introduced by the "new technologies." The products and processes created by these new technologies led to new modes of production and consumption that spread through all sectors of economic and social life; these products and processes are developed mainly by flexible, decentralized firms that are able to adapt quickly to market changes and are highly attuned to consumer needs and preferences. In this context of market economies, if the role of the state cannot be limited to merely supporting scientific and technological activities, how far should it intervene, under what circumstances, and on what criteria?

In some areas, state intervention is traditionally unquestioned (or, in some countries, challenged less than in others): defence, basic research, the environment, health, and large-scale technological systems such as those involving large infrastructures and networks (energy, transport, telecommunications). These areas concern society as a whole and require strategic action; in short, they are outside the market framework, and the private sector cannot be expected to take on the risks involved, or to safeguard and respect the public interest. The decisive competitive battle, however, is now being waged among small and medium-sized firms rather than within the major public programmes. Here, innovation involves entrepreneurial initiative, for which the management structures of public enterprises are badly (or rarely well) prepared. If the state has to intervene directly, it can be in the preliminary stages, where an "infant" technology or an "infant" industry threatens to be stifled by pressures from competitors before it reaches maturity. Yet the state cannot forever stand in for firms, or at least not without allowing its programmes to be guided by non-economic considerations and unthinkingly subsidizing their products in order to protect them from foreign competition; there is no lack of examples of these risks and failures, from the Brazilian "reserved market" for information technologies to the Anglo-French supersonic Concorde and the French "Plans Calcul" [32].

In the past, the state could start from scratch or could promote an industry (e.g. metals, shipbuilding, railways, oil) where the aim was to satisfy national needs without having to face the pressure of international competition. If need be, it could nationalize existing firms, even if they were foreign. But where the new technologies are involved, which deal mainly with intangibles (i.e. information, from hardware to software), the state has far less room for manoeuvre. Nationalizing firms in this sector would mean buying only the factories, without gaining any control over the flows of intangible data that are the real source of technical and commercial success. In this context, the trend towards deregulation appears to be the result not only of economic (if not ideological) considerations, but also of institutional and technical factors: on the one hand, the organizational and social setting, which reveals the limits of the management and control of the monopoly hitherto enjoyed by publicly owned firms (e.g. the post office), and on the other, the new technical system, which imposes strategies and even an entrepreneurial approach closely linked to consumer demand and international markets. Outside the programmes that remain its concern for strategic reasons, it is through indirect measures (especially fiscal, but also educational in general) and above all a macroeconomic policy favouring investment that the state is best placed to stimulate technological innovation efficiently - and more economically [50]. Most of these changes will continue to affect the new "strategic posture" of the industrially advanced countries, a posture basically defined by growing economic competition, greater concern for the regional and global environment, and the possibility - still to be confirmed - of an effective levelling off not only in military budgets at large but also, more specifically, in the defence-related R&D endeavour.
