

7. Computational bases

7.1 Massively Parallel Systems

RWC requires a computation framework that can process various kinds of information and integrate them flexibly. A system that implements an RWC application is likely to consist of many modules that can exploit parallel and distributed processing at several levels, both within and between modules. Several parallel computation paradigms have been proposed, including concurrent object-oriented, data-flow, data-parallel, neural network, and probability-based information processing. RWC will probably be realized by some combination of these paradigms. These paradigms are naturally suited to massively parallel systems, and they require a huge amount of computation to solve practical problems within a reasonable amount of time.

These observations show that a massively parallel system is necessary to provide the computational power that RWC requires, and that it must also be general purpose so as to execute multiple paradigms efficiently. The massively parallel system should itself be flexible, adapting to application environments for optimal performance while minimizing the workload on the users.

The research and development will include the following topics.

7.1.1 Massively Parallel Architectures

The following are fundamental technologies that should be pursued in the development of general-purpose massively parallel systems:

1. Model. Flexible execution models, which can be bases of general-purpose architectures, should have the ability to fill the gaps between the language models and hardware. Flexibility that allows a mapping of a virtual computer onto actual processing elements should also be pursued.

2. Architecture. The massively parallel system should be based on a general-purpose architecture that supports various paradigms efficiently. It is also important to study hardware architecture in consideration of future device technology and packaging technology for an efficient implementation of the massively parallel system.

3. Interconnection Network. The interconnection network should provide high-speed communication that is comparable to computation speed. It should also provide support for dynamic load distribution, global synchronization, and global priority control. In implementing the high-speed interconnection network system, not only silicon technologies but also optical technologies should be considered.

4. Robustness/Reliability. Hardware-oriented robustness that can tolerate expected component failures in massively parallel systems should be examined. System components should have self-checking and self-repairing features. The total system should have a maintenance architecture or facilities to maintain system reliability.

In the first half of this programme, a prototype system consisting of 10,000 processing elements will be designed and developed as a platform. The platform will be used both as a base for software development tools and as a research platform for novel functions. Fundamental research on massively parallel models and architectures will be undertaken concurrently. In the second half, a massively parallel system on the order of 1 million processing elements is expected. It will have the ability to execute various kinds of RWC applications at real-time speed. The architecture will be based on the new massively parallel computing model to be studied in this programme.

7.1.2 Operating System for Massively Parallel Systems

The operating system for a massively parallel system should be designed to support the execution of various processes (parallel programs) concurrently with high throughput and to build a user-friendly software environment that hides hardware details and makes parallel programming easier. The research topics are as follows:

1. Hierarchical Structure. To realize functionally distributed management for flexible processor management, the operating system may require a hierarchical structure, which also makes the system scalable. Efficient mechanisms for controlling activities within this hierarchical structure, and for reducing the overhead of controlling parallelism and executing critical sections, should be considered.

2. Network Management. Advanced intelligent routing, addressing, synchronization, deadlock prevention, flow control, and failure preclusion should be incorporated into a flexible network management system.

3. Resource Management and Load Distribution. In the massively parallel system, synchronization overhead, access contention, and communication overhead will become more serious issues. To overcome these problems, the operating system should be able to collect management information autonomously and undertake statistical or adaptive management. Memory management and virtual systems for several resources should also be pursued for efficient scheduling and load distribution.

4. Fault Tolerance. In the massively parallel system, resource management should be done in a manner that allows for expected component failure rates. Therefore, it is necessary to handle the failure preclusion system as a normal process. Multi-route processing will also be required for tackling failures.
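The adaptive load distribution discussed in item 3 can be illustrated in miniature by a greedy scheduler that always places the next task on the currently least-loaded processor. This is only a sketch, under the simplifying assumption that task costs are known in advance; a real operating system would instead update load estimates from autonomously collected runtime measurements.

```python
import heapq

def assign_tasks(task_costs, n_procs):
    """Greedy least-loaded scheduling: each task goes to the processor
    with the smallest accumulated load. Hypothetical minimal sketch of
    adaptive load distribution; names and API are invented here."""
    heap = [(0.0, p) for p in range(n_procs)]   # (load, processor id)
    heapq.heapify(heap)
    assignment = {}
    for t, cost in enumerate(task_costs):
        load, p = heapq.heappop(heap)           # least-loaded processor
        assignment[t] = p
        heapq.heappush(heap, (load + cost, p))  # record its new load
    return assignment

# Four tasks on two processors: tasks 0 and 3 land on processor 0,
# tasks 1 and 2 on processor 1, balancing total load 8 against 7.
print(assign_tasks([5, 4, 3, 3], 2))  # {0: 0, 1: 1, 2: 1, 3: 0}
```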

7.1.3 Languages for Massively Parallel Systems

The language for the massively parallel system must be able to describe the coordinated operations of a large number of processes. The problem is how to extract the parallelism available in the problem domain and execute it with as much parallelism as the underlying system can provide. Various compilation techniques and run-time implementation techniques scalable to nearly 1 million processors should be studied. The following items should be considered:

1. Language Model. A language model is a description model for flexible programming languages for massively parallel systems. The model must be simple and sufficiently close to the underlying architectures so as not to restrict their computing power, and at the same time provide powerful means of abstraction to promote software programmability, portability, and reusability. Research on a language model should include fundamental research on supporting flexible languages, models for describing coordinating and cooperating actions, inheritance, and reflection.

2. High-level Languages for Massively Parallel Systems. The primary goals of high-level languages for massively parallel systems should be ease of programming and the ability to describe computation on the scale of 1 million processors. One viable candidate will be an appropriate amalgamation of concurrent object-oriented, functional, and declarative constraint-based approaches. Currently available object-oriented models are not intended to process more than 1 million processes, so the following extensions will be needed: introduction of a description system permitting hierarchical decomposition of complexity; diversification of message propagation systems; introduction of reflective functions for adapting and evolving objects; and declarative description of object relationships.
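The concurrent object-oriented style discussed above can be sketched in miniature: objects that hold private state and interact only through messages queued in a mailbox. The class below is a toy illustration (all names and the API are invented here); a real massively parallel language would distribute such objects across processing elements rather than run them as local threads.

```python
import queue
import threading

class Actor:
    """A minimal concurrent object: private state plus a mailbox.
    Messages are processed one at a time, so state needs no locks."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg, args, reply = self._mailbox.get()
            if msg is None:                      # shutdown sentinel
                break
            result = getattr(self, msg)(*args)   # dispatch to a method
            if reply is not None:
                reply.put(result)

    def send(self, msg, *args):                  # asynchronous send
        self._mailbox.put((msg, args, None))

    def ask(self, msg, *args):                   # synchronous request/reply
        reply = queue.Queue()
        self._mailbox.put((msg, args, reply))
        return reply.get()

    def stop(self):
        self._mailbox.put((None, (), None))

class Counter(Actor):
    def __init__(self):
        self.value = 0     # initialize state before the thread starts
        super().__init__()

    def increment(self, n):
        self.value += n

    def read(self):
        return self.value
```

Because the mailbox is processed in FIFO order by a single thread, `c.send("increment", 3); c.send("increment", 4); c.ask("read")` returns 7 without any explicit locking.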

7.1.4 Environment for System Development and Programming

1. Programming Environment. A programming environment that can support multi-paradigm programming is expected to be needed. Tools for debugging and for graphically monitoring and analysing load balance, communication characteristics, etc., will also be required. Since these functions may need hardware support, the architectural design should take these requirements into consideration.

2. System Development Support Environment. The requirements of an environment supporting the development of the massively parallel system include two features different from conventional ones. One is the support for the interconnection network development, where the overall functions such as robustness, dynamic load distribution and global synchronization mechanisms, and performance of the interconnection network should be evaluated, in advance, by system-level simulation. The other is support for the architecture development of processing elements, where a set of basic functions for processing elements should be determined through a functional assessment of the various subsystems, including the interconnection network.

7.2 Neural Systems

In recent years, neural networks based on the model of the brain have been receiving attention for their capabilities of learning/self-organization and many types of flexible information processing. However, these networks are still limited to small-scale applications because the neural models used are very simple and the learning is mostly based on the back-propagation technique and requires a large amount of computing time. Usually neural networks are simulated on conventional computers, and the simulation speed is very slow, especially on large networks. Therefore, how to realize high-speed processing on a neural network is an important subject, and it will be desirable to have special hardware.

In the RWC programme, the possibilities for large-scale neural networks will be explored in order to create flexible information-processing systems that can operate in the real world. The research and development will include: research on new models, hardware architectures, and software environments; development of a prototype system on the scale of 10,000 processing (neuron) units to provide a platform for research on neural models and applications. Later, a final system is to be developed, which is expected to be on the scale of 1 million neuron processing units. In the final stage, the neural system will be integrated with the massively parallel system to make flexible information processing a reality.

7.2.1 Neural Models

A flexible information system for RWC can be implemented using a large neural network that changes its own structure adaptively through interaction with the real-world environment. Realizing such a large neural network will require research on new models:

1. Neuron Unit Models. Simple neuron unit models have so far been used with success in limited areas. However, more advanced applications will demand more sophisticated neuron models. To begin with, the possibilities of already proposed models, such as the chaos neuron model, the complex number neuron model, and the neuron logic model, must be evaluated. At the same time, research must be done on new neuron models.

2. Modularization and Hierarchization. In the learning process of a large neural network through interaction with the real world, new knowledge should be acquired without destroying existing knowledge, and information should be efficiently retrievable. This implies the necessity of modularization and hierarchization of knowledge. Related important research topics are: learning mechanisms using centralized or distributed control for the purpose of realizing modularization, hierarchical structuralization and functional differentiation of a large-scale neural network, evaluation criteria for this and interaction among modules, etc.

3. Learning and Self-organization. Layered or hierarchical neural networks are effective in spatial pattern recognition, while recurrent neural networks are effective for recognition and generation of temporal patterns, and also for application to optimization problems. Since they will play a more important role in the future, it is vital to undertake research on the methods of learning and self-organization for recurrent neural networks. Another important research topic is the topology and size of a neural network, which are the most critical parameters for generalization capability of the network in learning by examples. A network should be large enough for learning and small enough for generalization.

4. Associative Memory. Association is one of the basic functions created by neural networks. Spatial or temporal patterns are memorized distributively and recalled on the principle of best match. It is necessary to theoretically clarify the principles of this association function and to work out an engineering mechanism to implement it. Related research topics are memory capacity, topological structure of memory, etc.

5. New Analog Computing Principles. Information processing by a neural system is based on the analog non-linear dynamics of the system. New principles of analog computing in neural systems, including chaos dynamics, must be clarified from this point of view.

6. Integration of Different Paradigms. Research must be done on models for integration of different paradigms. For example, the integration of a neural network and logical processing and the integration of pattern processing and symbol processing will be required for implementation of a neural system. The representation of input/output information in a neural network is important in that it functions as an interface when different paradigms are integrated, and in that it substantially affects the processing performance of the network. Therefore, theoretical and experimental research on input/output representation will be an important issue.
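The associative-memory function described in item 4, in which distributively stored patterns are recalled on the principle of best match, can be illustrated by the classical Hopfield model: patterns of +1/-1 values are stored by Hebbian outer-product learning and a corrupted probe relaxes back to the nearest stored pattern. This is only a minimal sketch of one well-known model, not the more advanced models the programme envisages.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product learning. `patterns` is an array of
    +/-1 row vectors; returns the symmetric weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)               # no self-connections
    return W / n

def recall(W, probe, steps=10):
    """Synchronous updates until the state settles (best-match recall)."""
    s = probe.copy()
    for _ in range(steps):
        nxt = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=24)
W = train_hopfield(stored.reshape(1, -1))
probe = stored.copy()
probe[:3] *= -1                          # corrupt three bits
print(np.array_equal(recall(W, probe), stored))  # True
```

The memory-capacity question raised above appears directly in this model: storing too many patterns relative to the network size causes recall to fail.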

7.2.2 Neural System Hardware

Real-world application of neural networks might require a large network, on a scale of 1 million neurons. Such a large neural network may be modularized and consist of sub-neural networks, each of 1,000 fully interconnected neurons. The hardware of the neural system must support such a large network at high speed. The target processing speed is 10 TCUPS (tera connection updates per second). In the design of neural system hardware, general-purpose and scalable mechanisms should also be incorporated so as to allow a wide variety of neural network models, because at present it is not clear which model is the best, and various new models are likely to be unveiled in the future. Hardware for neural systems can be classified into the following three types: neuro-accelerators, VLSI neuro-chips, and engineering implementations of neural networks.
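The scale targets quoted above can be checked with simple arithmetic. Assuming, as stated, 1 million neurons organized into fully interconnected modules of 1,000 neurons, and counting only intra-module connections (inter-module links are ignored in this sketch):

```python
# Back-of-envelope check of the neural-system scale targets.
neurons_total = 10**6                 # final-system target
module_size = 10**3                   # fully interconnected sub-network
modules = neurons_total // module_size

# A fully interconnected module of n units has ~n^2 connections.
conns_per_module = module_size ** 2
total_connections = modules * conns_per_module   # 10**9 intra-module links

target_cups = 10 * 10**12             # 10 TCUPS = 10^13 updates/s

# Complete update sweeps of the whole network per second at that rate.
sweeps_per_second = target_cups // total_connections
print(total_connections, sweeps_per_second)      # 1000000000 10000
```

On these assumptions, the 10 TCUPS target corresponds to about 10,000 full-network update sweeps per second, which also shows why the 1 GCUPS neuro-accelerators mentioned below fall four orders of magnitude short of the final goal.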

A neuro-accelerator consisting of special-purpose parallel processors should be developed for neural network processing. A large number of architectures for this have been proposed. A typical architecture consists of hundreds of processors and achieves 1 GCUPS.

A VLSI neuro-chip is hardware implementing one or more neuron units. The domain of neuro-chip architecture is wide, ranging from digital circuit chips to analog circuit chips. Digital circuit neuro-chips have various advantages, such as high noise tolerance, high processing accuracy, and direct applicability of ordinary computer manufacturing technology. They are suitable for stable operation in a large system. In addition, it is easy to implement other variations using the pulse-density model or the like, opening up new neuro-chip possibilities. On the other hand, analog circuit neuro-chips make it possible to reduce the hardware volume because they have fewer operation circuits. This is advantageous for developing large-scale networks. Moreover, analog circuits have the potential for implementing dynamic and complex neural networks such as the chaos neural network. It is also possible to consider digital-analog hybrid neuro-chips, which combine the strengths of both.

The engineering implementation of a neural network is the third approach in which the functions of the neural network are implemented through hardware logic without the use of neuron unit hardware.

It is difficult to compare these approaches, since each approach is unique in terms of learning capability, scalability, and so on.

In a neural system, all neuron units exchange their activation values. Therefore, the interconnection network architecture is an important point in design. Methods of time or frequency multiplexing are possible solutions to this problem. Related important technologies include wafer scale integration, three dimensional architecture, and optical interconnection. CAD and silicon compilers are considered to be important design tools.

7.2.3 Neural System Software

A variety of neural software systems are required for the research and development of neural systems:

1. Simulation System. A flexible, general-purpose neural simulator for large-scale neural networks would be a powerful tool. The requirements for such a simulator are high-speed processing, machine independence, extensibility, convenient user interface, and a variety of utility routines. It is also desirable that such a simulator has mathematical analysis tools to describe and analyse the convergence or cognitive performance of individual networks.

2. Neural Network Language. Neural network processing should be described using a high-level language. The design of such a language demands the following research and development: expression of ambiguous information, description of best-match operations, integration with logical programming, and integration with simulators.

3. Operating System. When the number of hardware neurons is smaller than the number of units in a neural network, a virtual mechanism to fill in this gap will be important. Related research topics are: mechanisms for mapping the neural network onto the hardware, scheduling of resources, etc.
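The virtual mechanism described in item 3 can be sketched as a simple block mapping of virtual neuron units onto physical neuron processors, each processor simulating a contiguous block of units in turn. This is a hypothetical minimal scheme (function name and interface invented here); practical mappings would also weigh network connectivity to minimize inter-processor traffic.

```python
def map_units(n_virtual, n_physical):
    """Assign virtual neuron units 0..n_virtual-1 to physical
    processors in contiguous blocks, as evenly as possible.
    Returns one range of unit indices per processor."""
    base, extra = divmod(n_virtual, n_physical)
    mapping, start = [], 0
    for p in range(n_physical):
        size = base + (1 if p < extra else 0)   # spread the remainder
        mapping.append(range(start, start + size))
        start += size
    return mapping

# 10 virtual units on 3 processors: block sizes 4, 3, 3.
print([len(r) for r in map_units(10, 3)])  # [4, 3, 3]
```

Each sweep of the virtual network then costs roughly `n_virtual / n_physical` sequential unit updates per processor, which is the gap the scheduling mechanisms mentioned above must manage.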

7.2.4 Integration with a Massively Parallel System

It is highly probable that a neural system will be one of the processors for such specific purposes as associative memory, pattern recognition, and combinatorial optimization. This means that the neural system will be combined or integrated with other computing systems. The forms of integration range from close connection to loose connection. An example of close connection is neural systems connected as associative memories to the processing elements of a massively parallel system, and an example of loose connection is a massively parallel system throwing problems such as optimization to the neural system.

7.3 Optical Computing Systems

Light is expected to be a new information medium, because of its extended transmission capacity and massively parallel processing capability. Optics will provide new device technology as well as new architectures and algorithms in the RWC programme that aims at flexible information processing using massively parallel distributed processing. Research topics will be classified into the following categories.

7.3.1 Optical Interconnection

Optical interconnection merges the advanced electronics technology represented by VLSI with optical communication technology and thus eliminates information transmission problems in electronic systems such as propagation delay, line-to-line cross-talk, space factors of wiring and mounting, and large power consumption.

By using high-density multiplexing technologies in time, space, and wavelength, optical interconnection devices, architectures, and design technologies will offer high-speed, large-capacity optical interconnection with such flexible functions as reconfigurability and self-routing. These are also key technologies for realizing optical neural systems and optical digital systems.

In order to develop optical interconnection, the following issues are important:

- Ultrafast (sub-picosecond) optical interconnection devices for large-capacity interconnection by using time division multiplexing (TDM)
- Space-parallel/functional optical interconnection devices for reconfigurable high-speed interconnection by using space division multiplexing
- Wavelength-parallel/functional optical interconnection devices for large capacity/reconfigurable interconnection by using wavelength division multiplexing (WDM) and wavelength-selective self-routing technology
- Passive optical interconnection elements including micro-optics and diffractive elements for developing optical components having advantages of both stability and high-density optical interconnections
- Advanced opto-electronic integrated devices and circuits combining different material systems/functions for compact and smart devices for the next generation following the aforementioned devices
- Research on interchip and intra-chip optical interconnection for high speed and flexible optical interconnection networks between processing elements, between processors and memories, and between memories
- Modularization of opto-electronic devices and passive optical elements for integration and miniaturization of optical interconnection components

7.3.2 Optical Neural Systems

Optical neural systems aim at realizing real-time processing of images and other spatially distributed information or spectral information through learning and associative processing, using massive and flexible interconnectivity of light.

In order to develop such systems, the following issues are important:

1. Optical Neural Models

- Models for direct input and processing of 2-D/3-D image information by neural networks
- Novel theoretical models using physical phenomena of light, such as bistability, chaos, and phase conjugation
- Expandable modular models consisting of a number of unit modules
- Models for implementing optical analog devices that are low in accuracy but excellent in large-scale configuration at high speed

2. Optical Neural Devices

- Large-scale optical array devices that can vary synaptic connection weights according to electric/optical learning signals
- Optical neural devices for direct image recognition and processing and for extracting features of input images
- Modularization and standardization of optical neuro-chips

3. Optical Neural Systems

- Design technology for the distribution of functions, hierarchization of the system, and realization of accurate processing through system integration with digital computers
- Learning methods for acquiring knowledge from training signals, storing them as structured knowledge, and technologies for increasing learning speed
- Human friendly I/O interface technologies for direct processing of multimedia information and also for image database allowing direct search of images by key image

7.3.3 Optical Digital Systems

Optical digital systems aim at realizing massively parallel and accurate processing of images and other spatially distributed information or spectral information with logical computation principles using massive and flexible connectivity of light.

To develop such systems, the following components are required:

1. Optical Logic Devices

- High-speed binary/multi-valued optical devices and their two dimensional integration with low power consumption
- Space-parallel optical logic devices encoding signals in the form of spatially coded patterns
- Wavelength-parallel optical logic devices encoding signals as combinations of light of different wavelengths
- Passive optical devices for micro-optics, planar optics, diffractive optics, and high-precision optics

2. Optical Logic Circuit

- Reconfigurable optical interconnection between optical logic devices
- Functional modularization of optical logic elements, such as parallel optical registers, parallel optical memories, optical crossbar switches, and optical I/O units
- CAD technology for 3-D optical circuit design

3. Optical Digital Systems

- Architecture and design for general/special purpose, optical parallel computers with highly accurate and flexible processing capabilities, based on explicit logical algorithms
- I/O interface for high-speed data exchange between optical logic circuits and electronic systems
- Technologies for implementing and integrating different functional optical modules
- Programming languages and compilers for optical parallel digital systems, compatible with those for electronic computers

7.3.4 Environment for System Development

Optical computing technologies are based on the presumption of using newly developed optical devices, and modularization of opto-electronic devices is also an important goal in the RWC programme. Optical contributions to the highly parallel and massively distributed systems shall be verified with such modules.

OEICs will be key devices for optical interconnection, optical neural systems, and optical digital systems. Development of OEICs should be based on the common platform of processing and module technologies in order to retain compatibility with the system.

The subjects that need to be investigated are: advanced OEIC processing technology, opto-electronic module technology, and standardization and CAD technology.

8. Research organization and plan

8.1 Basic Policy

The primary goal of this programme is not to develop a single computer but to explore the possibilities of elemental technologies that are significant and as yet unestablished. In order to accomplish this challenging and very fundamental goal, the programme is to be managed under the following fundamental policy:

1. Formation of flexible research organization. Research themes are appropriately allotted so that common-base (such as computational bases) or system integration-oriented research is performed in the central laboratory while individual or elemental research is performed in the distributed laboratories, and an organic and flexible link between both parts is secured.

2. Introduction of competitive principles. The programme introduces competitive principles in the first stage, taking various approaches, and selects the research themes to be investigated in the second stage on the basis of the results of evaluation after the initial five years.

3. Interdisciplinary and international cooperation. The programme promotes interdisciplinary and international cooperation in order to fulfil the basic aims, supporting joint research with national institutes like the ETL and universities, etc., and inviting subcontractual applications from domestic/overseas research organizations such as universities, etc.

4. Publication of research achievements. The progress and results of research and development are to be reported and publicized at domestic/ foreign conferences, etc., and by actively holding symposia and workshops as well.

5. Establishment of infrastructure for research activities. A high-speed computer network is established as the infrastructure for internationally distributed research, and formation of a flexible research organization as well as exchanges of research results are supported.

8.2 Organization Scheme

MITI selects about a dozen Japanese companies, including almost all the major ones in electronics, which form the RWC Partnership. The RWC Partnership will found its own central laboratory (RWC Research Center) near the Electrotechnical Laboratory (ETL) in Tsukuba City, expecting close cooperation with the ETL and receiving researchers from the laboratories of each company.

The ETL, which belongs to MITI and has been playing an important role in concept formation of the programme, will continue to support and lead the programme by sending some researchers into the main positions at the RWC Research Center and also carrying out its own basic, leading research for RWC.

As with the previous Fifth Generation Computer project, MITI will provide a similar total budget (about US$500 million over 10 years). The main part of the budget will be allocated to the RWC Partnership (approximately half to the RWC Research Center and the other half distributed among company laboratories), about 10 per cent to the ETL and domestic universities, and about 15 per cent to foreign research institutes to promote international cooperation.

There is a modality for foreign researchers to participate in the RWC programme. Foreign companies and non-academic organizations will be permitted to join the RWC Partnership directly, while foreign universities will be able to participate either as subcontractors or through joint research with the ETL and RWC Research Center. In the case of joint research, there is basically no budget flow beyond information exchange.

8.3 Time Schedule

A two-year preliminary study was undertaken in 1989 and 1990 under the research committee on the New Information Processing Technology (NIPT) and several working groups, which included the participation of more than 100 researchers in various fields from universities, national institutes like the ETL, and companies. The final report was published in March 1991. FY 1991 was devoted to the feasibility study under the new name of Real-world Computing (RWC) and toward making a master plan for the RWC programme in May 1992. Under these activities, we organized three workshops (Dec. 1990, Nov. 1991, and Mar. 1992) and one international symposium (Mar. 1991), which were open to foreign countries.

The RWC programme is starting in 1992. The RWC Partnership will be established by August, and the RWC Research Center will open in October. A call for subcontracts will be announced overseas this autumn, and the deadline will be the end of 1992. The applications will be reviewed and selected in January and contracts will be prepared in April of 1993. The chance to join the RWC Partnership or to apply for subcontracts will be kept open after this first opportunity.


1. Otsu, N. (1989). "Toward Soft Logic for the Foundation of Flexible Information Processing." Bull. ETL 53 (10): 75-95.

2. Report of the Research Committee on New Information Processing Technology (1991). Industrial Electronics Division, Machinery and Information Industries Bureau, MITI.

3. The Master Plan of the RWC Program (1992). Industrial Electronics Division, Machinery and Information Industries Bureau, MITI.


Introducing the discussion, M. Dierkes stressed that the second part of Session 4 dealt with two main issues related to the interface between information technology and human culture: first, the translation of information into the huge variety of world languages and the specific role of "sub-languages" in different professional fields and segments of society; second, the development of corresponding technologies that are less rigid than conventional information processing by computers and thus make possible the handling of the more ambiguous information typical of human thinking and communication. He continued by saying that the vision of everyone communicating in their native language, instantaneously translated into other languages by highly "intelligent" machines, seems to be socially and culturally desirable. Whether this is technologically feasible in the foreseeable future and whether it is economically viable seem to him key questions to be addressed. A third issue, he claimed, is the problem of access to these tools, especially in the case of languages spoken by small segments of the world population and by people who lack sufficient financial resources to develop the relevant technologies.

The rationale of the great number of commercial products dealing with translations from English into Japanese and vice versa was questioned by N. Streitz, especially in view of the fact that the basic linguistic principles are public knowledge. D. Lide suggested that one might look for "a neutral interchange language" to which each natural language would be translated. Comments were made by G. Johannsen on the policy of Japan's MITI to foster competition in domestic research projects but to support cooperation within Japan when it comes to international competition. He said that this might be an example to be followed by others, especially the developing countries, calling for pooling resources and sharing costs of R&D.

The point was raised by M. Dierkes that efficient machine "translation" of languages and sub-languages would be an important element in preserving cultural diversity and facilitating communication in today's multipolar world. He stressed that there is a great need for future research to improve available technologies to the point that they are able to "intuitively" process information.

Finally, the need for international cooperation in this field was strongly supported by all participants: The cost of developing dictionaries, the necessity of understanding the cultural framework, the huge variety of sub-languages representative of social and professional "subcultures" or spheres of life, the cost of hardware and software production, the research required to go beyond the bilingual machine, and the need for more "intuitive" information processing are claimed to be just some of the technological, economic, and social aspects calling for international cooperation.
