



4. Examples of front-ends

Several front-ends and related knowledge bases are briefly described to illustrate the state of the art, with particular attention to support for accessing scientific, technical, and medical information.

4.1 Medicine: Grateful Med and Loansome Doc

To allow physicians and other health care professionals to search a variety of medical databases, such as Medline available on the National Library of Medicine's (NLM) MEDLARS system, staff at the National Library of Medicine have developed Grateful Med for the PC [30], with Version 6.0 scheduled for release in June 1992 [23]. It assists with menu-driven off-line entry of strategies. Once on-line, it automatically reformats the terms entered into MEDLARS commands, executes the search, saves the results, logs off the system, reformats and displays the citations. The Grateful Med software generates suggested controlled vocabulary terms (Medical Subject Headings) based on retrieved Medline citations. When a strategy results in zero retrieval, a help screen is available that offers suggestions for modifying search strategies. COACH, an expert searcher program to help Grateful Med users improve their retrieval, is currently under development.

Since Grateful Med accesses bibliographic databases, users also need assistance in locating the actual documents. Loansome Doc, introduced in 1991, allows the individual user to place an on-line order for a copy of the full article for any reference retrieved from Medline. If the user's library can fill the document request directly or if it is filled through interlibrary loan, the user receives a photocopy by the preferred delivery method (e.g., mail or fax).

4.2 Medicine: Unified Medical Language System

The goal of the National Library of Medicine's Unified Medical Language System (UMLS) project is to give easy access to machine-readable information from diverse sources, including the scientific literature, patient records, factual data banks, and knowledge-based expert systems [17]. The barriers to integrated access to information in these sources include: the variety of ways the same concepts are expressed in the different machine-readable sources (and by users themselves), and the difficulty of identifying which of many existing databases have information relevant to particular questions. The UMLS approach to overcoming these barriers is to develop "knowledge sources" that can be used by a wide variety of application programs to compensate for differences in the way concepts are expressed, to identify the information sources most relevant to a user query, and to carry out the telecommunications and search procedures necessary to retrieve information from these information sources.

The three UMLS knowledge sources are: (1) a metathesaurus of concepts and terms from several biomedical vocabularies and classifications; (2) a semantic network of the relationships among semantic types or categories to which all concepts in the metathesaurus are assigned; and (3) an information sources map that describes the content and access conditions for the available biomedical databases in both human-readable and machine-readable form. Objectives for the next three years of the UMLS project are to develop and implement important applications that rely on the UMLS knowledge sources, to establish production systems for ongoing expansion and maintenance of the knowledge sources, and to expand the content of the knowledge sources to support the applications being developed. The NLM plans to develop these capabilities within its existing user interface, Grateful Med, and in COACH. For example, COACH uses the metathesaurus to augment user search terms and to help find new terms.

4.3 Environment: ANSWER

ANSWER is a stand-alone microcomputer-based workstation designed for use by health professionals and related personnel in US state and federal agencies responding to hazardous chemical situations. It was developed by the Toxicology Information Program of the National Library of Medicine for the Agency for Toxic Substances and Disease Registry. ANSWER illustrates the possibilities of using local data access and front-end capabilities in support of problem solving in emergency situations. ANSWER includes: a CD-ROM database with information on the medical and hazard management of exposure to over 1,000 hazardous substances; a database of information on previous chemical emergencies; a gateway to the National Weather Service's on-line information (automatic dial-up, log-on, and data capture for state, regional, and local weather information); an air dispersion modelling package for determining plume path and dispersion; a front-end for access to chemical, toxicological, and hazardous waste files located in various governmental and private sector on-line systems; and a report generation capability for editing, sorting, merging, and transforming retrieved data files.

4.4 Environment: Eco-Link

Eco-Link was developed as an electronic research system to take advantage of electronic sources of information on the environment and to coordinate their acquisition, storage, and presentation [39]. It integrates a wide variety of data from electronic sources relating to the environment. The heart of the system consists of download-filter-manage software routines that automate access to electronic databases and process the acquired information so as to merge data from a broad range of different sources in a common set of locally constructed databases. Eco-Link standardizes output from on-line catalogues reachable through the Internet, bibliographic citations from locally mounted databases for newspapers and journal articles, and information from commercial vendors of full text, directories, news sources, and statistical data.
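A download-filter-manage routine of the kind described above can be sketched as a normalization pipeline: records arriving from different sources in different shapes are filtered into one locally defined schema so they can be merged and managed together. The field names and source formats below are illustrative inventions, not Eco-Link's actual record layouts.

```python
def normalize(record: dict, source: str) -> dict:
    """Map a source-specific record onto a common local schema."""
    if source == "opac":           # on-line catalogue reachable via the Internet
        return {"title": record["ti"], "year": int(record["yr"]),
                "origin": "opac"}
    if source == "newspaper":      # locally mounted newspaper citation file
        return {"title": record["headline"], "year": int(record["date"][:4]),
                "origin": "newspaper"}
    raise ValueError(f"unknown source: {source}")

def merge(batches):
    """Merge normalized records from all sources into one local collection."""
    merged = []
    for source, records in batches:
        merged.extend(normalize(r, source) for r in records)
    return sorted(merged, key=lambda r: r["year"])

records = merge([
    ("opac", [{"ti": "Acid Rain", "yr": "1990"}]),
    ("newspaper", [{"headline": "Ozone Layer Report", "date": "1989-04-12"}]),
])
```

Once every source has been mapped onto the common schema, downstream presentation and management code needs to handle only one record shape.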

4.5 Chemistry: Graphics Front-ends

Chemical structure searching presents a need for customized front-ends, allowing the scientist to use two-dimensional chemical structure diagrams. Graphics front-ends support the off-line building of chemical structure graphics and subsequent uploading to a host computer, as well as the capture (downloading) of retrieved records. Warr and Wilkins [37] have reviewed the key features of a number of these graphics front-ends, such as STN Express, the front-end software that provides access to STN international databases. STN Express enables one to prepare off-line the strategy formulation (including structural query formulation) and then upload the search strategy line by line after logging on. Off-line chemical structure building is menu-driven. In addition to the ability to create search strategies off-line, the program provides predefined search strategies for general subjects, such as toxicity, that take advantage of individual databases provided by STN. The MOLKICK software package allows the user to enter chemical structures and then translates them into the proper format for searching in three different host systems (STN, Télésystèmes Questel, Dialog) [2].

4.6 Engineering: Ei Reference Desk

Engineering Information Inc. (Ei) has been developing an integrated software package that is designed to bring together both the searching and retrieval of documents [4, 28]. Users will have a choice of browsing through electronic tables of contents for engineering journals, searching COMPENDEX PLUS on CD-ROM, accessing other databases through a telecommunications link, and marking documents for automatic ordering and delivery from Ei's document delivery service. Each function of the Reference Desk has been implemented as a separate application but integrated within the Windows graphical interface. A planned enhancement is an electronic mail function.

4.7 The Livermore Intelligent Gateway

The Livermore Intelligent Gateway creates a framework that links distributed, heterogeneous computer resources and provides a single user interface such that a "virtual information system" can be tailored to any user's needs [8]. In addition to extensive data access capabilities, the Gateway system provides powerful analysis and processing tools to complete the creation of an integrated information environment. Once connected to the selected host, the user may interact in the system's native mode, use a Gateway overlaid common command language, or execute a fully automated search and retrieval procedure for routine tasks. Having simplified access to and retrieval of information, be it bibliographic, numeric, or graphic, the Gateway provides a tool kit to further analyse and repackage the information. Post-processing tools fall into two major categories: analysis of numeric data through statistical, mathematical, and graphics software, and analysis and restructuring of text through translation and analysis routines. In addition to the analytical tool kit, the Gateway provides sophisticated electronic mail capability as well as a wide variety of Unix utilities such as text editors and document preparation subsystems. The menus that a given user or group of users sees on the Gateway can be tailored to create a customized environment.

4.8 TOME SEARCHER and IMIS

TOME SEARCHER is microcomputer software that seeks to provide the inexperienced on-line user with a series of facilities [34]: choice of database(s) in relation to the subject of a search; guidance in formulating the scope of the search; natural language input of the search topic; guidance in clarifying and/or amplifying the topic; automatic conversion of the topic into a Boolean search statement; automatic inclusion of synonyms and spelling variants in the search statement; estimate of the likely yield of a search statement; and guidance in narrowing or broadening the statement if the estimated yield does not match the output specified by the user. All this takes place off-line. The system continues by providing automatic dial-up, automatic transmission of search statements to the host using the appropriate command language, display of dialogue with the host, automatic downloading of search output, and the ability to browse through the downloaded records. Each implementation of TOME SEARCHER is customized to a particular subject area, such as electrical and electronics engineering. TOME SEARCHER is one component of the more ambitious IMIS project to develop an intelligent multilingual interface to databases, mounted on an IBM PC and accessing a number of European hosts [36]. IMIS will be designed to support interaction in English, French, German, and Spanish.
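The off-line conversion step described above, turning a natural-language topic into a Boolean search statement with synonyms and spelling variants OR-ed in, can be sketched in miniature. The stop-word list and variant table here are invented for illustration; the real system's subject-specific knowledge base is far richer.

```python
STOP_WORDS = {"the", "of", "in", "on", "a", "an", "and"}
VARIANTS = {                       # hypothetical synonym/spelling-variant table
    "fibre": ["fiber"],
    "optic": ["optical", "optics"],
}

def to_boolean(topic: str) -> str:
    """Convert a natural-language topic into a Boolean search statement."""
    terms = [w for w in topic.lower().split() if w not in STOP_WORDS]
    groups = []
    for t in terms:
        alts = [t] + VARIANTS.get(t, [])
        # OR together a term and its variants; AND the resulting groups
        groups.append("(" + " OR ".join(alts) + ")" if len(alts) > 1 else t)
    return " AND ".join(groups)

to_boolean("attenuation in optic fibre")
# "attenuation AND (optic OR optical OR optics) AND (fibre OR fiber)"
```

The same statement could then feed the system's yield estimator, which advises the user to broaden or narrow before ever going on-line.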

4.9 EasyNet

Perhaps the best-known front-end is EasyNet, which offers access to multiple databases on 13 hosts, including many science and technology databases [32]. It gives searchers the option of selecting a database themselves or allowing EasyNet to do so based on answers to a series of questions related to the subject and type of material required. Searching can be accomplished using menus to assist in constructing a search strategy or with commands based on the Common Command Language. Users are responsible for selecting their search terms and also for selecting Boolean logical operators to relate these terms. EasyNet translates the strategy into the command language of the host selected and logs on. After the search is completed and the data downloaded to EasyNet's computer, the user is logged off from the host. On-line help from professional reference staff is available by typing SOS. A customized version of EasyNet is marketed by BIOSIS as the Life Science Network, providing access to more than 80 databases [29]. Dyckman and O'Connor [11] report the results of a study analysing user problems handled by the SOS help service. Their analysis revealed that users seeking human help found the front-end's assistance inadequate in wording their search statements, using features of a specific database, or deciding which database to use.
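The translation step described above, rewriting one strategy into the command language of whichever host was selected, can be sketched as a table-driven rewrite. The host syntaxes below are invented simplifications, not the actual command languages of any EasyNet host.

```python
HOSTS = {                          # hypothetical host command syntaxes
    "dialog": {"command": "SELECT", "and": "AND", "or": "OR"},
    "brs":    {"command": "SEARCH", "and": "*",   "or": "+"},
}

def translate(terms, ops, host):
    """Interleave user terms and Boolean operators in the host's own syntax."""
    h = HOSTS[host]
    parts = [terms[0]]
    for op, term in zip(ops, terms[1:]):
        parts += [h[op], term]
    return f"{h['command']} {' '.join(parts)}"

translate(["solar", "energy"], ["and"], "brs")    # "SEARCH solar * energy"
```

Because the user supplies only terms and operators, the same strategy can be replayed against a different host simply by switching the syntax table.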

4.10 Wide Area Information Servers

The Wide Area Information Server (WAIS) project seeks to determine whether current technologies can be used to create end-user full-text information systems [19]. The WAIS system is composed of three separate parts: clients, servers, and the protocol (Z39.50) that connects them. The client is the user interface, the server does the indexing and retrieval of documents, and the protocol is used to transmit the queries and responses. Questions are formulated as English-language queries, which are then translated into the WAIS protocol and transmitted to a server that translates the encoded query into its own query language and then searches for documents satisfying the query. The list of relevant documents is then encoded in the protocol and transmitted back to the client, where it is decoded and the results are displayed. The user may modify the query or mark some of the retrieved documents as being relevant. The system can then attempt to find other documents that are similar to those judged relevant. A single interface provides access to many different information sources. With WAIS, the user may select multiple sources to query for information. The system automatically asks all the servers for the required information with no further interaction necessary by the user. The documents retrieved are sorted and consolidated in a single place, to be easily manipulated by the user. To support selection of databases, an on-line Directory of Servers is maintained. It can be queried to identify potential sources on a topic.
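The interaction pattern just described, one client fanning a query out to several servers and consolidating their ranked answers, can be sketched as follows. The word-overlap scoring is a toy stand-in for a server's real retrieval engine, and no actual Z39.50 encoding is attempted.

```python
def server(index):
    """Return a search function over one server's toy document index."""
    def search(query_words):
        results = []
        for doc in index:
            # score by how many query words the document contains
            score = sum(w in doc.lower() for w in query_words)
            if score:
                results.append((score, doc))
        return results
    return search

def client(query, servers):
    """Fan the query out to all selected servers and merge ranked results."""
    words = query.lower().split()
    hits = []
    for search in servers:
        hits.extend(search(words))
    return [doc for score, doc in sorted(hits, reverse=True)]

servers = [
    server(["Acid rain in Europe", "Rain forest ecology"]),
    server(["Acid rain and lakes"]),
]
ranked = client("acid rain", servers)
```

Relevance feedback would extend the loop: documents the user marks as relevant are folded back into the query before the next fan-out.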

5. Evaluation of front-ends

Front-ends are designed as tools for users. To assess their performance and to identify areas in need of improvement, it is necessary to evaluate them. As noted above, front-ends may function in computer-assisted or computer-delegated mode. For those decisions that are computer-assisted, one must determine if the advice is helpful. For those decisions that are computer-delegated, one must determine if the computer's decisions are appropriate. Where assistance is not offered, one must determine if the targeted user group has the necessary expertise to function unassisted. Because the front-end controls what the user can request, one must determine whether it is "habitable" [38], where a habitable language is one in which its users can express themselves without straying over the language's boundaries into unallowed questions. Furthermore, one must consider whether the output options meet the users' needs. In addition, there is a need to analyse what effort is required to use the front-end. Van Brakel [33] suggests using the activities of a human intermediary as a framework for evaluating the capabilities of a front-end.

Because information retrieval is a complex task, it is difficult to develop front-ends that achieve human levels of performance. Buckland and Florian [7] caution that "delegation, with computers as with people, invites the possibilities of undesirable decisions by the person or machine to whom the decision has been delegated." Two examples can illustrate the limitations of front-ends. The first, drawn from early versions of Grateful Med, illustrates the difficulty of anticipating all possible variations in input that must be handled. Grateful Med allows the user to type in author names as initials followed by the surname. It is then programmed to translate this into the form required to search the Medline database, i.e., surname followed by a space and then the initials. Initially Grateful Med did not properly handle some input names. Entering "D.A.B. Lindberg" resulted in the translated string "B. Lindberg DA" because the front-end did not expect more than two initials prior to the surname. Clearly this is an error, but the user had no way to override it. The second example illustrates limitations in the database selection capability of EasyNet. A study by Hu [16] analysed this capability in INFOMASTER, a version of EasyNet. Database selection is accomplished by narrowing down the subject selections for a particular query using menu choices made by the human searcher. INFOMASTER then selects a database seemingly at random from among a group of databases in a particular subject field. Hu discovered that in some cases, the same menu selections by different searchers for the same query led INFOMASTER to select different databases. This meant that searches done at different times would be conducted on different databases, with varying levels of completeness and no indication to the user that there might be additional databases with better yields.
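The first limitation, the author-name translation bug, shows how small an oversight can be. A sketch of a corrected translation routine, one that handles any number of initials rather than assuming at most two, might look like the following (the function name and input convention are illustrative, not Grateful Med's actual code):

```python
def to_medline_author(name: str) -> str:
    """Translate 'initials then surname' input (e.g. 'D.A.B. Lindberg')
    into Medline's 'surname then initials' form (e.g. 'Lindberg DAB')."""
    parts = name.split()
    surname = parts[-1]
    # collapse however many initial tokens precede the surname
    initials = "".join(p.replace(".", "") for p in parts[:-1])
    return f"{surname} {initials}".strip()

to_medline_author("D.A.B. Lindberg")   # "Lindberg DAB"
to_medline_author("J. Smith")          # "Smith J"
```

Equally important, as the text notes, is giving the user a way to inspect and override such automatic translations.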
Given the possibility of such errors or poor advice in front-ends, systematic evaluation is needed to characterize their strengths and weaknesses and to pinpoint areas in need of improvement.

6. Directions for research and development

While the examples given in this paper indicate that there are already a number of front-ends available to scientists and others who wish to do their own searching, additional research and development are required to create more useful front-ends. In addition to completing evaluations as described in the previous section, a number of other issues need to be investigated as outlined below. It should also be noted that development of front-ends may be aided by new computing tools such as user-interface management systems (UIMS) [15].

6.1 From Directories to Resource Selection Aids

There is currently a great deal of activity in developing databases of databases or directories. Examples include the Directory of Biotechnology Information Resources maintained by the National Library of Medicine and the Listing of Molecular Biology Databases maintained by Los Alamos National Laboratory. Other examples are guides to library catalogues and the Internet Resource Guide. Work with WAIS and UMLS is exploring ways to use directories for resource selection. This is an important area for further research, since sophisticated assistance with search strategy formulation is of little value if an inappropriate resource has been selected.

6.2 Coping with Null Sets and Large Sets

Studies of end-user searching frequently reveal that such users often formulate strategies that retrieve nothing or that retrieve very large sets. Prabha [27] reviews a number of strategies for reducing large sets. Front-ends must be designed to help users modify strategies to avoid both situations.
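One simple heuristic for helping users out of both situations can be sketched as follows. The threshold and the facet-list representation are illustrative choices, not drawn from any particular system.

```python
def adjust(facets, hit_count, extra_facet, max_hits=200):
    """Return a revised facet list after a trial search, or the
    original list if the yield is acceptable."""
    if hit_count == 0 and len(facets) > 1:
        return facets[:-1]                  # broaden: drop the most specific facet
    if hit_count > max_hits:
        return facets + [extra_facet]       # narrow: AND in an extra restriction
    return facets

adjust(["robots", "welding", "review"], 0, "english")   # drops "review"
adjust(["robots"], 5000, "english")                     # adds "english"
```

A front-end could apply such rules iteratively, explaining each suggested change so the user stays in control of the strategy.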

6.3 Front-ends for Non-bibliographic Databases

Much of the effort to date has focused on creating front-ends to handle query formulation for multiple bibliographic databases. As Järvelin [18] points out, there is also a need for assisting with access to multiple numeric and other types of non-bibliographic databases. While some of the front-end functions would correspond to forms of assistance needed for accessing bibliographic databases, others, such as data conversion between varying data representations, must be dealt with in accessing numeric databases.

6.4. Multilingual Facilities

As Vickery and Vickery [35] remark, a needed further enhancement of front-ends would be the provision of multilingual facilities. They suggest that such a front-end could have the following features: (1) screen displays would be available in all the languages covered by the system; (2) the interface would accept input of a user query in each of these languages and would refine the query by interacting with the user in the language of input; (3) the terms of the refined query would be translated into the language of the selected database(s); and (4) retrieved records would be translated into the language of the user. As one example of such multilingual support, Halpern and Sargeant [13] describe a front-end for bilingual searching of Medline that has been developed by INSERM (Institut National de la Santé et de la Recherche Médicale) and the host Télésystèmes-Questel. It supports bilingual access at both the command and query level. At the command level, menus and help screens are presented in French only. At the query level, subject searching can be performed in French or English, taking advantage of a French translation of the controlled vocabulary Medical Subject Headings.
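The query-level support described above amounts to mapping user terms through a bilingual controlled-vocabulary table onto the English Medical Subject Headings actually indexed in Medline. A minimal sketch follows; the table entries are illustrative, and the real French MeSH translation maintained by INSERM covers tens of thousands of terms.

```python
FR_TO_MESH = {                     # hypothetical fragment of a bilingual table
    "rougeole": "Measles",
    "vaccin":   "Vaccines",
}

def translate_query(terms_fr):
    """Map French query terms to English MeSH; pass unknown terms through."""
    return [FR_TO_MESH.get(t.lower(), t) for t in terms_fr]

translate_query(["Rougeole", "vaccin"])   # ["Measles", "Vaccines"]
```

Passing unmatched terms through unchanged lets the user fall back to free-text searching where the controlled vocabulary has no equivalent.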

6.5 Knowledge Acquisition

One of the problems involved in the design of more sophisticated front-ends is the lack of extensive knowledge gathered from experts about how they search. As human intermediary expertise is better understood, the problem of knowledge acquisition by the front-end remains. If performance of front-ends is to improve over time, then some provision for modification or learning must be implemented.

6.6 User-Friendly Systems

Vickery [36] has remarked that a front-end "accesses existing online systems, with all their constraints and deficiencies, so it can only be as successful as the online search system allows it to be. An interface does not address the problem of restructuring the database or the search system to make retrieval more intelligent." Harman [14] argues that attempts to develop more user-friendly front-ends are inherently limited by the design of the underlying retrieval systems. WAIS is a current example of efforts to create both a more user-friendly front-end and a system that is easier to search, based on statistical methods, allowing natural language input, and returning lists of records in order of likely relevance. Research is needed to evaluate and further develop these alternative approaches to retrieval so that future front-ends are not so constrained. Knowbots [9] (knowledge robots transporting the user's request out into the universe of digitized information, where outlying knowbots will search for answers) may well be part of that future, but there still must be provision for human oversight of the information retrieval process.

7. Conclusion: Implications for developing countries

Keren and Harmon [20] suggest that nearly all publications that deal with information work in the developing countries cite one or more of the following problem areas: "lack of appreciation by national decision makers for the role of STI in development; the absence of an adequate infrastructure for information storage and processing; the absence of an adequate infrastructure for information use and absorption by users; and economic, administrative, technological, cultural, educational, and structural barriers to an adequate information flow." Given this context, it is necessary to ask what role computerized front-ends might play in overcoming barriers to information access and use in developing countries.

A key lesson is that there is the possibility with front-ends of accommodating some differences (e.g., language spoken) among users of information systems and of providing some guidance in the use of information systems tailored to the needs of particular user groups. As the telecommunications infrastructure gradually develops, making possible access to remote information resources, it will be necessary to investigate how best to design front-ends to meet the needs of particular user groups in specific countries or regions. In addition, as indigenous information sources are developed, front-ends have a role to play in integrating access to indigenous and external information sources where both are relevant to the users' needs.

References

1. Allen, R.B. (1990). "User Models: Theory, Method and Practice." International Journal of Man-Machine Studies 32 (5): 511-543.

2. Badger, R., C. Jochum, and S. Lesch (1988). "MOLKICK: A Universal Graphics Query Program for Searching Databases with Chemical Structures." Proceedings of the National Online Meeting 9: 7-8.

3. Bates, M.J. (1990). "Where Should the Person Stop and the Information Search Interface Start?" Information Processing & Management 26 (5): 575-591.

4. Berger, M.C. (1989). "Engineering Information Workstation." Proceedings of the National Online Meeting 10: 33-35.

5. Borgman, C.L., and Y.I. Plute (1992). "User Models for Information Systems: Prospects and Problems." In F.W. Lancaster and L.C. Smith, eds. Artificial Intelligence and Expert Systems: Will They Change the Library? Urbana, Ill.: Graduate School of Library and Information Science, University of Illinois, pp. 178-193.

6. Brajnik, G., G. Guida, and C. Tasso (1990). "User Modeling in Expert Man-Machine Interfaces: A Case Study in Intelligent Information Retrieval." IEEE Transactions on Systems, Man, and Cybernetics 20 (1): 166-185.

7. Buckland, M.K., and D. Florian (1991). "Expertise, Task Complexity, and Artificial Intelligence: A Conceptual Framework." Journal of the American Society for Information Science 42 (9): 635-643.

8. Burton, H.D. (1989). "The Livermore Intelligent Gateway: An Integrated Information Processing Environment." Information Processing & Management 25 (5): 509-514.

9. Daviss, Bennett (1991). "Knowbots." Discover 12 (4): 21-23.

10. Drenth, H., A. Morris, and G. Tseng (1991). "Expert Systems as Information Intermediaries." Annual Review of Information Science and Technology 26: 113-154.

11. Dyckman, L.M., and B.T. O'Connor (1989). "Profiling the End-User: A Study of the Reference Needs of End-Users on Telebase System, Inc.'s Easynet." Proceedings of the National Online Meeting 10: 143-152.

12. Efthimiadis, E.N. (1990). "Online Searching Aids: A Review of Front Ends, Gateways and Other Interfaces." Journal of Documentation 46 (3): 218-262.

13. Halpern, J., and H.A. Sargeant (1988). "A New End-User Interface for Bilingual Searching of MEDLINE." Proceedings of the International Online Information Meeting 12: 427-443.

14. Harman, D. (1992). "User-friendly Systems Instead of User-friendly Front-Ends." Journal of the American Society for Information Science 43 (2): 164-174.

15. Hix, D. (1990). "Generations of User-interface Management Systems." IEEE Software 7 (5): 77-87.

16. Hu, C. (1988). "An Evaluation of a Gateway System for Automated Online Database Selection." Proceedings of the National Online Meeting 9: 107-114.

17. Humphreys, B. (1991). "Unified Medical Language System: Progress Report." National Library of Medicine News 46 (11-12): 7-8.

18. Järvelin, K. (1989). "A Blueprint of an Intermediary System for Numeric Source Databases." In S. Koskiala and R. Launo, eds. Information*Knowledge* Evolution. Amsterdam: North-Holland, pp. 311-320.

19. Kahle, B., and A. Medlar (1991). "An Information System for Corporate Users: Wide Area Information Servers." Online 15 (5): 56-60.

20. Keren, C., and L. Harmon (1980). "Information Services Issues in Less Developed Countries." Annual Review of Information Science and Technology 15: 289-324.

21. Longley, D., and M. Shain (1989). Macmillan Dictionary of Information Technology. 3rd ed. New York: Van Nostrand Reinhold, pp. 277, 231.

22. Meadow, C.T. (1992). Text Information Retrieval Systems. San Diego, Calif.: Academic Press.

23. "NLM Plans June Release of Grateful Med Update" (1992). National Library of Medicine News 47 (3-4): 1-3.

24. "Panel on Information Technology and the Conduct of Research" (1989). Information Technology and the Conduct of Research: The User's View. Washington, D.C.: National Academy Press, pp. 1, 3.

25. Percival, J.M. (1990). "Graphic Interfaces and Online Information." Online Review 14 (1): 15-20.

26. Pollitt, A.S. (1990). "Intelligent Interfaces to Online Databases." Expert Systems for Information Management 3 (1): 49-69.

27. Prabha, C. (1991). "The Large Retrieval Phenomenon." Advances in Library Automation and Networking 4: 55-92.

28. Regazzi, J.J. (1990). "Designing the Ei Reference Desk." Proceedings of the National Online Meeting 11: 345-347.

29. Seiken, J. (1992). "Menu-driven Interfaces Simplify Online Database Searching." The Scientist 6 (8): 18-19.

30. Snow, B., A.L. Corbett, and F.A. Brahmi (1986). "Grateful Med: NLM's Front End Software." Database 9 (6): 94-99.

31. Sormunen, E., R. Nurminen, M. Hämäläinen, and M. Hiirsalmi (1987). Knowledge-based Intermediary System for Information Retrieval: Requirements Specification. Research notes no. 794. Espoo, Finland: Technical Research Centre of Finland.

32. Still, J. (1991). "Using EasyNet in Libraries." Online 15 (5): 34-37.

33. van Brakel, P.A. (1988). "Evaluating an Intelligent Gateway: A Methodology." South African Journal of Library and Information Science 56 (4): 277-290.

34. Vickery, A. (1989). "Intelligent Interfaces for Online Searching." Aslib Information 17 (11/12): 271-274.

35. Vickery, B., and A. Vickery (1990). "Intelligence and Information Systems." Journal of Information Science 16: 65-70.

36. Vickery, B.C. (1992). "Intelligent Interfaces to Online Databases." In F.W. Lancaster and L.C. Smith, eds. Artificial Intelligence and Expert Systems: Will They Change the Library? Urbana, Ill.: Graduate School of Library and Information Science, University of Illinois.

37. Warr, W.A., and M.P. Wilkins (1990). "Graphics Front Ends for Chemical Searching and a Look at ChemTalk Plus." Online 14 (3): 50-54.

38. Watt, W.C. (1968). "Habitability." American Documentation 19: 338-351.

39. Weiskel, T.C. (1991). "Environmental Information Resources and Electronic Research Systems (ERSs): Eco-Link as an Example of Future Tools." Library Hi Tech 9 (2): 7-19.

40. Williams, M.E. (1986). "Transparent Information Systems Through Gateways, Front Ends, Intermediaries, and Interfaces." Journal of the American Society for Information Science 37 (4): 204-214.

Multimedia technology: A design challenge


Abstract
1. Introduction
2. What are communication media and how do they differ?
3. Are human beings aware of the capabilities of different media?
4. What can the technology do now?
5. User centred or design centred?
6. The PROMISE multimedia interface project
7. How does one design a multimedia interface?
8. Some initial guidelines
9. Conclusions
10. Acknowledgements
References


James L. Alty

Abstract

The term multimedia used in this paper refers to different presentation media such as sound, graphics, text, etc. The possible benefits of a multimedia approach are discussed and some examples are given of how different media affect knowledge comprehension in human beings. An indication is given of what multimedia facilities are currently available. The PROMISE multimedia interface project is described, and in particular a possible approach to the formulation of a multimedia design methodology is proposed. Some initial guidelines on multimedia interface design are given. Although some of the ideas expressed here come from work in process control, they are likely to be generalizable to wider domains.

1. Introduction

The term "multimedia" has two possible meanings. Firstly, the "media" can refer to storage media such as WORMs, CD-ROMs, and disks. Secondly, it can refer to the presentation of information using different media such as sound, graphics, text, etc. [10]. In this paper the second meaning applies throughout.

The idea of using multiple media to improve communication between humans and computers is not new. A paper of 1945, "As We May Think" [7], suggested a multiple media approach that was later reassessed [8]. Some well-known early experiments with multimedia were carried out at the MIT "Media Lab" in 1977 [6], and Maekawa and Sakamura [14] also described an early multimedia machine that was being implemented at the University of Tokyo. The system had an optical disc, a 100 Mbyte disc, a high-resolution graphics display, a TV camera, and sound input/output. These collections of multimedia devices were very expensive, and it is only recently that a host of multimedia tool kits have entered the market-place at a relatively low cost.

But why should we be interested in multimedia interfaces? Even a cursory study of human beings communicating information with each other shows the importance of multiple media in the communication process. Human beings often use at least two sensory channels (visual and auditory) but frequently use a third - touch - as well, and within these communication channels a rich variety of media are employed. When one artificially reduces the richness of the set of communication media being used (for example, by taking a tape recording of a meeting), the reduction in communication power is obvious. Since human beings seem to communicate more effectively with each other than with computers, employing additional media in human-computer interaction is a sensible idea, likely to improve the communication process. The early stumbling blocks that prevented the implementation of multimedia facilities on computers were lack of processing power and storage. These problems have now largely been overcome, and multimedia tool sets are thus becoming available on both personal computers and high-performance workstations.

The technical problems of providing a variety of media with acceptable performance and at a reasonable cost are not the only stumbling blocks that may prevent the exploitation of multimedia facilities. Another key problem is that of devising a methodology to aid multimedia interface design. Being able to provide multimedia interfaces is not enough. One must also be able to know when to use which media and in what combination to solve a particular interface problem.

I will first discuss multimedia facilities generally, indicating what media are and why there may be a problem associated with the design of multimedia interfaces. I will then examine the PROMISE project (a large collaborative project), which is particularly concerned with the development of a methodology of multimedia design. Although many of the ideas in PROMISE come from work in process control, they are likely to be generalizable to wider domains.

2. What are communication media and how do they differ?

Although we often think initially about the physical aspects of communication media (screens, colour, sounds, etc.), the main attribute of a medium is that it provides a language for communication. This language will involve a syntax, semantics, and pragmatics, with the physical aspects of the medium providing constraints on the syntactic possibilities. Traditional computer media have usually been restricted to one sensory channel (e.g. text [visual], graphics [visual], sound [auditory]), but even within one sensory channel, a rich variety of media can be supported. The visual channel, for example, can support graphics, tables, diagrams, pictures, maps, graphical animation, and 3-D graphics, all of which communicate information in different ways. The auditory channel can support speech, verbal gestures, realistic sounds, artificial sounds, and music. The tactile channel can involve vibration and sensory input that might result from, for example, a data glove.
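The taxonomy above can be sketched as a simple mapping from sensory channel to the media it supports. This is purely illustrative - the channel and media names are taken from the lists in the text, and the helper function is a hypothetical convenience, not part of any existing system:

```python
# Illustrative sketch of the media taxonomy discussed above:
# each sensory channel supports a variety of distinct media.
MEDIA_BY_CHANNEL = {
    "visual": [
        "text", "graphics", "tables", "diagrams", "pictures",
        "maps", "graphical animation", "3-D graphics",
    ],
    "auditory": [
        "speech", "verbal gestures", "realistic sounds",
        "artificial sounds", "music",
    ],
    "tactile": [
        "vibration", "data-glove input",
    ],
}

def channels_for(medium):
    """Return the sensory channels that can carry a given medium."""
    return [ch for ch, media in MEDIA_BY_CHANNEL.items() if medium in media]

print(channels_for("speech"))  # ['auditory']
```

A mapping like this makes the point of the paragraph concrete: a "multimedia" interface is not simply multi-channel, since several quite different media can share a single sensory channel.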

Some information already exists to guide us on the relative claims for different media for effective information transfer. Auditory media, such as radio, make dialogue salient. For example, children given a story in audio only, or in a visual + audio combination with the same soundtrack, recall dialogue better when it is given as audio only. Television presentation seems to be better for action information, which shows improved recall when presented via television [5]. Whilst audio information seems to stimulate imagination, spatial visualization is better handled visually, as might be expected. Other studies have indicated that diagrams are better at conveying ideas, whereas text is better for detail [22]. However, the visual channel does not always dominate. Walker and Scott [24] found that human beings judge a light as being of shorter duration than a tone of identical length, and when the two are presented together, the auditory channel dominates. Pezdek [21] carried out experiments to determine whether the visual channel dominated the auditory channel in the comprehension of information on television. Whilst there was evidence of visual domination, the presence of the auditory channel actually improved comprehension of the visual material, and vice versa.

Text is often better for communicating complex information to experts, whilst pictures are better for exploratory learning. In particular, visual representations are excellent for synthesis. These ideas are shown in figure 1, where the usability of different channels (visual and auditory) is contrasted with the previous knowledge and experience in the knowledge area. For some tasks, text can be a very effective medium of communication. Each different medium of communication, therefore, has properties that will enhance or restrict its capability for transmitting particular types of knowledge.

At a higher level, we can form new media by combining existing ones. When two media are combined, new syntactic and semantic units become possible. These higher level media often use more than one sensory channel. Examples would include movie films and animated diagrams with verbal talk-over. It might be thought that a more complex (or rich) medium would always be preferable to a simpler one, but this is not always so. For example, experiments with televised and radio weather forecasts have shown that the auditory medium is superior in many cases. Many people also remember radio plays as more "vivid" than television plays. As Kosslyn [13] states,

Multimedia technology can deliver information like a fire hose delivers water. Just as drinking from a fire hose is not an efficient way to quench one's thirst, high powered multimedia presentations can overload the senses and fail to communicate information effectively. Multimedia relies on the essential truth in the Chinese proverb that tells us a picture "is worth more than a thousand words." Unfortunately some pictures do not help to control the flow of information, and actually make it worse.

Figure 1

Work on the effectiveness of different media for communicating information has been carried out over many years, but interpretation of the results is not straightforward. Washburne [26] found, for example, that graphs were easier to interpret than tables, whilst Vernon [23] found the opposite. These apparent divergences are not necessarily surprising. Later work [11, 9] has shown that the usefulness of the different display formats is highly dependent upon the tasks being performed: effectiveness depends upon the nature of the information being sought by the reader.

3. Are human beings aware of the capabilities of different media?

Human beings in general, and computer interface designers in particular, are not really aware of the capabilities and limitations of different media. Early computer output was restricted to text. This is a medium that most people understand, since it has been used for hundreds of years, so we have an intuitive understanding of how to communicate with text, at least at a fairly basic level. When graphics became possible on computers, the situation did not materially change, because graphics had also been used by human beings to communicate (often in conjunction with text) for many years. When colour became available, however, the situation deteriorated. Colour was used in a totally indiscriminate manner in early interface design, and it is not surprising that the quality of interfaces suffered: most people cannot choose their own wallpaper or clothes with a real colour sense. Only much later did designers realize that sparing use of colour is the key to good interface design.

It is quite interesting to reflect on why human beings (in the Western world in particular) do not have good colour sense. Mary White [25] makes the interesting point that "imagery" (which in her definition encompasses paintings, sculpture, and television as well as computer graphics and video) has been used as a primary learning tool for centuries. The invention of the printing press changed all that, and the primary mechanism for learning became the printed word. Ironically, towards the end of the twentieth century we have moved back to a world where imagery is important again, and it has replaced the word as a major political communication medium. There is, however, little research on imagery and its effects.

We are now about to enter the era of multimedia interface design, and I expect the experience with colour to be repeated, writ large. The quality of home videos testifies to the level of most people's fitness for designing interfaces using moving pictures and voice! Since human beings are only vaguely aware of the differences between media, we need a methodology that helps to answer the question, "What medium when?" to achieve a particular interface goal. I will return to the issue of the methodology later.
