Summary: Hypertext has a surprisingly rich history compared to most phenomena in the personal computer industry, especially considering that most people had not heard of it until a few years ago. I have been to talks at major conferences where the speakers were ignorant of any hypertext developments preceding the introduction of the WWW. Table 3.1 gives an overview of the history of hypertext; the major events are discussed in more detail in this chapter.
This is chapter 3 from Jakob Nielsen's book Multimedia and Hypertext: The Internet and Beyond, Morgan Kaufmann Publishers, 1995. (For full literature references, please see the bibliography in the book.)
Vannevar Bush (1890–1974) is normally considered the "grandfather" of hypertext, since he proposed a system we would now describe as a hypertext system as long ago as 1945. This system, the Memex ("memory extender"), was never implemented, however; it was described only in theory in Bush's papers.
Bush actually developed some of his ideas for the Memex in 1932 and 1933 and finally wrote a draft paper on it in 1939. For various reasons [Nyce and Kahn 1989, 1991] this manuscript was not published until 1945, when it appeared in the Atlantic Monthly under the title "As We May Think."
Bush described the Memex as "a sort of mechanized private file and library" and as "a device in which an individual stores his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility." The Memex would store this information on microfilm, which would be kept in the user's desk. This desk was intended to have several microfilm projection positions to enable the user to compare different microfilms, in a manner very similar to the windows that became popular on personal computers more than forty years later.
The Memex would have a scanner to enable the user to input new material, and it would also allow the user to make handwritten marginal notes and comments. But Bush envisaged that
most of the Memex contents are purchased on microfilm ready for insertion. Books of all sorts, pictures, current periodicals, newspapers, are thus obtained and dropped into place. Business correspondence takes the same path.
1945 Vannevar Bush proposes the Memex
1965 Ted Nelson coins the word "hypertext" (later elaborated in his pioneering book Literary Machines)
1967 The Hypertext Editing System and FRESS, Brown University, Andy van Dam
1968 Doug Engelbart demos the NLS system at FJCC
1975 ZOG (now KMS), CMU
1978 Aspen Movie Map, first hypermedia videodisk, Andy Lippman, MIT Architecture Machine Group (now the Media Lab)
1984 Filevision from Telos; limited hypermedia database widely available for the Macintosh
1985 Symbolics Document Examiner, Janet Walker
1985 Intermedia, Brown University, Norman Meyrowitz
1986 OWL introduces Guide, first widely available hypertext
1987 Apple introduces HyperCard, Bill Atkinson
1987 Hypertext'87, first major conference on hypertext
1991 World Wide Web at CERN becomes first global hypertext, Tim Berners-Lee
1992 New York Times Book Review cover story on hypertext fiction
1993 Mosaic anointed Internet killer app, National Center for Supercomputing Applications
1993 A Hard Day's Night becomes the first full-length feature film in hypermedia (originally for Macintosh; now also available for Windows)
1993 Hypermedia encyclopedias sell more copies than print encyclopedias
1995 Netscape Corp. gains market value of almost $3B on first day of stock market trading (1998: AOL buys Netscape for $4B)
Actually, we have not yet reached the stage of hypertext development at which a significant amount of preprocessed information is for sale, ready to be integrated with a user's existing hypertext structure.
The main reason Vannevar Bush developed his proposal for the Memex was that he was worried about the explosion of scientific information which made it impossible even for specialists to follow developments in a discipline. Of course, this situation is much worse now, but even in 1945 Bush discussed the need to allow people to find information more easily than was possible on paper. After having described his various ideas for microfilm and projection equipment, he stated that
All this is conventional, except for the projection forward of present-day mechanisms and gadgetry. It affords an immediate step, however, to associative indexing, the basic idea of which is a provision whereby any item may be caused at will to select immediately and automatically another. This is the essential feature of the memex. The process of tying two items together is the important thing.
Hypertext, in other words!
In addition to the establishment of individual links, Bush wanted the Memex to support the building of trails through the material in the form of a set of links that would combine information of relevance for a specific perspective on a specific topic. He even forecast the establishment of a new profession of "trail blazers," "who find delight in the task of establishing useful trails through the enormous mass of the common record." In current terminology, these trail blazers would be people who add value to published collections of text and other information by providing a web of hypertext links to supplement the basic information. But since we do not even have a market for basic hypertexts yet, we unfortunately have to do without professional trail blazers. Amateur trail blazers have come into existence in recent years in the form of people who list WWW sites they find interesting on their home page.
The building of trails would also be an activity for the ordinary Memex user, and using his microfilm ideas, Bush assumed that such a user might want to photograph a whole trail for friends to put in their Memexes. Again we should note that current technology is not up to Bush's vision, since it is almost impossible to transfer selected subsets of a hypertext structure to another hypertext, especially if the two hypertexts are based on different systems.
Vannevar Bush was a famous scientist in his day and was the science advisor to President Roosevelt during the Second World War, when science-based issues like inventing nuclear weapons were of great importance. After "As We May Think" ran in the Atlantic Monthly, it caused considerable discussion, and both Time and Life ran stories on the Memex. Life even had an artist draw up illustrations of how the Memex would look and a scenario of its projection positions as the user was completing a link. Doug Engelbart, who later became a pioneer in the development of interactive computing and invented the mouse, got part of his inspiration from reading Bush's article while waiting for a ship home from the Philippines in 1945.
In spite of all this early interest surrounding the Memex, it was never built. As hinted above, our current computer technology is still not able to support Bush's vision in its entirety [Meyrowitz 1989b]. We do have computers with most of the Memex functionality, but they are based on a completely different technology from the microfilm discussed by Bush.
It is interesting to recall that Bush was one of the pioneering scientists in the development of computer hardware and was famous for such inventions as the MIT Differential Analyzer in 1931. Alan Kay from Apple has suggested that the areas about which we know most may be those where we are most inaccurate in predicting the future, since we see all the problems inherent in them. Therefore Bush could gladly dream about impossible advances in microfilm technology, but he would have been reluctant to publish an article about personal computing since he "knew" that computers were huge things costing millions of dollars.
After Bush's article from 1945, nothing much happened in the hypertext field for twenty years. People were busy improving computers to the point where it would be feasible to use them interactively, but they were so expensive that most funding agencies viewed as completely irresponsible the suggestion that computer resources should be wasted on nonnumeric tasks such as text processing.
In spite of this attitude, Doug Engelbart started work in 1962 on his Augment project, developing computer tools to augment human capabilities and productivity. This project was the first major work in areas like office automation and text processing; in fact the entire project was much more ambitious and broad in scope than the productivity tools we currently enjoy in the professional work environment. The project was conducted at SRI (Stanford Research Institute) with a staff that grew to 45 people.
One part of the Augment project was NLS (for oN-Line System), which had several hypertext features even though it was not developed as a hypertext system. (The reason for the strange acronym was to distinguish the name from that of the oFf-Line System, FLS.) During the Augment project, the researchers stored all their papers, reports, and memos in a shared "journal" facility that enabled them to include cross-references to other work in their own writings. This journal grew to over 100,000 items and is still unique as a hypertext structure for support of real work over an extended time.
In 1968 Engelbart gave a demo of NLS at a special session of the 1968 Fall Joint Computer Conference. Giving this first public demo of many of the basic ideas in interactive computing was something of a gamble for the group. Engelbart had to use much of his grant money to obtain special video projectors, run microwave transmission lines between his lab and the conference center, and get other kinds of specialized hardware built, and he would have been in big trouble if the demo had failed. But it worked, and in retrospect spending the money was the right decision; many people have said that it was that demo that got them fired up about inventing interactive computing.
In spite of the successful demo, the government dropped its research support of Engelbart in 1975, at a time when he had more or less invented half the concepts of modern computing. (After the Augment project was as good as terminated, several people from Engelbart's staff went on to Xerox PARC and helped invent the second half of the concepts of modern computing.) Augment continued as an office automation service but was not really developed further. Engelbart himself is still pushing his original augmentation ideas and a few years ago started the "Bootstrap Project," located at Stanford University.
The actual word "hypertext" was coined by Ted Nelson in 1965. Nelson was an early hypertext pioneer with his Xanadu system, which he has been developing ever since. Parts of Xanadu do work and have been a product from the Xanadu Operating Company since 1990.
The Xanadu vision has never been implemented, however, and probably never will be (at least not in the foreseeable future). The basic Xanadu idea is that of a repository for everything that anybody has ever written, and thereby of a truly universal hypertext. Nelson views hypertext as a literary medium; he believes that "everything is deeply intertwingled" and therefore has to be online together. Nelson's main book on hypertext is actually entitled Literary Machines. Robert Glushko [1989b], in contrast, believes that multidocument hypertext is called for only in the comparatively few cases where users have explicit tasks that require the combination of information.
If Nelson's vision of having all the world's literature in a single hypertext system is to be fulfilled, it will obviously be impossible to rely on local storage of information in the user's own personal computer. Indeed, Nelson's Xanadu design is based on a combination of back end and local databases, which would enable fast response for most hypertext access since the information used the most by individual users would still be stored on their local computers. Whenever the user activates a link to more exotic information, the front end computer transparently links to the back end repository through the network and retrieves the information.
In Xanadu it is possible to address any substring of any document from any other document. In combination with the distributed storage of information, this capability means that Xanadu includes a scheme for giving a unique address to every single byte in the world if there should be a need for it.
Furthermore, the full Xanadu system will never delete any text, not even when new versions are added to the system, because other readers may have added links to the previous version of the text. This permanent record of all versions makes it possible for other documents to link either with the current version of a document or with a specific version. Frequently one will want to link with the most up-to-date material, as when referring to census statistics or weather forecasts, but in more polemic documents one may want to ensure a reference to a specific version of a position one is arguing against.
The reader of a document linked to a specific version of another document will always have the option of asking the system to display the most current version. This "temporal scrolling" can also be used to show how documents have looked in previous versions and can be useful, for instance for version management of software development.
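The append-only versioning described above can be sketched in a few lines. The class and method names here are invented, and real Xanadu would of course not hold everything in memory; the point is only the two link styles: a frozen link that pins a specific version, and a current-version link that always resolves to the latest text.

```python
# Minimal sketch of Xanadu-style append-only versioning (invented API).
# Old versions are never deleted, because other documents may link to them.

class VersionedDoc:
    def __init__(self):
        self.versions = []              # every version is kept forever

    def add_version(self, text):
        self.versions.append(text)
        return len(self.versions) - 1   # version number, usable in frozen links

    def get(self, version=None):
        """version=None acts as a current-version link ("temporal scrolling"
        forward); a specific number pins one frozen version."""
        return self.versions[-1 if version is None else version]

doc = VersionedDoc()
v0 = doc.add_version("Census: 150M people")
doc.add_version("Census: 250M people")

print(doc.get(v0))   # frozen link → "Census: 150M people"
print(doc.get())     # current-version link → "Census: 250M people"
```

A polemic document would use the frozen link (`v0`) so the position it argues against cannot change out from under it, while a reference to census statistics would use the current-version link.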
Nelson does realize that this scheme means that billions of new bytes will have to be added to Xanadu every day without the hope of freeing storage by deleting old documents. His comment is, "So what...?" and a reference to the fact that the current load on the telephone system would have been impossible under the traditional technology of human operators connecting every call. The history of computer technology until now does give reason for some optimism with respect to being able to support the Xanadu vision some time in the future.
When everything is online in a single system and when everybody may link with everybody else, there will be tremendous copyright problems if the traditional view of copyright is maintained [Samuelson and Glushko 1991]. Nelson's answer is to abolish the traditional copyright to the extent that information placed in Xanadu will always be available to everybody. This principle may be feasible; the system would still keep track of original authorship and provide royalties to the original author based on the number of bytes seen by each reader.
Publishing an anthology would be a simple matter of creating a new document with some explanatory and combining text and with links to the original documents by other authors who would not need to be contacted for permission. Because of the royalty, everybody would be financially motivated to allow other people to link with their work, since it will be through the links that readers discover material worth reading. Even so, some authors might fear being quoted out of context or having their work misrepresented by other authors. This problem is taken care of in theory in Xanadu because the reader always has the option of asking for the complete text of any document being quoted by a link. In practice, many readers will probably not bother looking at the full text of linked documents, so one might need a mechanism for allowing authors to flag links to their work with an attribute indicating that they believe that the link is misleading.
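The royalty mechanism can be sketched as follows. The author names, accounting units, and helper function are invented for illustration; the essential idea from the text is only that the original author is credited in proportion to the bytes each reader actually sees, so quoting another document by link requires no advance permission.

```python
# Hedged sketch of Nelson's per-byte royalty idea (names and units invented).
# Here each byte seen credits the original author one "microcent".

royalties = {}            # author -> accrued microcents

def read_span(author, text, start, length):
    """Display a span and credit its original author per byte seen."""
    seen = text[start:start + length]
    royalties[author] = royalties.get(author, 0) + len(seen)
    return seen

# An anthology that links to 15 bytes of Bush's text credits Bush directly:
read_span("bush", "As We May Think, by Vannevar Bush.", 0, 15)
print(royalties["bush"])  # → 15
```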
During some of his early work on Xanadu, Ted Nelson was associated with Brown University (Providence, RI). Since then he has mostly been an independent visionary and author, though he was with Autodesk, Inc. for some time.
Hypertext Editing System (1967) and FRESS (1968)
Even though Xanadu was not even partly implemented until recently, hypertext systems were built at Brown University in the 1960s under the leadership of Andries van Dam. The Hypertext Editing System built in 1967 was the world's first working hypertext system. It ran in a 128K memory partition on a small IBM/360 mainframe and was funded by an IBM research contract.
After the Hypertext Editing System was finished as a research project at Brown University, IBM sold it to the Houston Manned Spacecraft Center, where it was actually used to produce documentation for the Apollo missions.
The second hypertext system was FRESS (File Retrieval and Editing System), which was done at Brown University in 1968 as a follow-up to the Hypertext Editing System and was also implemented on an IBM mainframe. Because of this extremely stable platform, it was actually possible to run a demonstration of this code, more than twenty years old, at the 1989 ACM Hypertext conference.
Both these early hypertext systems had the basic hypertext functionality of linking and jumping to other documents, but most of their user interface was text-based and required indirect user specification of the jumps.
Brown University has been a major player in the hypertext field ever since, with its most prominent effort being the development of the Intermedia system (discussed further later in this chapter).
Aspen Movie Map (1978)
Probably the first hypermedia system was the Aspen Movie Map developed by Andrew Lippman and colleagues at the MIT Architecture Machine Group (which has now merged with other MIT groups to form the Media Lab). Aspen was a surrogate travel application that allowed the user to take a simulated "drive" through the city of Aspen on a computer screen.
The Aspen system was implemented with a set of videodisks containing photographs of all the streets of the city of Aspen, Colorado. Filming was done by mounting four cameras aimed at 90° intervals on a truck that was driven through all the city streets, each camera taking a frame every ten feet (three meters). The hypermedia aspects of the system come from accessing these pictures not as a traditional database ("show me 149 Main Street") but as a linked set of information.
Each photograph was linked to the other relevant photographs a person would see by continuing straight ahead, backing up, or moving to the left or to the right. The user navigated the information space by using a joystick to indicate the desired direction of movement, and the system retrieved the relevant next picture. The resulting feeling was that of driving through the city and being able to turn at will at any intersection. The videodisk player could in theory display the photos as quickly as one frame per 33 milliseconds, which would correspond to driving through the streets at 200 mph (330 km/h). To achieve a better simulation of driving, the actual system was slowed down to display successive photos at a speed depending on the user's wishes, but no faster than ten frames/sec., corresponding to a speed of 68 mph (110 km/h).
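The speed figures follow directly from the frame spacing: with one photograph every 10 feet, the apparent driving speed is just the frame rate times that distance. A few lines of arithmetic confirm the numbers quoted above:

```python
# Checking the Aspen Movie Map speed arithmetic: frames were shot every
# 10 feet, and the videodisk could show up to one frame per 33 ms.

FEET_PER_FRAME = 10
MPH_PER_FPS = 3600 / 5280          # converts feet/sec to miles/hour

max_rate = 1 / 0.033               # hardware limit: ~30 frames/sec
print(max_rate * FEET_PER_FRAME * MPH_PER_FPS)    # ≈ 207 mph ("200 mph")

capped_rate = 10                   # the system's cap: 10 frames/sec
print(capped_rate * FEET_PER_FRAME * MPH_PER_FPS) # ≈ 68 mph
```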
It was also possible for the user to stop in front of a building and "walk" inside, since many of the buildings in Aspen had been filmed for the videodisk. As a final control, the user could select the time of year for the drive by a "season knob," since the entire town was recorded in both fall and winter. This concept is somewhat related to the "temporal scrolling" in the Xanadu system described above. But the Aspen season knob is probably easier to understand for users because it relates directly to a well-known concept from the real world even though it provides a functionality that would be impossible in the real world. Figures 3.1 and 3.2 show the use of a similar feature in the more recent Ecodisc system. The Ecodisc is an instructional hypertext for learning about ecology by allowing the user to move about a lake and observe its varied habitats [Nielsen 1990e].
The Aspen system used two monitors for its interface, but in a more natural way than the traditional two-screen solution discussed in Chapter 7. One monitor was a regular vertical screen and showed the street images filmed from the truck. This provided users with an immersive view of the city and made them feel as if they had entered into the environment. The second screen was horizontal and placed in front of the immersive screen. Used to show a street map, it provided the user with an overview of the environment. The user could point to a spot on the map and jump directly to it instead of having to navigate through the streets. The overview map also provided "landmarks" by highlighting the two main streets of the city. This two-screen solution made it easy for users to understand their position relative to these two main streets.
One reason for the availability of funding to build surrogate travel applications in the late 1970s was the successful liberation of hostages from the Entebbe airport by Israeli troops. Even though these soldiers had never been to Uganda before, they were able to carry out their mission extremely well because they had practiced in a full-scale mockup of the airport that had been built in Israel. Computerized surrogate travel systems might make it possible to train for similar missions in the future without actually having to build entire mockup cities.
It is also possible to imagine that surrogate travel systems like Aspen might be used on a routine basis in the future not just by commando soldiers training for a mission but also by tourists planning their vacations. In the near future, however, the main use of surrogate travel will probably be for educational use; the Palenque system described further in Chapter 4 is a good example.
The Aspen system itself was not really an "application" in the sense that it actually helped anybody accomplish anything. But it was far ahead of its time and of great historical significance in showing the way for future applications. Even now, almost twenty years after the Aspen project was completed, it still stands as one of the more sophisticated hypermedia systems ever built.
As a follow-up to Aspen, the MIT Architecture Machine Group built a more practically oriented system using hypermedia technology to integrate video and computer data. This project was called the Movie Manual and involved car and bicycle repair manuals. It is discussed further in Chapter 4.
The Movie Manual could use either a regular touch-sensitive computer display or it could project its image on an entire wall in a media room. It had a picture of a car as the table of contents and allowed the user to point to the area that needed repair. The Movie Manual would then show its instructions in a mixture of video, annotated images, and ordinary text, allowing the user to customize the screen layout by making the video window larger or smaller. The user could stop the video or play it faster, slower, or backward.
Figure 3.1. View of an area at a lake during the summer from the Ecodisc system. The user can "turn" to look in another direction by clicking the radio buttons in the inner part of the compass rose and can "move" to another location by clicking the arrows in the outer part of the compass rose. Clicking on the snow crystal activates movement in time to the winter view in Figure 3.2, as an example of temporal navigation. Copyright © 1990 by ESM, Ltd., reprinted by permission.
Figure 3.2. Winter view of the same part of the Ecodisc lake as that shown in Figure 3.1. Unfortunately the two photographs are not perfectly aligned, indicating the need for extreme precision when recording the camera positions and angles in the production of hypermedia with multiple views of the same scenes. Copyright © 1990 by ESM, Ltd., reprinted by permission.
KMS probably has the distinction of being the oldest among the currently popular hypertext systems since it is a direct descendant of the ZOG research system developed at Carnegie Mellon University with some development as early as 1972 and as a full-scale project from 1975 [Robertson et al. 1981]. The word ZOG does not mean anything but was chosen because it "is short, easily pronounced and easily remembered." At first, ZOG ran on mainframe computers; it was then moved to PERQ workstations, 28 of which were installed on the aircraft carrier USS Carl Vinson in 1983 for a field test of such applications as a maintenance manual for weapons elevators.
KMS is an abbreviation for Knowledge Management System and has been a commercial product since 1983. It runs on Unix workstations and has been used for a large number of applications. KMS is designed to manage fairly large hypertexts with many tens of thousands of nodes and has been designed from the start to work across local area networks.
KMS has a very simple data structure based on a single type of node called the frame. A frame can take over the entire workstation screen, but normally the screen is split into two frames, each of which is about as big as a letter-sized page of paper. Users cannot mix small and large nodes and cannot have more than two nodes on the screen at the same time. This might seem limiting at first but proponents of KMS claim that it is much better to use the hypertext navigation mechanism to change the contents of the display than to have to use window management operations to find the desired information among many overlapping windows.
KMS has been optimized for speed of navigation, so the destination frame will normally be displayed "instantaneously" as the user clicks the mouse on an anchor. The time to display a new frame is actually about a half-second, and the designers of KMS claim that there is no real benefit to being faster than that. They tried an experimental system to change the display in 0.05 seconds, but that was so fast that users had trouble noticing whether or not the screen had changed.
If an item on the screen is not linked to another node, then clicking on it will generate an empty frame, making node and link creation seem like a special form of navigation to the user. It is also possible for a click on an item to run a small program written in the special KMS action language. This language is not quite as general as the integrated InterLisp in NoteCards, but it still allows the user to customize KMS for many special applications. See for example the discussion in Chapter 4 of the use of KMS to support the research of a biologist.
KMS does not provide an overview diagram but instead relies on fast navigation and a hierarchical structure of the nodes. Links across the hierarchy are prefixed with an "@" to let users know that they are moving to another part of the information space. Two additional facilities to help users navigate are the landmark status of a special "home" frame, which is directly accessible from any location, and the special ease and global availability of backtracking to the previous node by single-clicking the mouse as long as it points to empty space on the screen.
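The KMS navigation model described above can be sketched as a simple hierarchy-plus-backtracking scheme. The class, frame names, and path syntax below are invented for illustration; what the sketch preserves from the description is the "@" flag warning that a link leaves the current part of the hierarchy, and the single-click backtrack to the previous frame.

```python
# Sketch of KMS-style navigation (frame names and API invented):
# hierarchical frames, "@"-flagged cross-hierarchy links, and
# one-click backtracking to the previous frame.

class Browser:
    def __init__(self, start):
        self.current = start
        self.history = []           # stack of previously visited frames

    def follow(self, target):
        """Follow a link; an "@" prefix marks a cross-hierarchy jump."""
        self.history.append(self.current)
        self.current = target.lstrip("@")

    def backtrack(self):
        """Single click on empty space returns to the previous frame."""
        if self.history:
            self.current = self.history.pop()

b = Browser("home")
b.follow("manual/elevators")
b.follow("@admin/schedule")   # the "@" warns: leaving this subtree
b.backtrack()
print(b.current)              # → "manual/elevators"
```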
Hyperties was started as a research project by Ben Shneiderman [Shneiderman 1987b] at the University of Maryland around 1983. It was originally called TIES as an abbreviation for The Electronic Encyclopedia System, but since that name was trademarked by somebody else, the name was changed to Hyperties to indicate the use of hypertext concepts in the system.
Since 1987 Hyperties has been available as a commercial product on standard PCs from Cognetics Corporation. Research continues at the University of Maryland, where a workstation version has been implemented on Sun workstations.
One of the interesting aspects of the commercial version of Hyperties is that it works with the plain text screen shown in Figure 3.3. It is thus suited for DOS users. Hyperties also works with the main graphics formats on PCs and PS/2s and can display color images if the screen can handle them.
The interaction techniques in Hyperties are extremely simple and allow the interface to be operated without a mouse. Some of the text on the screen is highlighted, and the user can activate those anchors by clicking on them with a mouse, by touching them if a touch screen is available, or simply by using the arrow keys to move the cursor until it is over the text and then hitting ENTER. Hyperties uses the arrow keys in a special manner called "jump keys," which causes the cursor to jump in a single step directly to the next active anchor in the direction of the arrow. This way of using the arrow keys has been optimized for hypertext, where there are normally only a few areas on the screen that the user can point to, and the use of keys has been measured to be slightly faster than the mouse (see Chapter 6).
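The jump-key behavior amounts to a small selection algorithm: given the cursor position and an arrow direction, pick the nearest anchor in that direction and move straight to it. The screen layout and anchor coordinates below are invented; this is only a sketch of the idea, not the actual Hyperties code.

```python
# Sketch of Hyperties-style "jump keys" (anchor layout invented):
# an arrow press jumps the cursor directly to the nearest active
# anchor in that direction, skipping the plain text in between.

anchors = [(2, 10), (2, 40), (5, 12), (8, 30)]   # (row, col) of link anchors

def jump(current, direction):
    row, col = current
    if direction == "down":
        below = [a for a in anchors if a[0] > row]
        # nearest row first, then smallest horizontal distance
        return min(below, default=current, key=lambda a: (a[0], abs(a[1] - col)))
    if direction == "right":
        same_row = [a for a in anchors if a[0] == row and a[1] > col]
        return min(same_row, default=current, key=lambda a: a[1])
    return current

print(jump((2, 10), "right"))   # → (2, 40), next anchor on the same row
print(jump((2, 40), "down"))    # → (5, 12), nearest anchor below
```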
In the example in Figure 3.3, the user is activating the string "Xerox PARC," which is indicated by inverse video. In the color version of Hyperties it is possible for the user to edit a preference file to determine other types of feedback for selections such as the use of contrasting color.
Figure 3.3. An example of a Hyperties screen as it typically looks on a text-only screen on a plain vanilla DOS machine.
Instead of taking the user directly to the destination node as almost all other hypertext systems do, Hyperties at first lets the user stay at the same navigational location and displays only a small "definition" at the bottom of the screen. This definition provides the user with a prospective view of what would happen if the link were indeed followed to its destination and it allows the user to see the information in the context of the anchor point. In many cases just seeing the definition is enough. Otherwise the user can of course choose to complete the link.
A Hyperties link points to an entire "article," which may consist of several pages. Users following the link will always be taken to the first page of the article and will have to page through it themselves. This set-up is in contrast to the KMS model, where a link always points to a single page, and to the Intermedia model where a link points to a specific text string within an article. The advantage of the Hyperties model is that authors do not need to specify destinations very precisely. They just indicate the name of the article they want to link to, and the authoring system completes the link.
The same text phrase will always point to the same article in Hyperties, which again simplifies the authoring interface but makes the system less flexible. Many applications call for having different destinations, depending on the context or perhaps on the system's model of the user's level of expertise.
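The Hyperties link model can be sketched in a few lines. The article contents and helper function are invented; the two properties taken from the text are that a link targets a whole article (readers always land on the first page), and that the same phrase always resolves to the same article.

```python
# Sketch of the Hyperties link model (article contents invented):
# a link names a whole article, and readers always land on page 1.

articles = {
    "Xerox PARC": ["page 1 text...", "page 2 text..."],
}

def follow_link(phrase):
    """Every occurrence of a phrase leads to the same article's first page;
    the author only names the destination, never a page or string within it."""
    return articles[phrase][0]

print(follow_link("Xerox PARC"))   # → "page 1 text..."
```

Contrast this with KMS (a link targets a single page) and Intermedia (a link targets a specific text string): the coarser Hyperties granularity simplifies authoring at the cost of flexibility.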
Many of the design choices in Hyperties follow from the original emphasis on applications like museum information systems. These applications need a very simple reading interface without advanced facilities like overview diagrams (which cannot be supported on plain DOS machines anyway). Furthermore, the writers of the hypertexts were museum curators and historians, most of whom were not strongly motivated to learn complex high-technology solutions, so the similarity of the Hyperties authoring facilities to traditional text processing was well suited for the initial users. Now Hyperties is being used for a much wider spectrum of applications.
The commercial version of Hyperties uses a full-screen user interface as shown in Figure 3.3, whereas the research system on the Sun uses a two-frame approach similar to that of KMS.
NoteCards may be the most famous of the original hypertext research systems because its design has been especially well documented [Halasz et al. 1987]. It was designed at Xerox PARC and is now available as a commercial product.
Originally, NoteCards ran only on the Xerox family of D-machines. These computers are fairly specialized Lisp machines and not in very widespread use outside the research world. Therefore the commercial version of NoteCards was ported to general workstations like the Sun.
One reason for implementing NoteCards on the Xerox Lisp machines was that they provided the powerful InterLisp programming environment. InterLisp made it easy to program a complex system like NoteCards, and it also gave users the option to customize NoteCards to their own special needs since it is fully integrated with the Lisp system. Users who know Lisp can in principle change any aspect of NoteCards and they can implement specialized card types as mentioned below.
NoteCards was built on the four basic kinds of objects shown in Figure 3.4:
Figure 3.4. The general layout of a NoteCards screen with the four basic objects: notecards, a link, FileBoxes, and a browser card.
- Each node is a single notecard that can be opened as a window on the screen. These cards are not really "cards" in the HyperCard sense of having a fixed size but are really standard resizeable windows. Users can have as many notecards open on the screen as they want but quickly risk facing the "messy desktop" problem if they open too many. The notecards can have different types depending on the data they contain. The simplest card types are plain text or graphics, but there are at least 50 specialized types of cards for individual applications that need special data structures. For example, a legal application might need notecards containing forms for court decisions with fields for the standard units of information (defendant, plaintiff, etc.).
- The links are typed connections between cards. Links can be displayed as a small link icon as in Figure 3.4 or they can be shown as a box with the title of their destination card. Users open the destination card in a new window on the screen by clicking on the link icon with the mouse. The link type is a label chosen by the user to specify the relation between the departure card and the destination card for the link. To continue the legal example, lawyers might want one type of link to court decisions supporting their own position and another type of link to decisions that refute their position.
- The third kind of object is the browser card, which contains a structural overview diagram of the notecards and links. As shown in Figure 3.4, the different link types are indicated by different line patterns in the browser, thus giving the user an indication of the connections among the nodes. The browser card is an active overview diagram and allows users to edit the underlying hypertext nodes and links by carrying out operations on the boxes and lines in the browser. The user can also go to a card by clicking on the box representing it. The layout of the browser card is computed by the system and therefore reflects the changing structure of the hypertext as users add or delete nodes and links.
- The fourth kind of object is the FileBox , which is used for hierarchical nesting of notecards. Each notecard is listed in exactly one FileBox. Actually, the FileBox is a special-purpose notecard, so FileBoxes can contain other FileBoxes and it is possible to construct links from other cards to a FileBox.
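As a rough sketch of how these four object kinds relate, the data model might be expressed in modern Python. All class and field names here are invented for illustration; NoteCards itself was implemented in InterLisp, and its real interfaces differed.

```python
# Illustrative sketch of the NoteCards object model: notecards, typed
# links, FileBoxes, and a browser view computed from the structure.

class NoteCard:
    def __init__(self, title, card_type="text", content=""):
        self.title = title
        self.card_type = card_type   # e.g. "text", "graphics", "court-decision"
        self.content = content
        self.filebox = None          # each card is filed in exactly one FileBox

class Link:
    """A typed, directed connection between two cards."""
    def __init__(self, source, destination, link_type):
        self.source = source
        self.destination = destination
        self.link_type = link_type   # user-chosen label, e.g. "supports"

class FileBox(NoteCard):
    """A special-purpose notecard used for hierarchical filing, so
    FileBoxes can contain other FileBoxes and be link destinations."""
    def __init__(self, title):
        super().__init__(title, card_type="filebox")
        self.children = []

    def file(self, card):
        card.filebox = self
        self.children.append(card)

def browser(links):
    """A browser card's contents would be computed from the links."""
    return [(l.source.title, l.link_type, l.destination.title) for l in links]
```

In this model, the lawyer's link types from the example above would simply be two values of `link_type`, say `"supports"` and `"refutes"`, which a browser card could then draw with two different line patterns.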
In one case users customized NoteCards so extensively that the result may be said to be a new system. The Instructional Design Environment (IDE) developed at Xerox PARC [Jordan et al. 1989] is built on top of NoteCards but provides a new user interface to help courseware developers construct hypertext structures semi-automatically. IDE supports structure accelerators that speed up hypertext construction by allowing the user to generate an entire set of nodes and links from a template with a single action.
The standard version of NoteCards has been used for several years both within Xerox and at customer locations. One of the interesting early empirical studies of the actual use of NoteCards was a longitudinal study [Monty and Moran 1986] of a history graduate student who used the system to write a research paper over a period of seven months. This user did not use links across the FileBox hierarchy very much, but that result may not generalize to other users. The important aspect of the study is that it investigated the behavior of the test subject for an extended period of time and observed the use of the system for a fairly large task.
Symbolics Document Examiner (1985)
The early hypertext systems can best be classified as proof-of-concept systems showing that hypertext was not just a wild idea but could actually be implemented on computers. Even though some systems, like Engelbart's NLS and the early Brown University systems, were used for real work, that use was mostly in-house at the same institutions where the systems were designed.
In contrast, the Symbolics Document Examiner [Walker 1987] was designed as a real product for users of the Symbolics workstations. The project started in 1982 and shipped in 1985, making it the first hypertext system to see real-world use. The Document Examiner was a hypertext interface to the online documentation for the Symbolics workstation, and people got it and used it because it was the best way to get information about the Symbolics, not because it was a hypertext system as such.
The Symbolics manual also existed in an 8,000-page printed version. This information was represented in a hypertext of 10,000 nodes and 23,000 links, taking up a total of ten megabytes of storage space. This hypertext would still be considered fairly large today and was possible in 1985 only because the Symbolics workstation was a very powerful personal computer. To produce all this hypertext, the technical writers at Symbolics used a special writing interface called Concordia, which is discussed further in Chapter 11.
The information in the 8,000-page manual was modularized according to an analysis of the users' probable information needs. The basic principle was to have a node for any piece of information that a user might want.
Furthermore, the design goal for the user interface was to be as simple as possible and not scare users off. Since hypertext was not yet a popular concept in 1985, this goal meant using a book metaphor for the interface instead of trying to get users to use network-based navigation principles. The information was divided into "chapters" and "sections" and had a table of contents. Furthermore, users could insert "bookmarks" at nodes they wanted to return to later.
To assess the usability of the Symbolics Document Examiner, the designers conducted a survey of 24 users. Two of them preferred the printed version of the manual, but half used only the hypertext version, and eight had not even taken the shrinkwrap off the printed manual [Walker et al. 1989]. These users were engineers working on advanced artificial intelligence workstations, so they might have been more motivated to use high-technology solutions than ordinary users are.
Intermedia was a highly integrated hypertext environment developed at Brown University over several years [Yankelovich et al. 1988a; Haan et al. 1992]. It ran on the Macintosh but unfortunately only under Apple's version of the Unix operating system. Since most Macintosh buyers do not want to touch Unix, that choice of operating system severely restricted the practical utility of Intermedia and may have been a cause of its eventual failure.
Intermedia was based on the scrolling window model, like Guide and NoteCards, but otherwise it followed a different philosophy from the other systems discussed in this chapter. The core of Intermedia was a linking protocol defining the way other applications should link to and from Intermedia documents [Meyrowitz 1986]. It was possible to write new specialized hypertext applications and have them integrated into the existing Intermedia framework, since all the existing Intermedia applications would already know how to interact with the new one [Haan et al. 1992].
The links in Intermedia were based on the idea of connecting two anchors rather than two nodes. The links were bidirectional, so there was no difference between departure anchors and destination anchors. When a user activated a link from one of its anchors, the system would open a window with the document containing the other anchor and scroll that window until the anchor became visible. Thus Intermedia authors were encouraged to construct fairly long documents, since they could easily link to specific points within them.
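A minimal sketch of this anchor-to-anchor, bidirectional linking model is given below in Python. The names are invented for illustration; Intermedia's actual linking protocol [Meyrowitz 1986] was considerably richer.

```python
# Sketch of Intermedia-style bidirectional, anchor-to-anchor links.

class Anchor:
    """A span inside a document, not the document as a whole."""
    def __init__(self, document, offset):
        self.document = document   # name of the containing document
        self.offset = offset       # position to scroll into view

class BidirectionalLink:
    """No departure or destination: either anchor can be the start."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def follow(self, from_anchor):
        # Following from either end yields the other end, whose document
        # would then be opened and scrolled until the anchor is visible.
        return self.b if from_anchor is self.a else self.a
```

The essential point is that `follow` is symmetric: the same link object serves both directions, which is why Intermedia needed no separate notion of a "destination" anchor.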
Intermedia had two kinds of overview diagrams, as shown in Figure 3.5. The web view was constructed automatically by the system, whereas overview documents like the Mitosis OV document in the figure were constructed manually by the author using a drawing package; only by convention do they share a common layout, with the name of the topic in the center and related concepts in a circle around it.
Figure 3.5. An Intermedia web view. The InterDraw document called Mitosis OV is open. Each arrow icon in the overview diagram indicates the existence of one or more links. These connections are dynamically represented in the "Cell Motility: Web View" document. The web view is individual to each user and is saved from session to session. One of its functions is to provide the user with a path showing which documents he or she has opened, when they were opened, and how the document was reached (by following a link, opening the document from the desktop, and so on). The figure also illustrates another function of the web view: For the current document (the document most recently activated), the web view provides users with a map of where they can go next, thus allowing them to preview links and follow only those that they want to see. Copyright © 1989 by Brown University, reprinted with permission.
A typical Intermedia hypertext for a given course would contain many such overview documents, one for each of the central concepts in the course material.
Intermedia was designed for educational use on the university level and was used to teach several courses in both humanities and natural sciences. There is no reason why it could not be used for many of the other hypertext applications listed in Chapter 4, but the educational origin has had some impact on the design. For example, the Intermedia model assumes that several users (i.e., students) will access the same set of hypertext documents (i.e., course readings) and make their own annotations and new links. Therefore Intermedia stores separate files with links for each user in the form of so-called webs. Figure 3.6 shows the creation of a link in Intermedia. When the user has selected the other anchor for the link (for example the event listed under 1879 in the InterVal timeline) and has activated the "Complete Link" command, the new link will be added to the user's web.
Figure 3.6. To create a link in Intermedia, the user may select any portion of a document and choose the "Start Link" command. The link creation interface was modeled after the Macintosh cut/copy/paste paradigm; thus, the user may perform any number of intermediate actions and the link will remain pending until the user selects the other anchor for the link and activates the "Complete Link" command. Copyright © 1989 by Brown University, reprinted with permission.
Unfortunately, the funding agencies that had been supporting the development of Intermedia decided to discontinue funding the project in 1991, so even though Intermedia was the most promising educational hypertext system in the early 1990s, it does not exist anymore.
Guide was the first popular commercial hypertext system [Brown 1987] when it was released for the Macintosh in 1986. Soon thereafter it was also released for the IBM PC, and was the first hypertext system that was available on both platforms. The user interface looked exactly the same on the two computers. Recent versions of Guide have been restricted to the Windows platform.
Peter Brown started Guide as a research project at the University of Kent in the U.K. in 1982, and he had the first version running on PERQ workstations in 1983. In 1984 the company Office Workstations Ltd. (OWL) got interested in the program and decided to release it as a commercial product. They made several changes to the prototype, including some that were necessary to get the user interface to conform to the Macintosh user interface.
Peter Brown continues to conduct research in hypertext using the Unix version of Guide that is maintained at the university [Brown 1992]. It is also used for some consulting projects in industry. If nothing else is stated, my use of the term "Guide" will refer to the commercial version on the IBM PC and the Macintosh and not to the Unix workstation version, since there are several differences between them.
Guide is similar to NoteCards in being based on scrolling text windows instead of fixed frames. But whereas the links in NoteCards refer to other cards, links in Guide often just scroll the window to a new position to reveal a destination contained within a single file. Link anchors are associated with text strings and move over the screen as the user scrolls or edits the text. This approach is in contrast to, say, HyperCard, where anchors are fixed graphic regions on the screen. Guide does include support for graphic links, but they seem somewhat less natural to work with in the Guide user interface, and graphics have to be imported from external drawing programs.
Guide supported three different forms of hypertext links: replacements, pop-ups, and jumps.
The replacement buttons were used for in-line expansion of the anchor text into a new and normally larger text, a concept that is sometimes called stretchtext. (The term "stretchtext" is probably due to Ted Nelson. Similar concepts were found in Augment and in several early text editors at Xerox PARC.)
Replacement buttons formed a hierarchical structure of text and were useful for representing text in the manner of a traditional textbook with chapters, sections, and subsections. Typically, the initial display would show all the chapter headings and users would then expand the one chapter in which they were interested by replacing the chapter heading with the list of sections in the chapter. They could then further replace the section that interested them the most with its list of subsections, and so on. While making these replacements, the user continuously had the other chapter headings available (perhaps by scrolling the window a little) and thereby preserved context. The reverse action of a replacement was to close the expanded text, restoring the original text.
The replacement button existed in a variation called inquiry replacement, which was used to list several options and have the user choose one. When the user clicked on a replacement button that was part of an inquiry, that button would expand, and the other buttons in the inquiry would be removed from the screen until the user closed the expansion again. This interface was useful for multiple-choice applications, like a repair manual where the user was asked to click on the type of equipment that needed repair. The explanation for the selected type was expanded, and the other, irrelevant types were hidden.
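The replacement mechanism can be sketched as a toy model in Python (purely illustrative; Guide's actual implementation differed): expanding a heading splices its sections in-line while the surrounding headings stay visible for context, and closing it restores the original display.

```python
# Toy model of Guide's replacement buttons ("stretchtext").

def toggle(outline, index):
    """Expand or collapse the replacement button at `index`, in place."""
    outline[index]["expanded"] = not outline[index].get("expanded", False)

def render(outline):
    """Produce the visible lines: an expanded heading shows its sections
    in-line, while the other headings remain visible for context."""
    lines = []
    for item in outline:
        lines.append(item["heading"])
        if item.get("expanded"):
            lines.extend("  " + s for s in item["sections"])
    return lines
```

An inquiry replacement would differ only in that `render` would suppress the non-selected headings while one of them is expanded.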
The second type of hypertext was the small pop-up window, displayed by clicking a note button as shown in Figure 3.7. This facility was useful for footnote-type annotations, which are closely connected to the information in the main window. The pop-up was displayed only as long as the user held the mouse button down over the note button, meaning that the "backtrack" command consisted simply of releasing the mouse button. This type of user interface is sometimes called a "spring-loaded mode" because users are in the mode only as long as they continue to activate a dialogue element that reverts to normal as soon as it is released. The pop-ups were modes nevertheless, since they made it impossible for the user to perform other actions (e.g., making a copy of the text in the pop-up window) as long as they were displayed.
Figure 3.7. A typical Guide screen where the user is pressing down the mouse button over the anchor for a pop-up note, which is temporarily displayed in the small window at the top right of the screen.
The third form for hypertext in Guide was the reference button, which was used to jump to another location in the hypertext. To get back to the departure point, users had to click a special backtrack icon.
The three different kinds of hypertext in Guide were revealed to the user by changing the shape of the cursor, as shown in Figure 3.8. One might have imagined that this fairly extensive set of different types of hypertext in a single small system would confuse users, but our field studies [Nielsen and Lyngbæk 1990] showed that users had no problems distinguishing among the three kinds of button.
Figure 3.8. Guide used varying cursor shapes to indicate the type of hypertext action available to the user.
As further discussed in Chapter 10, we also found that users liked the note button for pop-ups best and that the reference button for jumps got the worst ratings. It is interesting to consider that the reference button is exactly the feature that was not included in the "cleanly designed" research prototype of Guide but was added for the commercial release [Brown 1987]. It is of course impossible to say from our data whether the reference button was rated relatively poorly because it was not integrated nicely into the overall design or because gotos are just harmful in general.
Version 2 of Guide introduced a fourth type of button called the command button, which executes a script in the special-purpose Genesis language when clicked. Genesis was not a general programming language like HyperCard's HyperTalk, however, and was typically only used to access a videodisk to play a specified set of frames.
It is important to note that the designer of HyperCard, Bill Atkinson, has admitted that it was not really a hypertext product from the beginning. He originally built HyperCard as a graphic programming environment and many of the applications built into HyperCard actually have nothing to do with hypertext. Even so, HyperCard was probably the most famous hypertext product in the world in the late 1980s.
There are several reasons for HyperCard's popularity. A very pragmatic one is that it was bundled free with every Macintosh sold by Apple from 1987 to 1992. You could not beat that price, and the fact that it came automatically with the machine also meant that it was introduced to a large number of people who would otherwise never have dreamt of getting a hypertext system. Even after Apple started selling HyperCard as a traditional product, they still supplied a HyperCard reader for free with every Macintosh sold, meaning that HyperCard developers were ensured of their market.
The second reason for HyperCard's popularity is that it includes a general programming language called HyperTalk, which is fairly easy to learn. My experiments indicate that people with some previous programming experience can learn HyperTalk programming in as little as two days [Nielsen et al. 1991a]. Furthermore, this programming language is quite powerful with respect to prototyping graphic user interfaces. It is not very well suited for implementing larger software systems needing maintenance over periods of several years, however.
HyperCard is a good match for many of the innovative things people want to experiment with in the hypertext field. It is easy to learn, it can produce aesthetically pleasing screen designs, and it allows fast prototyping of new design ideas. One of my own hypertext systems, described in Chapter 2, was implemented in HyperCard. HyperTalk makes HyperCard well suited for experiments with computational hypertext where information is generated at read-time under program control.
As the name implies, HyperCard is strongly based on the card metaphor. It is a frame-based system like KMS but mostly based on a much smaller frame size. Most HyperCard stacks are restricted to the size of the original small Macintosh screen even if the user has a larger screen. This is to make sure that all HyperCard designs will run on all Macintosh machines, thereby ensuring a reasonably wide distribution for HyperCard products. Version 1 of HyperCard enforced the card size restriction without exceptions, but the newer version 2 has made it possible to take advantage of larger screens.
The basic node object in HyperCard is the card, and a collection of cards is called a stack. The main hypertext support is the ability to construct rectangular buttons on the screen and associate a HyperTalk program with them. This program will often just contain a single line of code written by the user in the form of a goto statement to achieve a hypertext jump. Buttons are normally activated when the user clicks on them, but one of the flexible aspects of HyperCard is that it allows actions to be taken also on other events, such as when the cursor enters the rectangular region, or even when a specified time period has passed without any user activity.
The main advantage of the HyperCard approach of implementing hypertext jumps as program language statements is that links do not need to be hardwired. Anything you can compute can be used as the destination for a link.
In addition to the basic jumps to other cards, HyperCard can at least simulate pop-ups like the ones in Guide by the use of special hide commands. The designer can determine that a specific text field should normally be hidden from the user but that it will be made visible when the user clicks some button. The end result of these manipulations will be very similar to the Guide pop-ups.
HyperCard does have one serious problem compared to Guide, however, and that is the question of having hypertext anchors associated with text strings. In Guide these "sticky buttons" are the standard, allowing users to edit the text as much as they like and still keep their hypertext links so long as they do not delete the anchor strings. In HyperCard, an anchor is normally associated with a text string by placing the rectangular region of a button at the same location of the screen as the text string. But this anchoring method means big trouble if the user ever edits the text, since it is sure to change the physical location of the anchor string on the screen.
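The difference can be sketched with two toy anchoring functions in Python (invented for illustration, not either product's actual API): a Guide-style anchor tied to a string survives edits elsewhere in the text, whereas a HyperCard-style fixed region ends up covering the wrong characters after an edit.

```python
# Toy contrast between Guide's "sticky" string anchors and
# HyperCard's fixed-position buttons.

def string_anchor(text, anchor_string):
    """Guide-style: the anchor is the string itself; it is found
    wherever the string currently sits after any edits."""
    pos = text.find(anchor_string)
    return None if pos == -1 else (pos, pos + len(anchor_string))

def region_anchor(start, end):
    """HyperCard-style: a fixed region that does not follow the text."""
    return (start, end)
```

After inserting text before the anchor string, `string_anchor` still locates it, while the fixed region returned by `region_anchor` now selects unrelated characters; this is precisely the "big trouble" described above.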
Figure 3.9. An example of a screen implemented in HyperCard. Figure 3.10 gives a general idea of how this design was implemented.
Figure 3.10 gives a simplified view of how I implemented the hypertext design from Figure 3.9 in HyperCard. First the general graphic design of the nodes was drawn as a background object that would be inherited by all the nodes in its class. This design included the picture of a book and the global overview diagram (since it would be unchanged for all nodes). The background design also included an empty placeholder field for the text to be added in the individual nodes.
Figure 3.10. A simplified view of the HyperCard implementation of the hypertext design in Figure 3.9. The background level contains graphics that are common for several nodes, whereas the foreground level contains the text and graphics that are specific for the individual node. Finally, the designer has placed several buttons on top of the text and graphics.
For each individual node I then added a foreground layer with the text of the node and some graphics. The foreground graphics included the local overview diagram (since it would be different from node to node) and the heavy rectangles used to highlight the current location in the local and global overview diagrams. Since HyperCard displays all the levels as a single image on the screen, following the same principle as when an animation artist photographs a pile of acetates, the user would never know that the visual appearance of the global overview diagram was created by a combination of a fixed background image and a changing foreground rectangle.
Finally, I added a set of buttons to each individual node to achieve the hypertext links. Some of these buttons were for the local overview diagram and were placed over the corresponding graphics, whereas other buttons were anchors associated with text strings in the foreground layer and had to be carefully positioned over the relevant text. Actually, the complete screen contains even more buttons since there are also some global buttons that are common for all nodes and are therefore placed in the background level. They are not shown specifically here.
HyperCard has several competitors, including SuperCard, Plus, and MetaCard. SuperCard has integrated facilities for dealing with color and several variable-sized windows at the same time and also allows object-oriented graphics of non-rectangular shapes to act as buttons. Plus is available both for the Macintosh and for the IBM PC (under Microsoft Windows as well as OS/2), affording cross-platform compatibility of its file format. MetaCard runs on workstations using X Windows, thus expanding the range of platforms on which the basic HyperCard type of hypertext can be used. Several other limitations have not been addressed by these competing products, however.
Some of these unsolved problems are not all that conceptually difficult, and one could imagine that HyperCard would address them in a possible version 3. This is true of the missing sticky buttons and the slow execution speed of HyperTalk programs. Other problems are harder to address since they conflict with the basic nature of HyperCard. These include issues such as changing the programming language to be completely object-oriented and more maintainable and designing advanced hypertext features or multiuser access.
Interestingly, HyperCard's early success was not just due to its conceptual structure or the power of the underlying system. A major reason many people started authoring their own HyperCard stacks was the inclusion of a "construction kit" of graphical user interface elements with the basic system. I can't begin to count the times I have seen people using the picture of a man thinking in front of a computer that was included with the original HyperCard. More important, instead of just providing square boxes for buttons and hoping that people would fill them in with their own icons, HyperCard shipped with a large collection of pointing hands, turning arrows, and other appropriate designs that could be used as building blocks for new user interfaces. The attractiveness of these sample and template materials made many of the early HyperCard stacks look pretty good and helped build critical mass. I would definitely advise developers of future systems to include plenty of GUI widgets and pre-designed graphics.
Hypertext Grows Up
Symbolics Document Examiner was an example of hypertext meeting the real world since it saw real use by real customers. But the Symbolics was a fairly specialized artificial intelligence workstation and was very expensive when the Document Examiner was first introduced. So even though it counts as the first real-world use of hypertext, it was not a widely distributed and known system.
Several hypertext systems were announced in 1985 and saw widespread use in the late 1980s and early 1990s, including NoteCards from Xerox and Intermedia from Brown University.
In contrast, when Office Workstations Limited (OWL) introduced Guide in 1986, it was as a commercial product. Guide was the first widely available hypertext to run on ordinary personal computers of the type people have in their homes or offices. To some extent the release of Guide could be said to mark the transition of hypertext from an exotic research concept to a "real world" computer technique for use in actual applications.
The final step to "realworldness" came when Apple introduced HyperCard in 1987. A nice product in its own right, its real significance was to be found in the marketing concept of giving away the program (or later a reader) for free with every Macintosh sold after 1987.
An event that really marked the graduation of hypertext from a pet project of a few fanatics to widespread popularity was the first ACM conference on hypertext, Hypertext'87, held at the University of North Carolina on November 13–15, 1987. Almost everybody who had been active in the hypertext field was there, all the way from the original pioneers (except Vannevar Bush) to this author. Unfortunately the conference organizers had completely underestimated the growing interest in hypertext and had to turn away about half of the 500 people who wanted to attend the conference. Even so, we were crammed into two auditoriums that were connected by video transmission, and people had to sit on the floor. For those people who were lucky enough to get in, this was a great conference with plenty of opportunity to meet everybody in the field and to see the richness of ongoing hypertext research and development.
History repeated itself when the first open conference on hypertext in Europe was held in 1989. This was the Hypertext'2 conference in York in the U.K. on June 29–30, 1989. The reason this conference was called Hypertext'2 was that there had been a first, closed conference in Aberdeen the year before. Again the organizers had underestimated the growth of the field and had facilities to accommodate only 250 people. But 500 wanted to come, so half had to be turned away.
The year 1989 also saw the birth of the first scientific journal devoted to hypertext, Hypermedia , published by Taylor Graham. It is discussed further in the bibliography.
In the mid-1990s, hypermedia systems came to the attention of the larger public through the proliferation of CD-ROMs. For example, the first full-length feature film in hypermedia form was shipped on a CD-ROM in 1993 when the Voyager Company released the Beatles film A Hard Day's Night . In 1993, compression technology was still primitive enough to make this something of a feat, and the only reason it was possible is that A Hard Day's Night is a rather short film and was filmed in black-and-white. Figure 3.11 shows a screen from this hypermedia production. The Voyager Company released a large number of other titles throughout the 1990s and proved that it had become possible to launch a successful publishing company by concentrating on shipping hypertext.
Figure 3.11. Screen from the CD-ROM edition of the Beatles film A Hard Day's Night. The film itself is playing in the upper left window and the rest of the screen updates to show the part of the original script that corresponds to the scene currently playing. Pop-up controls allow the user to move directly to various scenes or songs and to view related films. Copyright © 1964 by Proscenium Films, 1993 by The Voyager Company, reprinted by permission.
A final event in the mid-1990s was the extremely rapid growth of hypertext on the Internet, spearheaded by the specification of the World Wide Web by Tim Berners-Lee and colleagues at CERN (the European Organization for Nuclear Research in Geneva, Switzerland). Almost immediately after its introduction by the National Center for Supercomputing Applications (NCSA) in January 1993, Mosaic became the most popular browser for the WWW, and the growth of Internet hypertext accelerated even more. See Chapter 7 for a more extended treatment of hypertext over the Internet.
It is interesting to contemplate the fact that Mosaic and the WWW more or less succeeded in establishing a universal hypertext system in just three years, even though Ted Nelson could not get his Xanadu system accepted in thirty years of trying. One major reason for this difference is doubtless that the WWW projects were paid for by the taxpayers (the European taxpayers in the case of CERN and the American taxpayers in the case of NCSA). It always makes it easier to sell a product when the cost is $0. Even so, there are also other reasons why WWW succeeded where Xanadu failed. The most important differences are the open systems nature of the WWW and its ability to be backwards compatible with legacy data.
The WWW designers compromised and designed their system to work with the Internet through open standards with capabilities matching the kind of data that was available on the net at the time of the launch. These compromises ensured the success of the WWW but also hampered its ability to provide all the features one would ideally want in a hypertext system. The specification of the WWW's underlying hypertext markup language (HTML) went through three versions in the first four years after the introduction of the system, and it is still not ideal. There is no doubt that this reliance on iterative design and evolutionary change is better than waiting for the revolution that never comes. After all, if the choice is between perfection and nothing, then nothing wins every time. We should be grateful to the WWW designers for offering us a third choice.
In conclusion, we can say that hypertext was conceived in 1945, born in the 1960s, slowly nurtured in the 1970s, and finally entered the real world in the 1980s with an especially rapid growth after 1985, culminating in a fully established field during 1989. We now have several real-world systems that anybody can buy in their local computer store (or get for free bundled with their computer); we have successful conferences and a journal; and most important of all, we have many examples of actual use of hypertext for real projects. These examples are the subject of the next chapter.