It may still be possible to buy the proceedings online.
Pittsburgh, PA, 5-8 November 1989
by Jakob Nielsen
At the first hypertext conference, Hypertext'87, there was some talk that we would soon stop having special hypertext conferences, just as we do not have special conferences about, say, word processors. Future conferences would focus on various application domains and might every now and then include papers on the use of a hypertext system. On the other hand, there are still conferences about, for example, databases, even though they have been one of the most commercially successful and widespread applications of computers. It may be too soon to call the final outcome, but right now it seems that hypertext conferences are proliferating (almost too much, in fact).
In any case, the Hypertext'89 conference was a success with a large number of papers submitted and high quality for those papers that got accepted. The conference attracted 650 participants, which was fewer than I had expected but still quite good for a second conference in a new field.
Integration with the World
The main theme of the conference was integrating hypertext with the rest of the world. Until now, hypertext systems have mostly been monolithic stand-alone systems with no connections to the rest of the user's computational environment or to alternative ways of accessing information. The only reason this has been acceptable is that hypertext provides such wonderful new facilities that users have been overwhelmed by the freedom and flexibility offered by hypertext navigation. It is actually possible to live within the bounds of a hypertext system and not suffer too much because hypertext is flexible enough to structure information in any way the user wants.
In the long term, however, users also want to do something with the information they browse in a hypertext, and this is when hypertext systems will need to be integrated with the rest of the world. This conference presented new integration trends in four areas (discussed further below):
hypertext as a system service
hypertext as a file system interface
Talking about integration, Bob Glushko from Search Technology presented a paper on the issues in integrating multi-document hypertext, claiming that cross-document linking would not always make sense. Considering that links are the Holy Grail of hypertext, this position was somewhat controversial, but then Glushko likes being controversial. His example was to consider the following four documents: 1) the classified telephone directory (yellow pages), 2) a software reference manual, 3) a thesaurus, and 4) a street map. Each of these would make a good hypertext document in its own right but it would be no good to have a hypertext system integrating the yellow pages and your software manual. Of these four documents, combining the yellow pages and the street map would make the most sense but there could also be some benefit from combining the yellow pages and the thesaurus. The latter example was not symmetric, however, since the thesaurus would help in the use of the yellow pages while having the yellow pages available would not help a user with a specific interest in using the thesaurus. Glushko's conclusion was that the two documents needed to have a certain degree of complementarity for a hypertext combination to be useful. There should be some overlap between the documents but not a complete overlap.
Given that a multi-document hypertext might be desirable, the next issue is how much to integrate the documents. Glushko's experience was that usability was enhanced by the use of the explicit structure of each document so he advocated keeping the identity of each document and avoiding a complete integration. Their approach was to let the user select one document as the primary focus of browsing and then have links to related information in other documents appear as cross-references. In their system, users do not switch context when following a cross-reference unless they make an explicit choice to do so.
Even the Dog has Hypertext (Meyrowitz)
Norman Meyrowitz from Brown University is one of the main developers of Intermedia and gave a very good keynote address entitled "Hypertext-does it reduce cholesterol, too?" He introduced the topic with a parody of the current "hyper hype" in which every computer product seems to get hypertext added, just as some food products get added fiber. Meyrowitz asked if hypertext was getting to be the oat bran of computing. He hoped that we would not miss the real advantage of hypertext among all this superficial excitement.
Meyrowitz' vision for hypertext was an intelligent infrastructure to link together information at our fingertips in an information environment. This is in contrast to the currently prevailing desktop metaphor for user interfaces, where users move among objects manually. Any selection by the content of objects is haphazard and is at best done by the name of the object; for example, it can be quite hard to find the appropriate file on a large hard disk. Instead it would be possible to integrate hypertext with the basic operating system and use hypertext links to facilitate moving among the user's units of information (whether files or smaller "chunks" of data).
When the user transfers data by a copy-paste operation, a temporary link is formed between the source document and the destination document, but hypertext could make these links persistent, even across applications, if the basic linking mechanism were a system hypertext service. Some limited support for cross-application hypertext links is starting to appear in, e.g., some of Microsoft's applications and in version 7 of the Macintosh operating system, but Meyrowitz would like to see a true shift to navigational linking as the extension of the traditional cut-copy-paste mechanism for moving data. His own hypertext system, Intermedia, is in fact built on the principle of providing a standardized linking protocol for other applications to use, and this has proven quite useful in extending Intermedia with new specialized packages. Even so, the link service is internal to applications running under Intermedia and not integrated with the complete computing environment.
There will be many different media types needed in future hypermedia documents. Not even the Intermedia people will be able to produce good enough editors and utilities for all of them so Meyrowitz felt the need for third party developers. Similar problems have been felt for traditional integrated business software where closed integration rarely satisfies users. The solution has been open integration based on system services like the Macintosh clipboard which Meyrowitz would like to see extended to a "linkboard" to handle hypertext anchors with names, keywords, and attributes. Furthermore, Meyrowitz wanted computer systems to offer a set of standard building blocks for hypertext integration like dictionaries, glossaries, thesauri, spelling correctors, and linguistic tools like morphology. Some of this already exists like the Intermedia dictionary and the Perseus morphology tools, but only the NeXT machine's Webster's Dictionary comes close to being a true system service utilized by external applications.
Most fields have too little vision but Meyrowitz warned us that the hypertext field may have too much: We can all smell the future so well that we may forget to build it. He would like to see major projects building actual hypertext documents with interesting content instead of the current proliferation of one-shot proof-of-concept projects. Also it was a problem that too much funding in our field was for short term projects like "port Intermedia to system X in six months" instead of more long term projects. In contrast, there were currently plans in the US for almost two billion dollars government funding for a supercomputer network without any guarantee that it would be able to support hypermedia needs.
We Cannot Build the Memex
Besides talking about the need to have hypertext as a system service and integrating it with computer file systems, Meyrowitz also spent some of his keynote address comparing Vannevar Bush's original Memex proposal from 1945 with current hypertext capabilities. In general, it turns out that we are still far from being able to support Bush's vision even if we consider frontline computer research capabilities which have not been integrated with hypertext yet.
Besides the main desktop Memex, Bush also envisaged portable extensions of his hypertext system in the form of a wireless device in communication with the home system. It would be possible to construct such a device on the basis of cellular telephone technology combined with modems, but the bandwidth would not be large enough to transmit high-resolution images. For input, Bush described voice recording, handwriting, and scanning. None of these are currently in common use for personal computers. Scanning is possible but still expensive and burdened with cumbersome software, both with regard to optical character recognition and the adjustment of scanned images to look good on a computer screen. Handwriting recognition has been demonstrated to some extent and we seem to be within a few years of a breakthrough. Voice recognition is still not possible in the general case, but voice applications like voice mail and voice response systems are starting to appear. For integration with hypertext, however, we do need the ability to have voice input transcribed to a permanent record.
Also, Bush described a portable input device in the form of a walnut-sized camera worn on the user's forehead. This would be possible with current digital camera technology, but it is not likely to see widespread use.
Even if some of the input devices suggested by Bush might become available in hypertext systems sometime in the future, we are still not up to handling the storage needs for these media. For example, a user with the walnut-sized head-mounted camera might snap 5,000 photos per day for a storage need of about 10 Gigabytes. This translates to about 20 writable optical disks per day at a current cost of $5,000 per day. Meyrowitz suggested that CD-ROMs might not be the most appropriate storage technology for the future, but we have still not found a really cheap mass storage device. Somebody else remarked that CD-ROMs are almost always either much too big or much too small for the amount of data one has for any given application.
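Meyrowitz's storage figures can be sanity-checked with a little arithmetic. The per-photo size and per-disk capacity below are assumptions chosen to be consistent with his numbers (about 2 MB per image and 500 MB per writable optical disk), not figures from the talk:

```python
# Back-of-the-envelope check of the head-mounted-camera storage estimate.
# mb_per_photo and disk_capacity_mb are assumed values, picked only so
# that the totals match the figures quoted in the keynote.
photos_per_day = 5_000
mb_per_photo = 2                      # assumed size of one digitized photo
storage_mb = photos_per_day * mb_per_photo

disk_capacity_mb = 500                # assumed writable optical disk capacity
disks_per_day = storage_mb / disk_capacity_mb

cost_per_disk = 250                   # assumed, in 1989 dollars
print(storage_mb / 1000, "GB/day")        # about 10 GB per day
print(disks_per_day, "disks/day")         # about 20 disks per day
print(disks_per_day * cost_per_disk, "$") # about $5,000 per day
```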
The Memex vision remains to be realized even in its more social and cultural aspects. Bush had described a Memex user as buying most hypertext material from outside sources for merging with the user's home Memex base but there is almost no current hypertext data for general use available for sale. And in most cases it would not be possible to integrate it with the user's own hypertext materials anyway.
Demos: Many Features, Little Content
The Hypertext'89 conference had a nice demo room with lots of activity every evening. Unfortunately I did not get to see all that many demos myself because I was busy most of the time showing my own demo of a hypertext interface to the Unix network news. Demonstrations were run on a continuous basis and allowed people to drift between the demos in small groups. The advantage of this format was that it was flexible and allowed more interaction between the demo presenters and the audience than demos set up in an auditorium. Also, it was often possible to get hands-on experience with the systems. The disadvantage (besides taking up too much of the presenters' time) was that it was very hard to see the more popular demos because no screen projection was being used. Maybe the hypertext conferences have grown too large for this kind of very informal demo.
In any case, I had already seen most of the demos at earlier visits to various research centers so I did not miss all that much. For me the most interesting demo might have been the least advanced of them all: David Durand showing the FRESS system (File Retrieval and Editing System) developed at Brown University in 1971 as a successor to the original Hypertext Editing System (the world's first hypertext system). The demo ran on a Macintosh simulating the old vector graphics scope used in 1971. The Mac was connected by a modem to the IBM 370 mainframe at Brown University which was still able to run the original old code, thus showing the advantage of having IBM maintain the same architecture all these years.
There was some talk about getting the actual hypertext documents off the system before FRESS finally died. For example they had an interesting set of annotated poems originally used to teach poetry at Brown University in a very early field test of hypertext. It would probably not be all that hard to convert them to another hypertext format and this should certainly be done for the historical interest. It is amazing how little respect we have for the history of technology and how few resources are devoted to saving early technology for future generations. Consider how we condemn those Romans who tore down classical monuments during the Middle Ages in order to build new palaces from the marble. We are in effect doing the same when we use our resources exclusively for the production of new software and not on saving the most important artifacts and software in the history of technology.
The demos were mostly oriented towards systems and features instead of content. The Perseus Project did show an interesting set of documents oriented towards the study of classical Greek culture but otherwise there were very few really well prepared materials shown at the demos. It would have been interesting to see more actual sets of documents or data in hypertext available for people to buy. Mark Bernstein from Eastgate Systems did circulate a call for hypertexts to be published by their new publications program, the Eastgate Press. They plan to publish about four new titles per year of fiction as well as non-fiction. Bernstein showed some interesting initial works developed in Hypergate: The Election of 1912 and The Flag Was Still There (about the period 1808-1815). In yet another example of the integration trend, these hypertexts integrated the hypertext documents with simulations of the historical events discussed in the text.
At one of the panel sessions, Ben Shneiderman said that he too was surprised by not seeing more ambitious projects and people doing larger hypertexts. He would like to have some beautiful hypertext documents to show people what we can do. Shneiderman recalled Engelbart's and van Dam's stories at the Hypertext'87 conference about the lack of acceptance of their pioneering hypertext work in the 1960s but he had thought that we were above that now. The transfer of technology from the laboratory to everyday use often takes much longer than one would expect.
Another observation is that flashy multimedia systems were conspicuously absent from the conference, which was dominated by more pragmatic, text-oriented systems and applications (not just at the demos but even more so at the paper sessions). This does not mean that the conference was dominated by command-line user interfaces. On the contrary, almost everybody used graphical user interfaces and many had features like graphical overview diagrams. There were just very few examples of the use of video and sound in hypertext systems.
With respect to hardware platforms, this conference was not nearly as dominated by the Macintosh as the European Hypertext'2 conference had been a few months earlier. There were certainly some Mac systems at the conference (e.g. Intermedia and the Perseus Project) but they did not nearly constitute a majority. The reason for this difference is definitely not that Apple might have had a larger market share in Europe than in the United States (the opposite is true). It is more likely to be due to the smaller scale of funding in Europe: The entry costs for getting into hypertext are small on the Macintosh since everybody has one already and can develop systems in HyperCard on their own without too much programming effort. Many US research and development projects have rejected that approach in favor of getting larger workstations and using larger groups of programmers to develop their own software.
Confessions of the Hypertext Designers
An interesting panel session had representatives of many of the main hypertext systems describe the shortcomings of their designs. Norman Meyrowitz started by describing Intermedia as a monolithic system. It only works with applications written within the system, making it impossible to integrate outside applications. He liked their web concept, but sometimes wanted to compare or superimpose two or more webs at the same time (Intermedia's webs are basically files of links for a given set of hypertext nodes, and different users will frequently have different webs over the same nodes). This was impossible because of Intermedia's basic assumption of having a single active web of links. Furthermore, there are no filtering mechanisms for the links in a web, and the single link icon makes it impossible to see what will happen when the link is activated or what the destination will be like. A final problem was the lack of a print linearization method, making it impossible to print out a hypertext.
Greg Kimberly from Apple represented HyperCard on the panel. His first comment was that HyperCard is not hypertext even though there is some overlap between the two. The inspiration for HyperCard was Bill Atkinson's old rolodex program. He was very fond of BitBlt and found that he could throw graphics up quickly. Furthermore, Dan Winkler added the idea of programmability in the form of the HyperTalk language (partly inspired by Lisp).
They had originally considered the marketing slogan "HyperCard-You Figure it Out!" and people have indeed done many different things with HyperCard. Kimberly claimed that the missing features were deliberate omissions. For example, HyperCard goes to great lengths to avoid abstractions and does not have abstract links, which makes it difficult to do things like overview maps. They also avoided things that would not fit on the Mac Plus computer, such as object-oriented graphics and color.
Challenges for future versions of HyperCard included factoring documents into cards, that is, coming up with methods to take a document and break it into cards. The Electronic Whole Earth Catalog was a good example of a match between document and cards because of the brief nature of most of its elements. Furthermore, there was a need for version control as well as for managing automatic links and getting some kind of high-level control.
Amy Pearl from Sun mentioned that the goal of the Sun Link Service was to support consistent connections. They built an open architecture where the links are managed by the link server while the node content is managed by the individual applications. Another goal was putting minimum constraints on the user interface of the various tools. The service is not general enough, however, and lacks support for inclusion links (automatically updated hot links). It also lacks link types, node and link attributes, and a browser. A more marketing-related problem is that the Sun Link Service depends on the application to provide any functionality. The Link Service by itself cannot be used for anything.
Don McCracken from Knowledge Systems classified the problems of KMS in two categories: Some problems were wrongs of omission and could in principle be corrected in future versions. This category included weak searching and indexing, lack of bidirectional links and embedded links, and the external links not being hot enough. McCracken felt that even the latest crop of workstations was still too slow for some of the things one would want to do in a hypertext system.
Other problems were wrongs of commission and were done "wrong" on purpose. These problems are therefore much harder to correct since they involve design tradeoffs. Even though they may seem wrong in isolation, McCracken felt that they might still be right from a more global design perspective. This category included KMS having its own idiosyncratic user interface with a strange big cursor. But because the interface is very tightly coupled with the data model of KMS, they cannot just rip it out and replace it with a standard user interface. Another wrong of commission is the lack of a simplified system for readers. They do not see any reason to design a limited annotation facility for readers since the best way to annotate is to have the full KMS features available. The only exception might be for a museum application or in similar environments.
On perhaps a slightly more commercial note, McCracken noted that the KMS user interface seemed so transparent that people do not understand why they need 100,000 lines of code to support it. In that regard, KMS may be "too simple for its own good" (or at least for the vendor's ability to charge a large sum of money for it).
Using McCracken's classification, Frank Halasz from Xerox PARC referred to his paper on NoteCards from the Hypertext'87 conference as being about the "wrongs of omission." He had discussed how to expand NoteCards to handle the kind of tasks people were bringing to NoteCards. Now he wanted to talk about things he did wrong because he did not know the right way to do them then. After having looked at other systems and the use of NoteCards over the years, he now believed that he knew how to do it.
NoteCards was built almost overnight because Interlisp was so powerful. The power of Interlisp also allowed people to extend NoteCards easily. But having NoteCards run under Interlisp also meant that it never got out to other people because so few had Interlisp available. Also, they never had to think about the open system issues because Interlisp was a large monolithic environment. The lack of openness has resulted in people implementing new applications that have been hard to integrate into NoteCards. So Interlisp encouraged bad design at the same time as it allowed them to get it done at all. Halasz would have preferred to have used a general database to support NoteCards, since they could have saved hours of time if only they had used somebody else's database for storage, compression, etc.
Talking about databases, Halasz felt that the hypertext community ought to borrow the concept of schemas from the database community for talking about the conceptual structure of information compared to the implementation structure. In NoteCards, every node (card) has a type, but there are really two intermixed kinds of types: implementation types (text, graphics, video, etc.) and user oriented representation types such as position card, issue card, etc. These two type hierarchies should be orthogonal and that had actually been supported in the original design of NoteCards. Unfortunately, the concept of two different type hierarchies was taken out because they felt that it would be too difficult. Instead it has turned out that users have had to build incredibly complicated type hierarchies to get both kinds of types right.
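Halasz's point about orthogonal hierarchies can be sketched as a data structure in which each card carries two independent type attributes instead of one tangled hierarchy. The class and attribute names below are illustrative, not NoteCards' actual design; the representation types come from his examples (position cards, issue cards):

```python
# Sketch: keeping the implementation type (how content is stored and
# rendered) orthogonal to the representation type (what role the card
# plays for the user). Names are hypothetical, for illustration only.
from dataclasses import dataclass

IMPLEMENTATION_TYPES = {"text", "graphics", "video"}
REPRESENTATION_TYPES = {"position", "issue"}

@dataclass
class Card:
    title: str
    impl_type: str   # drawn from the implementation hierarchy
    repr_type: str   # drawn from the user-oriented hierarchy

    def __post_init__(self):
        # Because the hierarchies are independent, any combination is
        # legal; users never have to build a combined type for each pair.
        assert self.impl_type in IMPLEMENTATION_TYPES
        assert self.repr_type in REPRESENTATION_TYPES

card = Card("Why links matter", impl_type="text", repr_type="issue")
```

With a single tangled hierarchy, supporting the same combinations would require one type per (implementation, representation) pair, which is exactly the complexity Halasz said users ended up building by hand.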
Links are second-class citizens in NoteCards. Only the cards have a full object-oriented type hierarchy with inheritance, while the typing of the links is restricted to plain named labels without any behavior. This is an asymmetric model, and Halasz now felt that links are as important as nodes. It was also a mistake to implement links as embedded objects instead of as persistent selections (anchors) as in Intermedia. Halasz complained that the same mistake has recently been repeated in the CMU Andrew system.
The NoteCards programmer's language was an add-on. Halasz preferred the design of the new Emacs editor where a basic extension language is implemented by a small kernel and the rest of the system is implemented in that language. A major difficulty had been to modify the design of NoteCards from the original single-user system to the collaborative system wanted by many users.
Halasz finally mentioned the lack of search and query facilities in NoteCards as well as the need for filtering mechanisms. In a new design, he felt that some form of indexing should be a first class concept.
Ben Shneiderman emphasized the different goals and applications for the various hypertext systems. His Hyperties system was developed for use in museums. In that application, there will be many more readers than authors, leading them to have a separate authoring interface. A problem with this design was the need for author-like facilities like annotations and the ability to add new links also in the browsing interface.
A lot of Hyperties limitations are due to the IBM PC world, such as being stuck with the text orientation and having a small screen. On the other hand, they have aimed for wide dissemination, and that is an advantage of the IBM platform. For example, they have already sold 4,000 copies of their Hypertext Hands-On! book with its Hyperties diskettes. Also, ACM has sold 1,800 copies of the Hyperties version of Hypertext on Hypertext.
Multiple authors are a problem in projects like a New York museum with twelve authors. They need better facilities for indexing large numbers of articles automatically and for importing existing text automatically. These conversion features are not yet part of the commercial Hyperties system even though they have been demonstrated as research projects. There are also problems with exporting information from the hypertext to a traditional text file (e.g. for printing). One can easily export a single article, but methods for exporting an entire web are missing. For the Hypertext Hands-On! book the text and hypertext versions were produced in two processes and the authors had to add all the cross-references to the paper version by hand instead of having the system do so automatically.
A person from the audience asked why most systems still use WYSIWYG and emulate paper, since we are supposed to be moving away from paper. Halasz' answer was that it takes time for new forms of expression to develop, but he referred to two papers at the conference: People from the design firm Fitch RichardsonSmith showed some new typographic design ideas for hypertext cues. And Cathy Marshall from Xerox PARC showed interfaces with big arrows dynamically pointing at interesting information and looking different from traditional documents.
Shneiderman said that he would like to know how to make hypertext less paper-like, but that they are only now finding out what to do with the first deviation of using colored words which do something when you touch them. Kimberly's answer was having things move on the screen or having simulations etc. He emphasized that we need to give people something more than paper considering that all studies show that reading speed is 30% slower from screens.
Domain-Specific Applications with Hypertext
I would probably have given my award for best presentation to Frank Halasz if his analysis of the conceptual shortcomings of NoteCards had been given as a full paper instead of being crammed into an 8 minute panel statement. Norman Meyrowitz is another candidate for best presentation, but of course invited speakers don't get these awards, even if they are only Nielsen's ratings.
So I name Laura De Young best Hypertext'89 presenter for her talk on applications of hypertext in the auditing domain. She discussed current work at the Price Waterhouse Technology Centre aimed at building a specialized hypertext system to support auditors. De Young was able to describe this perhaps inherently somewhat boring domain with such zest that I got all fired up on having discovered a new application for hypertext.
The auditor's job is to look at a business carefully enough to determine whether the company has the same financial status as they claim they have. This is done by having a team of 5-8 people go to a client and gather evidence by going over the client's accounts. As the auditors go through these auditing procedures, they write down every piece of evidence and make sure to have references back to the origin of the evidence. A lot of documents are gathered and created and they include lots of references. De Young said that these "Audit Working Papers" are the best example she has ever seen of a manually constructed hypertext structure and she showed some interesting slides of heavily annotated and cross-referenced sample pages. It is so critical that these references are right that the people creating them sometimes need to take personal responsibility for them by initialing each reference.
De Young emphasized that her domain is real and has a real potential payoff: Studies have shown that approximately thirty percent of the time spent on an audit is dedicated to producing, relating, and reviewing the "Audit Working Papers." This does not even take the time spent obtaining the information into account. Therefore a good hypertext system for auditing support has the potential to reduce the time needed to conduct an effective audit. And therefore they also get real support from the company for their research.
On the other hand, they would also need to handle large sets of data. For example an audit of Shell Oil Australia had generated about 150 kg of paper (300 pounds) taking up six file cabinets (and that is not even the largest subsidiary of Shell Oil). So the hypertext techniques need to scale from the prototype to any potential production level system.
The Price Waterhouse prototype was not implemented in one of the standard hypertext systems but was programmed from the ground up in Common Lisp. They were using typed links and automatic link generation as well as the creation of paths to make it possible to follow a certain concept throughout the audit.
Special "review notes" can be attached to any of the other documents to point out potential problems in the client's papers. These notes were not just traditional annotation links but really drive the audit process. They can do a computation on the basis of the hypertext links to get an overview of how many review notes remain to be resolved, thus helping manage the auditing process.
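The idea of computing over the link structure can be sketched in a few lines. The document and note shapes below are invented for illustration, not Price Waterhouse's design; the point is that because review notes are explicit objects attached to the hypertext, a simple traversal can report how many remain unresolved:

```python
# Sketch: review notes as explicit, typed attachments in an audit
# hypertext. The data layout and field names are hypothetical.
documents = {
    "inventory-schedule": {"review_notes": [
        {"text": "Reconcile count with ledger", "resolved": False},
        {"text": "Initial this cross-reference", "resolved": True},
    ]},
    "receivables": {"review_notes": [
        {"text": "Confirm balance with client", "resolved": False},
    ]},
}

# Management overview: how many review notes still drive open work?
unresolved = sum(
    1
    for doc in documents.values()
    for note in doc["review_notes"]
    if not note["resolved"]
)
print(unresolved, "review notes remain to be resolved")
```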
Of course a complete auditing hypertext system should be integrated with documents created outside the system. They are currently able to open any document created by other applications like a spreadsheet or word processor and they can also scan in hard copy documents. But their hypertext access is limited to linking to entire documents and they do need more detailed access. One solution may be to use overlays and link to specific locations on the page.
A completely different application domain was discussed by John Schnase from Texas A&M University. Schnase is a biologist in addition to being a computer scientist and had written a biological modelling paper in KMS taking advantage of its computational hypertext features.
The biology research addressed how an animal spends its time and energy. Schnase wanted to develop energy budgets for Cassin's Sparrow and to build computer simulations of that organism. The research combined field studies of the bird's activities and theoretical work on the computer.
The complete hypertext had about 1500-2000 KMS frames and contained a data subnet with the actual data collected in the field, an article subnet with Schnase's paper, a program subnet with the energy simulation, and a communication subnet with the necessary KMS action language statements to allow printing and sharing of information.
Schnase did his writing on the left half of the Sun screen since KMS allows you to print out that half while keeping the right half for hypertext links and related material which should not be a part of a printout. So his screen design was partly determined by his need to print out the final paper for publication in a traditional journal.
For the programs implementing the energy simulation, the first approach was to use the built-in KMS action language. This language has special instructions for actions on abstractions in the hypertext (e.g. a command for "create a new hypertext node"). These action language programs were too slow and not general enough, so another approach was necessary. The second approach was to write C programs acting on KMS's low-level encoding of the nodes as data in Unix files.
The third approach turned out to be the most appropriate and combined the first two. First the KMS action language was used to export the information needed for a particular computation to a generic ASCII file, then a C program performed the actual simulations, and finally the action language was used again to reimport the results into the hypertext. Accessing the operating system was cumbersome, but it was important to be able to do so.
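Schnase's hybrid approach amounts to a small export/compute/reimport pipeline. The following is only a minimal sketch of that pattern in Python (standing in for both the action-language steps and the external C program; all file, field, and activity names here are invented for illustration, not taken from Schnase's system):

```python
import csv

def export_frames(frames, path):
    """Stand-in for the KMS action-language export step: dump the
    fields needed by the simulation to a generic ASCII file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for frame in frames:
            writer.writerow([frame["id"], frame["activity"], frame["minutes"]])

def simulate(path, cost_per_minute):
    """Stand-in for the external C program: compute an energy cost
    per frame from the exported ASCII file."""
    budget = {}
    with open(path, newline="") as f:
        for frame_id, activity, minutes in csv.reader(f):
            budget[frame_id] = float(minutes) * cost_per_minute[activity]
    return budget

def reimport(frames, budget):
    """Stand-in for the second action-language pass: write the
    results back into the hypertext frames."""
    for frame in frames:
        frame["energy"] = budget[frame["id"]]

# toy data: two observation frames (hypothetical activities and costs)
frames = [{"id": "f1", "activity": "foraging", "minutes": 30},
          {"id": "f2", "activity": "singing", "minutes": 10}]
export_frames(frames, "export.txt")
reimport(frames, simulate("export.txt", {"foraging": 2.0, "singing": 1.5}))
print(frames[0]["energy"])  # → 60.0
```

The point of the detour through a generic ASCII file is exactly what Schnase found: it decouples the hypertext's internal representation from the simulation code, at the cost of an extra export and reimport step.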
Schnase concluded that hypertext provides an integrated personal environment for scientific knowledge work. It also supports community scholarship among scientists by its distributed nature over local networks (important while a research project is under way) as well as by email and the possibility of sending an entire research setup to a colleague, with the hypertext and associated data and programs forming a "virtual laboratory". This can substantially change the nature of dialogues in science: you don't just share results but the entire context in which the results were developed.
The third applications paper was on the use of hypertext in a law office. This example did not excite me quite as much as the other two, probably because I have been thinking of the legal domain as a natural application of hypertext for a long time. Law books are full of annotations and cross references. Bob Glushko took advantage of his good connections in the legal community and gave an example in his conference tutorial of a page from a law journal with footnotes taking up two thirds of the page and having references to other footnotes.
But even though the theoretical opportunities for hypertext in the legal domain are quite familiar, it is of course still interesting to learn about a practical implementation in a real law office. Elise Yoder from Knowledge Workshop presented an example of the use of the KMS-based HyperLex system at the law firm of Reed Smith Shaw & McClay in Pittsburgh to support intellectual property and patent law.
The HyperLex hypertext structure contains a general CSCW-like subnet with a group bulletin board, calendars, etc. and an office automation subnet with nodes for each client and links to correspondence and legal documents related to that client. It also contains nodes for the actual legal documents like patents and lawsuits. The patent nodes again have links to nodes representing the filing history, the prior art (earlier patents in the same area), etc.
One important use of the hypertext system is to search for the relevant prior art nodes in the database to see whether there is something in earlier patents that would conflict with a new potential patent.
They also build up hypertext networks for the litigation documents in trials. This application really needs a remote access capability through the phone system for attorneys in other cities or away from the office.
Yoder emphasized that they are trading off power for generality. It is very important in the law office to be able to hide information, giving the attorneys the ability to take in information at a glance. They also sometimes need to go into more depth on certain issues, and this can be offered through hypertext links to supporting material.
A person from the audience noted that the most important work during certain litigation was the building of the database of information. For example, in the CDC antitrust case against IBM, the CDC people had built a very well organized database, and there is a rumor that part of the settlement agreement was that CDC should destroy their database after having given IBM a copy. This implies that one has to consider the possibility of a requirement to make hypertext links public and/or to hand them over to others. Traditionally one could at most risk having to make the documents themselves available, and a large enough mass (and mess) of unlinked documents would discourage most opponents from digging deeply enough to find embarrassing material.
The conference had several sessions on information retrieval and showed a major trend towards integrating these more sophisticated searching capabilities with the basic hypertext framework. Michael Lesk from Bellcore discussed methods for query in huge hypertext networks, taking his examples from a book catalog containing 800,000 items. In his experience, people tend to type queries with single words only, and we should get them to type entire phrases to get more material to work with in the search. He was also working on giving users a graphical display to show them the classification of their query results.
For example, Lesk utilized the classification of the books in the two major library classification systems, Dewey and Library of Congress. In theory there should be a one-to-one correspondence between these two classifications, but in reality there is not, either because of differences in the two taxonomies of world knowledge or simply because two different librarians have served as catalogers in the two systems. This makes it possible to show users a two-dimensional overview diagram of the result of their query by using each of the classification systems for one of the axes and showing either dots or book names in the diagram for each hit.
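Lesk's overview diagram boils down to bucketing each hit by its pair of class codes, one axis per classification system. A toy Python sketch of that bucketing step (the titles and class codes are invented; real Dewey and LC codes are of course far more detailed):

```python
def overview_grid(hits):
    """Bucket query hits by (Dewey class, LC class) so each cell of a
    two-dimensional diagram can show a dot count or a list of titles."""
    grid = {}
    for title, dewey, lcc in hits:
        grid.setdefault((dewey, lcc), []).append(title)
    return grid

# hypothetical query results: (title, Dewey class, LC class)
hits = [("Bird Energetics", "598", "QL"),
        ("Field Ornithology", "598", "QL"),
        ("Avian Physiology", "571", "QL")]
grid = overview_grid(hits)
print(len(grid[("598", "QL")]))  # → 2
```

Cells where many hits cluster suggest the query found a coherent topic; hits scattered across disagreeing cells expose exactly the taxonomy mismatches Lesk exploited.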
While Lesk's queries were based on the entry of keywords using overview diagrams for output, the GraphLog approach of Mariano Consens and Alberto Mendelzon from the University of Toronto was to get users to specify queries in terms of network structures.
One of their examples was the desire to find circular arguments in a NoteCards hypertext. This would involve finding a path that, by following only support-links (links claiming that a given argument is supported by another argument), gets you back to the original node. This is a structure query, where the user is looking for a structure in the hypertext graph, and it cannot be done with purely content-based search.
Consens and Mendelzon's solution was to design the GraphLog query language, which allows users to specify queries as graph patterns consisting of regular expressions over links.
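The circular-argument query is essentially cycle detection restricted to one link type. GraphLog expresses this declaratively as a graph pattern; a procedural Python sketch of the same structure query (node names and the "supports" link type are illustrative, not GraphLog syntax):

```python
def find_support_cycles(links):
    """Return the nodes that can reach themselves by following only
    'supports' links -- i.e. the nodes on circular argument chains."""
    # adjacency list restricted to the 'supports' link type
    supports = {}
    for src, link_type, dst in links:
        if link_type == "supports":
            supports.setdefault(src, []).append(dst)

    def reachable(start):
        # depth-first search: can we get back to 'start'?
        seen, stack = set(), list(supports.get(start, []))
        while stack:
            node = stack.pop()
            if node == start:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(supports.get(node, []))
        return False

    return [n for n in supports if reachable(n)]

links = [("A", "supports", "B"), ("B", "supports", "C"),
         ("C", "supports", "A"), ("D", "supports", "A")]
print(find_support_cycles(links))  # → ['A', 'B', 'C']
```

Note that D is not reported: it supports an argument on the cycle but is not itself part of a circular chain. A declarative language like GraphLog lets the user state only the pattern and leaves this traversal to the system.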
Another example was a network operation to link each notecard to its most reliable supporting argument according to confidence ratings supplied by the author. These virtual links can be either frozen snapshots or views recomputed whenever the author changes the hypertext.
These examples led me to consider the possibility of defining a set of standard GraphLog queries for debugging a hypertext or for heuristic interface evaluation, pointing out nodes that would bear further usability evaluation.
Mark Frisse from the Washington University School of Medicine gave an update on the dynamic medical handbook project previously presented at Hypertext'87. They now have several different versions implemented in HyperCard, the NeXT Digital Librarian and their own Lisp prototypes. They are also considering the potentials for a true medical workstation with the ability to display X-ray pictures etc.
In Frisse's 1987 paper, the main hypothesis was that propagating query scores along the hypertext links would increase the user's sense of context. Unfortunately they had found that their medical users could not work with traditional complicated query languages. Therefore he was now looking at having an index space structured to allow automatic inference to help users in their navigation.
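The idea of propagating query scores along links can be sketched roughly as follows: a node's effective score combines its own intrinsic query score with a damped contribution from its descendants, so a chapter whose sections match well is itself ranked higher. This is only a rough Python illustration of the general idea; the damping scheme and names are my assumptions, not Frisse's published formula:

```python
def effective_score(node, intrinsic, children, damping=0.5):
    """Effective score = the node's own query score plus a damped
    average of its children's effective scores (tree-shaped index)."""
    kids = children.get(node, [])
    if not kids:
        return intrinsic[node]
    propagated = sum(effective_score(k, intrinsic, children, damping)
                     for k in kids) / len(kids)
    return intrinsic[node] + damping * propagated

# toy handbook fragment: one chapter with two sections
intrinsic = {"chapter": 0.1, "sec1": 0.8, "sec2": 0.2}
children = {"chapter": ["sec1", "sec2"]}
print(effective_score("chapter", intrinsic, children))  # → 0.35
```

The chapter's own match is weak (0.1), but the strong match in one of its sections raises its effective score, which is exactly the "sense of context" the propagation was meant to provide.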
Frisse did not like traditional flat indexes with words ordered alphabetically. This type of index simply leads to a flat information space, just like having a set of unrelated papers on your desktop. Instead he was looking at a hierarchically ordered set of index terms from the National Library of Medicine. In a true hypertext, the index would of course be a network, and one would have a computational engine operating on the index spaces.
They currently have a complete hypertext book where the chapters are indexed by hand. The next step will be to take the Unified Medical Language Thesaurus (a 30,000 node network of words) and use it for the inference methods.
Interchange and Standards
A major session on interchange standards for hypertext was started by Frank Halasz with a plea for the use of model-based interchange. He believed that the interchange of hypertext requires a common language for describing hypertext content and structure, and such a language must be based on a model of what hypertext is. Interchange standards will be more successful if they are explicitly derived from a model. Halasz warned against the trend in many other current projects to start from the standard and try to add the model later.
One attempt to articulate the important abstractions in current hypertext systems was the Dexter hypertext reference model developed by the "Dexter Group" of thirteen prominent hypertext designers. The group is named after the inn in New Hampshire where they had the first meeting.
The Dexter model is layered and includes a runtime layer (presentation of the hypertext), a hypertext layer, and a component layer (the content inside the nodes). The hypertext interchange standard focuses on the hypertext layer's definition of the node and link structures unique to hypertext, while the component layer is the responsibility of other standards such as ODA. With respect to the user interfaces of the runtime layer, Halasz felt that there was no hope for a standard at all. Later in the session, Rob Akscyn gave an analogy with the telephone system: it has a good interchange standard allowing you to call any other telephone in the world, but that has not removed the cultural and language differences between the people of the world.
The hardest work in defining the model came at the interfaces between the main layers. For example, anchoring is really an interface between the hypertext network structure (two nodes are connected) and the internals of the atomic components (the link is anchored in a specific sub-part of the node's content). Their solution was to use indirect links from a kind of table in the node to the place in the node you link to. In this way you can have anchored links without the hypertext layer knowing about the internal addressing within the components. This does mean that a component needs to contain not only the node contents but also a table of all anchors within it.
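This anchoring indirection can be sketched as a small data structure: links name (component id, anchor id) pairs, and only the component itself resolves an anchor id to a position in its content. The class and field names below are my own illustration, not the model's formal specification:

```python
class Component:
    """A node: content plus an anchor table mapping anchor ids to
    locations inside the content. The hypertext layer never sees the
    locations, only the anchor ids."""
    def __init__(self, content, anchors):
        self.content = content
        self.anchors = anchors          # anchor id -> (start, end) offsets

    def resolve(self, anchor_id):
        start, end = self.anchors[anchor_id]
        return self.content[start:end]

class Link:
    """A link in the hypertext layer: two (component id, anchor id)
    specifiers, with no knowledge of in-node addressing."""
    def __init__(self, src, dst):
        self.src, self.dst = src, dst   # each is (component_id, anchor_id)

components = {
    "c1": Component("Hypertext is non-sequential writing.", {"a1": (13, 27)}),
    "c2": Component("Sequential text has one fixed order.", {"a1": (0, 15)}),
}
link = Link(("c1", "a1"), ("c2", "a1"))
comp_id, anchor_id = link.dst
print(components[comp_id].resolve(anchor_id))  # → Sequential text
```

The payoff of the indirection is that a component can be edited (moving its anchors' offsets) without any change to the links that point at it, since they refer only to stable anchor ids.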
Jeremy Bornstein from Apple demoed the preliminary result of a translation of a NoteCards document to HyperCard using the Dexter Hypertext Interchange Format, DHIF. The resulting HyperCard stack had a decidedly unimpressive user interface compared to both the standard NoteCards interface and to some of the better HyperCard stacks around. This could be because Bornstein concentrated on getting the actual hypertext transfer to work and not on the user interface of the result. It was certainly impressive to get a hypertext interchange to work. But it could also be because hypertext documents need to be written for their specific presentation environment to result in a good reading user interface. It remains to be seen whether interchange systems can automatically construct pleasing hypertext documents in the destination format or whether a human designer is needed to clean up the result.
One very fundamental distinction between hypertext systems is card-based versus scrolling windows. This was a severe problem in converting between HyperCard and NoteCards. The HyperCard scripts were also difficult to translate: currently they just lose the script properties of HyperCard and translate them to dumb links.
Victor Riley from Brown University said that Intermedia plans to follow the Dexter model soon. They are also looking at other standards under development such as the Apple Hypertext Interchange Format, HIP and the CGA sponsored standard, X3V1.8M.
Riley complained that most of these standards have no consistent understanding of the basic hypertext objects. Objects are not unique across systems, leading to problems. For example, if you copy an object from one system to another, change it, and copy it back: Is it still the same object?
Tim Oren from Apple mentioned a project called MIFF to define a non-proprietary format for hypertext interchange and storage on the Macintosh. It will go down to the byte level for file interchange, and it is being done in collaboration with Apple's developers.
MIFF is neutral to policy choices of e.g. linking methods and how entities are arranged among files (e.g. one huge file versus many small). They want to maintain as many neutralities as possible. When people in the hypertext field can still argue about some of the issues after so many years, Oren felt that Apple had better keep neutrality with respect to these issues and accommodate both views.
Apple was supporting two efforts at the same time because they do see the need for a public standard for e.g. the large customers like the government where a multi-vendor environment is required. Public standards are important for the long term health of the hypertext industry. But they also need a Mac-only standard very soon since their users are further along the learning curve: They have been using HyperCard for two years now and are discovering its limits.
Rob Akscyn from Knowledge Systems (KMS) felt that it was too early to actually standardize, but that we should start working on the conceptual differences. It will take a lot of time to end up with a good enough model to form the basis for a standard.
Akscyn's company is very interested in dynamic interaction between different systems in the form of links to documents stored in other systems. They take an incremental approach and start with the ability to read in a single node on demand from another system. They would also like to have write-access to that other system, but he felt that it was a research topic to come up with a method for doing so.
Akscyn mentioned several fundamental practical difficulties in hypertext interchange. With respect to node types and sizes, it may be easy to go from HyperCard to KMS but hard to go the other way, since the KMS frames (based on the Sun screen) are so much larger than HyperCard cards. The computational hypertext aspects will also be hard to interchange. For example, the HyperCard script language is object-oriented while the KMS action language is a traditional Algol-like block-oriented language. Translating between two such languages is a Ph.D.-type project.
Lawrence Welch from the National Institute of Standards and Technology (NIST) presented a long list of relevant standards with respect to node content. NIST was also sponsoring the first "official" workshop on hypertext standards in January 1990. It seems that the responsibility for hypertext standards has now shifted from a self-established group of pioneers to the official bureaucracy.
Lessons from the Communications of the ACM Project
The Communications of the ACM had a special issue on hypertext in July 1988. The text of this issue was later converted to hypertext form in several different hypertext systems in a project called Hypertext on Hypertext. A panel at the conference gathered the designers of the various versions for a perspective on the conversion project. This has been one of the few successful hypertext publishing ventures so far: 2500 copies have been sold as of now, and almost everybody raised a hand when Bernard Rous from the ACM asked the audience how many had seen at least one version of Hypertext on Hypertext.
The KMS version was presented by Elise Yoder. It had 520 nodes structured as far as possible according to the original hierarchical structure of the articles in sections and subsections. One of their chief editorial contributions was an index of topics. They also built a combined bibliography. The whole process turned out to be much more time consuming than they had thought it would be. They did not have automatic facilities for anything, not even for creating the more routine links. The effort was one person for six weeks, and the most time-consuming activity was adding "value-added" links between articles.
Nicole Yankelovich presented the HyperCard version done at Brown University's IRIS institute. They normally work in their own Intermedia system so they undertook this project to learn about HyperCard. Therefore they had to learn the tool at the same time as they were developing the hypertext. The overall structure had 11 interconnected stacks with an average of 40 cards each. The total number of physical nodes was 424 cards but they only considered the 11 stacks as their conceptual nodes.
The Hyperties version, presented by Ben Shneiderman from the University of Maryland, had 307 nodes. He added more information to the package than just the original articles, such as the full IRIS hypertext bibliography and the sections of my trip report from Hypertext'87 dealing with the talks later rewritten for the Communications.
They had chosen CGA as their graphics platform to ensure the universality of the Hyperties version. In retrospect he regretted this because the graphics were of such low quality in some cases that they had to add a disclaimer saying that the figure was better in the original printed version. This was in spite of their having redrawn all figures for the CGA medium.
The total development effort for the Hyperties version was two people half-time for six weeks.
I asked the panel members what they would have done differently if they had had more time. Yankelovich would have added more meaningful information beyond the original set of articles. She would also have had a HyperCard programmer get things to work in the way she really wanted, such as building a better navigation aid. Shneiderman would have added better content-oriented links, and Yoder would have liked an extra round with the original authors to get feedback on the conversion, since they got a lot out of the single round they had.
The papers from the CACM hypertext special issue may not have been the ideal set for a hypertext: it was a fairly small set, and the papers had been selected for the journal exactly because of their diversity, so there may not have been many interesting links between them. This led Bob Glushko to ask how a hypertext version of the full proceedings of the Hypertext'89 conference would be different. Nicole Yankelovich's answer was that it would not be that different, but since the scale is much bigger, there might be more value added. Indeed some preparations for a hypertext proceedings have already been made: the authors submitted their papers in machine-readable form, and the audience was asked to provide lists of potential links between the papers.
Missing Aspects of Hypertext
At the closing panel at the Hypertext'87 conference, Frank Halasz listed five domains/applications which he had not seen at that conference but would like to see in the future:
Now, two years further down the path, we did see a paper on the use of hypertext for lawyers, but the other four groups were mostly missing. There were several papers on information retrieval, but they were mostly focused on how to apply methods developed for searching huge bibliographic databases to searching hypertext structures. Nobody really talked about how to use hypertext in regular libraries (though some work is being done, such as the Danish Book House project).
Publishers are still extremely conservative with respect to publishing hypertexts. Shneiderman said that it had been a struggle to get his Hypertext Hands-On! book into bookstores because of the disks in the back. Bookstores are nervous about the disks since it has usually been the store's responsibility if books were damaged. Addison-Wesley has recently changed its policy and will accept damaged disks back and replace them instead of putting the responsibility on the bookstore.
Greg Kimberly also complained that it is difficult to get hypertexts out. Software dealers do not want to deal with $30 items, and bookshops do not know what electronic books are. Therefore he felt that many of the more interesting hypertexts in the near future will come out of universities because they do have the distribution channels. The lack of hypertext publishing certainly seemed to be the second theme at this conference (in addition to the more positive theme of integrating hypertext with the rest of the world mentioned above). Maybe the problem will solve itself as hypertext systems become more integrated (and thus more widely used) and as interchange standards appear (making it cheaper to produce hypertext documents for larger markets).
Finally, with respect to the two non-university types of learning, there were not very many examples at the conference. But this is probably more related to the type of people who attend scientific conferences than the true status of the field. For example, Harry McMahon and Bill O'Neill from the University of Ulster showed a very nice example of having pupils in elementary school construct interactive fiction at the U.K. Hypertext'2 conference, and Per-Olof Nerbrant from Ericsson Telecom showed a system at the first Swedish Multimedia conference for communicating a customer-oriented service attitude by interlinking corporate policy statements and videos of customer experiences.