The conference proceedings can be bought online.
Bill Buxton (University of Toronto) opened the conference by stressing that computer-human interaction work is a critical resource for improvements and that we are going to have significant impact on society-if only we let society at large know what we are working on. I could add that it is simply amazing how many people participate in the public debate about computers and subjects such as computer education on the basis of their experience with obsolete, "user-hostile" systems. Buxton warned that we have to realize that computer phobia is real and that we have to work to prevent it from happening and destroying the positive impact of modern user interfaces. We have to remember that we are not just designing computers-we are designing quality of life. So we have to strive for excellence, and as an example, Buxton showed a video film from Pixar which was the second computer-generated film ever to be nominated for an Oscar. This was an extremely cute film of two animated Luxo lamps, and the technical quality was so good that one person (who came late and did not hear Buxton's introduction to the film) later asked me whether the pictures had been computer generated or were photographed.
It Fits Like a Glove
-of course it does; it is a glove: DataGlove from VPL Research was probably the most innovative interaction device shown at the conference. You put it on like a normal glove, and it then acts as an input device that senses finger movements and hand position (3 coordinates) and orientation (3 coordinates) in real time. They are even experimenting with providing tactile feedback in the glove so that it would feel different to bend your fingers depending on whether you were "grasping" a data object with the glove.
The DataGlove had three applications at the time. The first was to substitute for medical measurements of the finger-movement ability of people who have certain diseases. Measuring how much people can flex their fingers is currently done with fairly expensive and awkward equipment. Instead, patients can just put on a DataGlove and let the computer make the measurements.
The second application was probably the most interesting: Direct manipulation of 3-dimensional objects. They had a 3-D CAD program running on the Macintosh where the user could use the DataGlove to grasp a 3-D object (e.g. a box) and move and rotate it by moving and rotating the hand. You would then let go of the object by spreading your fingers, and the object would be in its new position. The 2-D computer display showed a projection of the simulated 3-D space in which the DataGlove operated, and the position and rotation of the user's hand was represented by a (somewhat sinister) skeleton hand on the screen. The real advantage of this system is that it introduces as direct a manipulation as possible without 3-D holographic projection of the data space. In current systems using a mouse or other 2-D locator device, 3-D operations are somewhat indirect.
Finally, the third application was to use hand- and finger-motion tracking as an input device. The demo showed a finger-painting system where you painted on the screen by moving your finger. The screen was cleared by spreading your fingers. This was not all that impressive since it was somewhat like the Videoplace system.
Noobie: A Furry Computer to Play With
One of the most talked-about events at the conference was the demonstration and exhibition of Noobie-The Furry Computer by Allison Druin from the MIT Media Lab. Noobie is a computer intended for small children who don't know how to use a keyboard-and who may not even like to use a mouse. The "input device" is simply a huge (somewhat larger than a person) plush animal (designed with help from the Muppet Show). The user (child) sits in the lap of the animal or crawls all over it. When the user squeezes some part of the animal, it acts as input to the computer which responds accordingly. At the moment the feedback consists of a drawing of another animal on the computer screen looking somewhat like the physical stuffed animal. But the computer-animated animal can change its parts (e.g. from hands to claws), and the parts of the computer animal will cycle through its different forms when the user squeezes the corresponding parts of the stuffed animal.
The problem with this approach is that it probably does not teach the child algorithmic thinking or any general problem solving. The advantage is that the computer feels very responsive since all feedback is immediate, with no delayed programming effects. It is also an open question whether Noobie would motivate children for extended use. Certainly the children (of all ages) present during the demonstration had great fun. It could be used to generate positive feelings towards computers, and of course it could probably also be used for more advanced interaction techniques. Certainly the concept of having the entire animal as the input device for a computer is both novel and challenging.
Noobie is implemented using a Macintosh which is buried inside the stuffed animal. The screen is visible in its stomach. The squeezing input is implemented with sensors throughout the animal which are connected to the Macintosh as though each sensor were a key on the keyboard. This has the advantage that the Macintosh does not have to be modified, but it also means that a squeeze is simply sensed as an on-off action (i.e. without pressure sensitivity).
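Since each sensor looks like a keyboard key to the Macintosh, the software side of this design reduces to mapping "keystrokes" to body parts. A minimal sketch of that idea (the key assignments and function name are invented for illustration):

```python
# Hypothetical sketch of the Noobie input path: each squeeze sensor is wired
# as though it were a key, so the software just maps "keystrokes" to body
# parts. Because a key press is binary, no squeeze-pressure information
# survives. (The key-to-part assignments below are invented.)
SENSOR_KEYS = {
    "a": "left paw",
    "b": "right paw",
    "c": "nose",
    "d": "tail",
}

def handle_keypress(key):
    part = SENSOR_KEYS.get(key)
    if part is None:
        return None  # not a squeeze sensor
    return f"cycle the on-screen animal's {part}"

print(handle_keypress("c"))  # → cycle the on-screen animal's nose
```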
Looking Good, Informing Great (Edward Tufte)
Edward Tufte from Yale University gave the opening speech on information design and statistical graphics. The talk was rather similar to Tufte's book The Visual Display of Quantitative Information which I had already read (and can highly recommend). Tufte advocated simplicity of design. He wanted richness and complexity of the contents and substance of a graph but no "self-advertising design".
In some sense I feel that Tufte is too much of a fanatic in his opposition to fancy graphics (made by computer or otherwise). Often it is a good criterion to maximize the information content of graphs, but attention-getting "business graphics" surely also has a place. Sometimes you simply get in a good mood by seeing a flashy graph. But of course there is no excuse for misleading graphics where you e.g. use the size of two-dimensional objects to depict changes in a one-dimensional variable (as in the figure where B is two times as large a number as A but looks 4 times as big).
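The arithmetic behind this classic distortion is simple: when a pictogram is scaled in both dimensions in proportion to the data, the area the eye perceives grows with the square of the data ratio. A small sketch (the function name is invented for illustration):

```python
# If a pictogram is scaled in BOTH dimensions in proportion to the data,
# the perceived area grows with the square of the data ratio.
def area_distortion(value_a, value_b):
    """How many times bigger B looks than the data justify."""
    linear_ratio = value_b / value_a   # the honest one-dimensional ratio
    area_ratio = linear_ratio ** 2     # what the eye sees for a 2-D shape
    return area_ratio / linear_ratio   # the distortion factor

# B is twice A, but its pictogram covers four times the area:
print(area_distortion(10, 20))  # → 2.0
```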
On the other hand, Tufte was not so worried about the classical problem How to Lie with Statistics since he felt that our "lie detectors" are probably better for graphics than they are for words. The preoccupation with showing the Truth has sidetracked the design of statistical graphs-e.g., the principle of always showing the 0-point on the y axis may lead to less informative graphs.
The basic challenge of graph design is to escape the 2-D Flatland of paper and the computer screen to represent the multi-D richness of our complex world. We need to add dimensionality. One approach advocated by Tufte is micro-macro reading-the ability to read a graph from the macro view to grasp the total picture and then to look at selected details from a micro view. As an example, Tufte distributed a reprint of a tourist map of Manhattan: One of these maps where all buildings are drawn as they look rather than as colored squares. The macro view of the map gave you an idea of the New York skyline (you could never mistake this kind of map of Manhattan for a similar map of Copenhagen) while the micro view allowed you to identify your own office window or your favorite news stand.
Psychology as Input to System Design (Tom Landauer)
Tom Landauer from Bell Communications Research (Bellcore) is one of the grandfathers of CHI research and gave a keynote talk entitled Psychology as a Mother of Invention about the possibility of using psychological research results to drive the design of new products (rather than the traditional method of starting with the design and then-two weeks before the release date-rushing through a few evaluative studies). Landauer wanted to move our focus from that of usability (of systems whose functionality is already chosen) to usefulness. In other words: What computer products do we need in the first place? We should take responsibility for inventing functionality to help people solve problems.
The first solution to the interface problem has been to use the extra CPU power available in modern computers to be more generous with help and other interface features. ("Burn cycles, not people," a quote from Daniel Nachbar). We also have to get away from the egocentric intuitive fallacy of believing that everybody else can see what you see and that they therefore will be able to use your "obvious" interface design with no problems. These changes are taking place, and examples were presented in other talks at the conference. We are also getting design guidelines as a distillation of our user interface experience.
So now we know how to solve a number of user interface problems. The next step according to Landauer is to make computers useful to do things people could not do before: Focus on providing service to the user rather than just on the surface of the system in the form of the user interface. We should do synthesis by analysis since once you know what the (real) problem is, the solution is normally easy. Landauer presented the following three methods to achieve this goal:
Failure analysis. Look at what goes wrong at the moment. As an example Landauer discussed library catalog lookup where people get the wrong result half of the time. If you just ask people, they say that the catalog is OK since they can't imagine how it could be done better. But their studies showed that the reason for the many wrong results was that people look up under other words than the one used in the index. This is because there is so little agreement in naming: There is only 10-20% probability that two people will use the same name for the same thing. So the solution was to introduce unlimited aliasing by letting things be known by as many names as people want. This is prohibitive with index cards (store 20 copies of each card) but trivial with computers (just add pointers).
Individual differences analysis. We can look at what kind of people have trouble and what kind of people have success at doing some task. Then we can improve the tools such that everybody can be successful.
Time profiling. We can look at how people spend their time and then focus our attention on the important things to change. In one system they studied, users spent 53% of their time on error tasks and only 47% on error-free tasks (i.e. productive work). So in this system it would be important to either reduce the number of errors or to speed up error recovery significantly.
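Returning to the failure-analysis example, the unlimited-aliasing scheme Landauer described really is trivial with a computer: every extra name is just one more index entry pointing at the same record. A minimal sketch (the names and sample data are invented):

```python
# Sketch of "unlimited aliasing": every alias is just one more index entry
# pointing at the same catalog record, whereas a card catalog would need a
# physically duplicated card per alias.
catalog = {}  # record id -> full catalog record
index = {}    # any name a user might try -> record id

def add_record(record_id, record, names):
    catalog[record_id] = record
    for name in names:
        index[name.lower()] = record_id  # a pointer, not a copied card

def look_up(name):
    return catalog.get(index.get(name.lower()))

add_record("Q123", {"title": "Introduction to Typography"},
           ["typography", "typesetting", "fonts", "type design"])
print(look_up("Fonts"))  # any of the aliases finds the same record
```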
Landauer wanted us to invent new tools for thought to augment our cognitive ability. One such example is the spreadsheet, but there are very few other examples. The best examples of cognitive tools are arithmetic and reading and writing. The technology for these two tools is marks on pieces of paper, so they are "hand tools for the mind." We should now get power tools for the mind.
Computer tools for reading have not been much good, as reading from computer displays has until now been slower than reading from paper. Maybe better displays will alleviate this problem. We are also getting new reading tools such as hypertext, which has been discussed for many years but has not seen much use until now. The problem here is that we lack an analysis of what people really want/need to read.
An important precondition to being able to impact design is to have the right working methods. Judith Olson (University of Michigan) was the discussant at the paper session on Methodological issues and presented the following list of tools for researchers and designers in computer-human interaction:
One tool (for analyzing usability) was presented by Wendy Kellogg (IBM Yorktown): Scaling techniques for analyzing users' conceptual models of systems. Users were asked to sort cards with the names of different parts of a document formatting system into piles according to their perceived similarity. Not surprisingly, there was considerable difference between expert and novice users in their models of the system. But we might not have expected Kellogg and Breen to find 23% of the system concepts misdefined by the expert users. This may partly have been because of a weakness in their experiment, since they defined the reference model of the system from the manual instead of by testing the original designers of the system. Anyway, their result must have some implications for how to write the next version of the manual (either try to get users to acquire the "correct" model or redefine the way the system is explained if the expert users' model is actually OK).
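The aggregation step behind such card-sorting studies is straightforward: count, over all participants, how often each pair of concepts lands in the same pile, and use those counts as the similarity matrix fed to the scaling or clustering technique. A sketch (the concepts and sorts are made up):

```python
from collections import Counter
from itertools import combinations

def similarity_from_sorts(sorts):
    """Count how often each pair of concepts lands in the same pile.

    `sorts` holds one card sort per participant; each sort is a list of
    piles, and each pile is a list of concept names.
    """
    pair_counts = Counter()
    for piles in sorts:
        for pile in piles:
            for a, b in combinations(sorted(pile), 2):
                pair_counts[(a, b)] += 1
    return pair_counts

# Two made-up participants sorting four concepts from a document formatter:
sorts = [
    [["margin", "indent"], ["font", "point size"]],
    [["margin", "indent", "point size"], ["font"]],
]
print(similarity_from_sorts(sorts)[("indent", "margin")])  # → 2
```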
Another tool (mostly for internal use among CHI researchers) was the scenario method presented by Phil Barnard. Barnard views a scenario as an idealized, simplified description of a specific instance of user-computer interaction: An established phenomenon in human factors which is exemplified in a way that allows researchers to take the scenario into account when discussing new models (e.g. cognitive models). A proposed model must be able to account for the events in the scenario by some sensible explanation. This is a way of saving researchers lots of time in going through experimental tests of their new models: First try the encapsulated essence of previous experiments and see if your model can stomach them. As Barnard put it: "We can turbocharge the tortoise of science" (referring to the keynote talk by Allen Newell at CHI'85, who said that the race is between the tortoise of cumulative science and the hare of intuitive design).
What we are now waiting for is an annotated catalog of good interaction scenarios to really speed up our respective tortoises. It even seems to me that this method could be of use to practitioners too (maybe it will not speed up the hare-but it could get it to run in the right direction): It is well known that examples are much easier to understand than abstract guidelines, so maybe a good set of interaction scenarios could show designers some of the things to take into account.
The Social Dimension Hostage (Rob Kling)
It seems that every user interface conference (CHI, INTERACT, etc.) has to have one keynote speaker who acts as a hostage to the organizational and social impact interests. Once we have gotten this talk over with, we can go back to being techno freaks and look at neat menu systems.
This year Rob Kling from the University of California had this thankless job. He discussed the familiar automobile metaphor for user interfaces and noted that in the world of real cars, it is not enough to optimize the individual car. Even if he owned a Porsche, he could still not get to work much faster if the freeway was jammed because of other people. And this problem would not even be solved if everybody drove Porsches. Cars operate in a world of rules and regulations: As soon as we leave our own driveway, we see street signs.
As a conclusion to his discussion of the car metaphor, Kling recommended that we use urban design as our metaphor for building computer systems rather than architecture, which is the most common current metaphor. On the other hand, Kling did not want the audience to conclude that he felt that good user interfaces were not useful. The point was simply that they are not sufficient since computerization is socially mediated and takes place in an interaction among many people and groups. An example of this is the ZOG computer system at the USS Carl Vinson: It was a failure in real use because the system assumed that all Navy personnel were committed to total data validity-and some were not.
Another concrete example where the social context determines the user interface is the use of text processing in university departments. The secretaries view it as a "text factory" and don't want unneeded complexity such as in UNIX and TeX, while the faculty members (especially in a computer science department) want flexible and programmable systems.
Real World Design
Very à propos Kling's talk, a group from Uppsala University in Sweden had earlier at the conference presented a paper entitled "The Interface is Often Not the Problem." They had studied the users of a database system which performed poorly. Then they had redesigned the database user interface to be much better. And people still did not use the system! The explanation turned out to be that they lacked confidence in the validity of the results of database queries. Users were not able to judge whether the results were reasonable, and they did not know whether their use of logic to formulate the queries was correct. In another situation the Uppsala group did solve a usability problem: They redesigned the organization of a health care unit into three smaller units so that the nurses using the computer system would be able to have personal knowledge of the situation of the patients and physicians.
Mary Beth Rosson (IBM Yorktown) presented the results of a study of design practice in the real world. It would be extremely useful to know more details about what aspects of user interface design result in usable interfaces and what aspects are a waste of time (or possibly harmful). But unfortunately she did not have such clear-cut results. This is not surprising considering how hard a problem this is to study and considering how few other results are available. So this project represents a step forward from knowing very little to actually knowing a little bit.
They had interviewed 22 designers about their experience. It seemed that about half had used a phased design model while the other half had used an incremental design model of letting the system evolve during the design. There were very few cases of early user testing. Most empirical work had been done to evaluate designs rather than the possibly more important point of learning something about users to drive the design. Most designers expressed the following opinion about working with human factors people: "Yeah, it seems to be useful but it was too late..." Furthermore, many designers had difficulties separating the user interface and the functionality. For them, the user interface is "what the user does" and not just the form of the dialog.
Intelligence Should Be in the User's Head
I have often complained that too little human factors evaluation is being done of expert systems and so-called intelligent interfaces. Well, at least this year's CHI had a panel session entitled Intelligence in Interfaces, so I sat down ready to hear all about how to use AI to achieve better interfaces. Actually, all panel participants agreed that the goal should not be to put intelligence into the system. Rather, the intelligence should reside in the user, and the system should work as a tool. Tom Malone (MIT) used the slogan "naked systems" to signify not dressing up systems in fancy intelligent interfaces. Instead one should open up and expose the knowledge and reasoning in the system to the user so that users can use their own intelligence to do what the system cannot do.
Allen Cypher (Intellicorp) would prefer that we keep intelligence out of the user interface and put it in the application itself. One example from Intellicorp was a structure editor for molecules: It knew about molecules in a way a simple graphics editor could not and would e.g. show 4 different views of the same molecule on the screen at the same time. The interaction techniques did not show any signs of intelligence, but the system had semantic domain knowledge.
John Seely Brown (Xerox PARC) advocated three design goals:
Design for Guessing - the user should be able to say "let's try..."
Design for Group Think - we should engineer the infrastructure to let each of us tap the mind of the group, thus facilitating learning
Design for Bootstrapping - have self-explanatory systems that people can use in a lifetime learning situation in the workplace.
Bob Neches (USC Information Sciences Institute) did suggest a knowledge-based tool to help in the design process. He would force designers to use a common vocabulary by storing the definition of the different parts of the user interface in a knowledge base. Consistency between different parts of a user interface is certainly a big problem-sometimes because designers want to use their own "good" ideas, but most often because they do not realize that a design decision in another part of the project will conflict with their part of the user interface.
Ben Shneiderman (University of Maryland) commented that the panelists had talked about intelligence in the design of interfaces but not about visible presentation of the actions available to the user. Many systems show the objects visibly on the screen but users still have to remember the commands (what you can do with the objects). Shneiderman thought that we could do better than just menus.
One of the paper sessions was officially entitled User Interface Metaphors but quickly became known as the Xerox Session as all three papers were by PARC staff. The session discussant, George W. Furnas, said that his outside view of PARC was that their empirical work was OK but that their design work was really great. He suggested that the reasons were that they
have a tight coupling between computer science and psychology
leverage cumulative design (instead of starting from scratch each time)
perform careful observation and analysis of system use followed by hacking, hacking, hacking
are very metaphorical.
Randy Smith (now at SunLabs) got big laughs from describing the Xerox metaphor in his Alternate Reality Kit (ARK) where a button marked "Xerox" is used to make copies of objects. The ARK shows simulations on a graphical display and allows the user to interact very directly with the simulated objects. One example is a simulation of a solar system with small balls as "planets." The metaphor here is that the computer system works like everyday objects so that you can pick up a planet with the mouse (the cursor is shown as a drawing of a hand on the screen-it changes shape when you grab things) and you can throw a planet if you want to add speed to its orbit. People seem to learn this fairly quickly as it is a literal application of the metaphor (even though we don't go around making photocopies of the Moon every day). It has of course been recognized for some time that metaphors can also be harmful in that they limit access to more advanced features of a computer system. To overcome this problem, Smith introduces the concept of magic, which is defined as those features which cannot be explained by the metaphor. Each element of magic in a design requires its own explanation and thus adds heavily to the learning time. An example of magic in the ARK is the law of gravity in the solar system simulation. The (abstract) concept of the constant of gravity is represented as a (physical) slider which can be used to increase or reduce the force of gravity.
Frank Halasz (who was with MCC during the conference but is now back working at PARC) presented the NoteCards system which was intended as a vehicle for studying the idea processing task. The research goal was to assist analysts of the CIA kind in writing better reports on the basis of lots of collected information. They used the following model of authoring: 1) Collecting sources from databases (computerized tools available) and from brainstorming (currently no tools). 2) Extracting ideas (currently no tools). 3) Analyzing and structuring ideas (currently no tools). 4) Organizing the presentation (tools: outlining processors). 5) Writing the document (tools: text editors). 6) Producing the document (tools: page layout programs).
NoteCards is intended to address the question of the missing tools. Since they did not have experience with existing tools for these tasks, they had to change the way the system worked as their understanding of the task changed. Currently NoteCards is used by about 70-100 users. Most users actually find NoteCards hard to use, first of all because playing with ideas is hard work! Other reasons are that most tasks really do not need the full power of computerized idea processing and that the current system has limitations such as screen space (not big enough to display the full graph of ideas). Another problem is that NoteCards is too general a system which does not include specialized support for special tasks such as legal reasoning. But to support the way a lawyer works, you have to know how a lawyer works! The current system can be tailored by the user, but this is not good enough for non-programmers.
NoteCards requires the user to explicitly segment ideas into separate notecards which must be given a title and a classification. Often this is hard to do in the early stages where people do not yet know what they are analyzing. Another problem with the computerized representation of ideas is that they all look the same (typewritten cards). There are no coffee stains, paper clips, or other small reminders of the context in which the idea was generated.
Stu Card and Austin Henderson gave a collaborative talk in which they took turns presenting the theoretical issues and implementation issues of the Rooms concept. The general problem is that the metaphor of personal computer desktops is somewhat mistaken. One real desktop corresponds to 42 Macintosh screens. So a single display cannot hold all the information we want to use. Instead they introduced the concept of a room which is a full display holding certain windows which have been grouped for use in some task. When you shift task, you move (through a "door") to another room which has the windows useful for that other task.
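The Rooms idea can be summarized as a small data structure: a room is a named group of windows belonging to one task, and going through a door simply swaps which group fills the display. A toy sketch (class and method names invented):

```python
# Toy sketch of the Rooms idea: a room is a named group of windows for one
# task; going through a "door" swaps the active group on the display.
class Rooms:
    def __init__(self):
        self.rooms = {}     # room name -> windows shown in that room
        self.current = None

    def add_room(self, name, windows):
        self.rooms[name] = windows

    def go_through_door(self, name):
        self.current = name
        return self.rooms[name]  # the full display now shows these windows

ws = Rooms()
ws.add_room("mail", ["inbox", "composer"])
ws.add_room("writing", ["editor", "outline", "references"])
print(ws.go_through_door("writing"))  # → ['editor', 'outline', 'references']
```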
This presentation showed a creative use of a videotape to accompany the talk itself. Interactive systems are often hard to describe using words only: You have to see them. So the video program is an important part of every user interface conference. More and more of the traditional paper presentations also include short video clips, but the Rooms presentation was illustrated throughout by a videotape instead of by overheads or slides. This of course meant that the speakers had to synchronize their talk with the tape, but they did that with no problems.
The Politics of Human Factors
Panels discussions are always good at CHI, and one of the best this year was the panel on the politics of human factors. The word "politics" was not meant as party politics but as company politics: How can we get computer companies to do what we want them to do?
Steve Boies of IBM found three problems with human factors people: 1) Many times behavioral people ignore economic constraints such as the need to market a product before it becomes obsolete. 2) Often their recommendations are difficult to implement compared with their importance. 3) Sometimes the quality of human factors recommendations is too low and opinions are confused with expertise. In one case the first human factors person to be consulted in a project recommended shifting a key from the right to the left on the keyboard. And later, a second human factors person recommended that the same key (which was now on the left) be shifted to the right. But in spite of these problems, Boies (who has a user interface background himself!) said that human factors was so important for computer companies that they should fund it at the same level as e.g. CMOS. Also, he suggested that cheap but good results could be gained from instrumenting systems to measure how they are used in real life.
Charles Grantham from Pacific Bell recommended that human factors should be placed at the corporate level in a company because this is the only place with the overall view required to balance all the tradeoffs. The individual project managers want their product out the door and are tempted to suboptimize user interfaces by choosing solutions which may work in their specific product but which destroy the commonality between systems, thus impairing transfer of user learning. John Whiteside from DEC commented that the problem with this approach would be that the higher you are placed, the more thinly you are spread as you touch more and more projects. Grantham's answer was that the value of going to a strategic view of user interfaces would outweigh that disadvantage.
Dennis Wixon discussed the DEC experience with the concept of the Usability Engineering Development Cycle, which in their experience usually results in a 30% improvement in operationally defined usability indices and the solution of about 80% of the usability problems. They simply iterate user interface development and measure usability until reasonable goals are achieved. At some point they reach the point of diminishing returns, but they showed a nice curve of their experience with interface improvements as the result of their method. Two comments on this method were: 1) [Boies] A company like IBM will not switch technology from proven traditional systems unless at least 100% improvement can be shown. Otherwise it is just relatively insignificant tuning. 2) You cannot develop measurable usability specifications in the case where you try to get people to think of totally new systems.
Other comments were that human factors is a legitimate part of engineering and that you only do good engineering if you take human factors into account (Rubinstein, DEC) but that we just don't have the same track record of performance as other engineers at the moment to prove a steady level of improvements as the result of our human factors work (Boies, IBM).
A lot of discussion centered on how to trick management and/or developers into doing what we want: "whoever gives the best demo wins"-so give a really neat demo if you want to convince management to do things your way. Let the developers feel collective ownership of the user interface solutions-the case where a user interface expert is the only person who "owns" the user interface will often end up as "you can write the HELP-messages." Unless human factors is an item on the development team's agenda, it will be ignored.
Surprisingly, nobody mentioned the possibility of having the users put pressure on computer companies to develop usable products. It could be a real possibility that big customers got together with a team of user interface experts to develop requirement specifications for user interfaces as well as for other aspects of computer systems. It could also be a job for labor unions to require better user interfaces as part of their quest for better working conditions. Unfortunately, the major trade union for office workers in Denmark is currently focusing its efforts in the computer field on the non-issue of radiation from terminals. They will probably succeed in getting even lower low-level radiation in the workplace but at the cost of poorer working conditions as effort and money is spent on terminal radiation rather than on e.g. graphical workstations allowing a more varied work situation.
Since I was chairing the Workshop on Classification of Dialog Techniques, I did not attend as many tutorials during this CHI as I normally like to do. At least some tutorials are good because they offer a chance to dwell at more length on a topic than is possible in a single paper or panel presentation. And the tutorial program at this CHI looked better than ever-partly because of a number of new tutorial speakers who may have been at the conference because it was also this year's Graphics Interface Conference.
I only attended the tutorial on documentation graphics. This is an area which is growing in importance as desktop publishing spreads to more and more places. Just a few years ago, terms like font and kerning were only used by very few, specialized people, and now they are part of the everyday language of both computer scientists and many ordinary users. Chuck Bigelow emphasized that there are very few artifacts in use that are the same today as they were 500 years ago-except letters. So Bigelow wanted us to use the long tradition of typeface design and use-of course taking into account the present limitations of screen and output resolution. Sometimes even small improvements in technology can lead to quite large improvements in readability. One example mentioned by Bigelow was that 100 dots per inch would be a significantly better screen resolution than the 75 or 80 dots per inch now common in most graphics displays.
Much of the tutorial was spent discussing the two page-description languages, Interpress and PostScript. These languages are fairly hard to read (I don't think their designers could have taken the Psychology of Programming tutorial!), which may not be a problem as long as they are only used as an internal representation shipped over the network from the computer to the printer. But more and more applications are now being marketed which require the user to write or edit PostScript programs directly. So maybe a new and more readable language is needed for presenting a page description to the user. We can still use the standardized PostScript for transmission to the printer.
Several other, more technical issues were covered at this tutorial [i.e. graphics programming]. But one that I would like to single out was the AI-based illustration design system by Jock Mackinlay from Stanford University (now Xerox PARC). This system currently has 200 rules embodying knowledge about good layout and data presentation. Some fairly nice examples of output from the system were shown, though it is still far from perfect (and of course is no good at "creative" design such as that done by good advertising agencies). Anyway, this kind of system is promising in three ways: 1) It could be used for totally automatic generation of graphics in situations where the underlying data changes so rapidly that the intervention of a human designer is not feasible (e.g. foreign exchange rates). 2) It could be used to generate better first drafts of graphs than those suggested by present "business graphics" systems. A human designer could then refine the graphs if necessary, but with far less work than is currently needed. 3) The system could alleviate the problem of having people (such as myself!) design documents without a sufficient graphics background. We very often see awful designs of reports produced on desktop publishing equipment. If an AI system could design the layout (or critique a human designer), much would be gained.
It is interesting to look back and compare the CHI 83, 85, 86, and 87 conferences, all of which I have attended. Several of my colleagues and I still think that CHI'83 was the best: the most exciting atmosphere and the most classical papers. Also, the keynote speeches (by Don Norman, Pat Wright, and John Seely Brown) were more memorable than those at recent CHIs, and they even continue to be of use as valuable references (this is mostly true of Wright's paper) since they are printed in the proceedings. Of course, it must be admitted that it may be natural for the first "official" conference in a new field to contain an especially large proportion of classical papers, but even in the category of "interesting but not classical" work, there was still lots of good stuff at CHI'83.
CHI'85, on the other hand, was not nearly as exciting, with mostly predictable papers. Not too many really new ideas or methods were presented. Even the videotape program was rather dull. The enthusiasm (or at least my enthusiasm) was back at CHI'86, which made you feel part of a field with plenty of action. As Ben Shneiderman said in his keynote talk: we are not just fooling around in small labs to write yet another paper-we are changing the way people work and relate to computers. Still, maybe there was a tendency for each of several established research groups to keep refining its own special methodology rather than for new and exciting methods to come up.
This last tendency was even more pronounced at CHI'87: certain people work in certain set ways, and there are not too many surprises. Maybe this is a sign of a rapidly maturing field.
The quality of the demonstration program has changed a lot over the years. Maybe I liked the format in '85 best, with demonstrations given as real talks to a large audience: I still remember the cheers and thundering applause after the demonstration of the Pinball Construction Set. In '86 I did not have as much time for the demos, but I still attended a few good ones (e.g. Javelin) shown using a video projector. Also, Myron Krueger's Videoplace game room was one of the aspects of CHI'86 that gave me a morale boost as a user interface researcher. This year I was too busy (literally running from one meeting to the next) to spend very much time on the system demonstrations, which were somewhat uninspiring: just a bunch of Suns etc. in a big room [some funny quotes at the Siemens exhibit, though]. On the other hand, this new format does offer more opportunities for audience participation during the demos, and I enjoyed talking to Allison Druin, the creator of the furry computer mentioned above.
It seems that the theme of this year's CHI was that we have to design the functionality of systems on the basis of human factors principles if we want usable systems. It is not enough to look at the surface features of the dialog.
Last year, the theme was the shift from looking only at the interaction between one user and one computer to looking at computer-supported cooperative work. I am not sure whether CHI'83 and '85 had any such themes running through a number of conference events; if any, it must have been the need for and success of empiricism at CHI'83 and the existence of rigorous research methods at CHI'85. In any case, each conference has been so rich that a single theme cannot describe it in full. It just seems to me that every year there is one subject that underlies a disproportionately large share of conference presentations and informal discussions among participants.
A Few Quotations
John Seely Brown: "The unexpected is the one thing to expect" [when users interact with computers].
Jack Carroll: "...using paper and ..... the thing you use to write with - my vocabulary is used up by now!" [towards the end of a day in which he had given several talks and panel presentations, including some as a last-minute stand-in].
Peter Carstensen: "If text processing is the white laboratory rat of traditional computer-human interaction research, then it seems electronic mail is the white rat of computer-supported cooperative work research."