Thomas Erickson
Human Interface, Advanced Technology Group, Apple Computer
(now at) snowfall@acm.org
This chapter argues that feedback can play two important roles in future
human-computer interfaces: coherence and portrayal. Coherence has to do
with human-computer dialogs that have many stages; it is what provides continuity
across the different stages of an extended dialog. Portrayal has to do with
the model that the system presents to its users. Portrayal is important
because it affects the user's experience with the system: how the user interprets
the system's behavior, how the user diagnoses errors, how the user conceives
of the system.
The chapter begins with an analysis of types of visual feedback, and the
roles that feedback plays in today's graphic user interfaces. Next, we examine
a commercial program with sophisticated functionality that illustrates two
problems that are likely to be common in future application programs. Finally,
we discuss an example of an interface design that illustrates the use of
feedback to address these problems.
A decade ago life was simple for the interface designer. Personal computers--at
least those used by ordinary people--were relatively straightforward. They
ran one and only one application program at a time. The program was passive:
the user specified an action and the computer did it. Most interactions
consisted of a series of unconnected action-response pairs: the computer
made no attempt to keep track of what the user had done. Human-computer
interaction occurred through a few input and output channels: the user typed
or used a mouse; the computer displayed text or graphics, or beeped.
Today things have changed. A user can run multiple application programs
at once, switching between them at will. Programs are no longer passive:
they may carry out tasks without direct supervision by their users; they
may interrupt their users to request information or to deliver results.
Human-computer interaction is much more complex: not only may the user be
communicating with several of the programs that are running simultaneously,
but some of those programs may be initiating the communication. Finally,
there are many more channels through which humans and computers can interact:
the user can type, use a mouse, use a stylus to write or gesture, and speak;
the computer can display text and graphics, synthesize speech, and play complex
sounds and animations. All of these factors impose new demands on the human-computer
interface.
What should interfaces of the future look like? How should they support
the increased complexities of human-computer interaction? As desktop computers
begin to offer voice recognition and speech synthesis capabilities, conversation
becomes an increasingly popular candidate for the interface of the future.
Certainly human conversation has many attractive properties. Multiple people
can participate in a conversation, taking turns, interjecting comments,
requesting clarification, and asking questions, all in a remarkably easy
and graceful interaction. And best of all, people already know how to converse.
Unfortunately, turning computers into conversants is a difficult challenge.
Consider some of the fundamental ways in which human-human conversations
and human-computer dialogs differ. First, the object of a human-computer dialog
is for the human to specify an action for the computer to perform; the object
of a human-human conversation is usually to accomplish more abstract ends
such as imparting information or altering beliefs. Second, human-human conversations
occur principally through the medium of speech, which consists of a serial
stream of transitory input used to construct and maintain a largely mental
model; in contrast, human-computer dialogs are mediated by an external,
visible representation, which can display information in parallel, and which
persists over time. Third, a human-human conversation is a two way process
in which the participants jointly construct a shared model (e.g., Clark
& Brennan, 1991). In contrast, a human-computer dialog is primarily
a one way process which results, at best, in the user understanding the
computer's model of the situation. In no real sense can the computer be
said to participate in constructing a model, or even to adjust its model
to approximate that of the user. Related to this point is that participants
in a human-human conversation are intelligent, whereas the computer is so
lacking in intelligence--about both the process and content of the dialog--that
even the term 'stupid' is a misnomer. When a human-human conversation breaks
down, human participants are typically aware of the misunderstanding and
take steps to repair the breakdown; when a human-computer dialog fails,
the computer is typically oblivious; it is only in a few well-defined situations--anticipated
by designers--that the computer can detect the misunderstanding and repair
the breakdown.
The basic difficulty is this: Because human-human conversations occur through
the transitory medium of speech, which produces no lasting, external representation,
considerable intelligence and continuous interaction and feedback between
conversants is required to effectively maintain the mental model of what
is occurring. Computers are far from having the requisite intelligence to
do this. Instead, I believe that the most promising approach is to use one
of the strengths of computers--their ability to produce a persistent visual
representation--to instantiate some of the more general properties of human
conversations.
With this approach in mind, I begin by presenting an analysis of the types
and roles of visual feedback used in today's graphic user interfaces. I
suggest that two uses of feedback, supporting coherence in multi-stage dialogs
and providing system portrayals, have important roles to play in making
future human-computer interfaces more conversational. Next, I describe a
commercial program with sophisticated functionality that illustrates two
problems that I believe will be common in future application programs. Finally,
I give an example of an interface design that illustrates the use of feedback
to address these problems.
In this section, I analyze some of the ways in which feedback is used
in the Macintosh graphical user interface (Apple Computer, Inc., 1992).
The goal is to provide some categories and language for talking about the
use of feedback in future graphical user interfaces. I focus mainly on temporal
properties of feedback; other chapters in this volume (Wroblewski et al.;
de Vet; Jacob) discuss other aspects of feedback in human-computer interaction.
In interface design the term "feedback" typically refers to providing
information relevant to the interaction in which the user is currently involved
(note that "feedback" is used in a more restricted sense by conversational
theorists). Feedback can be presented in a multitude of ways. It may be
visual, auditory, or tactile; it may be either ephemeral or relatively persistent.
Feedback may use multiple attributes of the modality in which it is represented--thus,
visual feedback may involve the use of text, graphics, color, or animation;
and of course, feedback need not be confined to a single modality. Examples
of feedback in graphic user interfaces range from simple beeps, to dialog
boxes, to animated pointers.
Feedback can be divided into three types based on its temporal relation
to the user's activity: synchronous feedback; background feedback; and completion
feedback. As I describe these types of feedback, I'll provide examples by
referring to the feedback that occurs during a single operation: copying
a folder that contains many files by selecting its icon and dragging it
to a window on another volume (figure 1).
Synchronous feedback is closely coupled with the user's physical actions;
in most cases, it is important that there be no perceptible time lag in
the coupling between the user's actions and the feedback. For example, the
Macintosh usually displays a pointer that moves in synchrony with the mouse.
On the Macintosh, synchronous feedback is the default state: at virtually
any time, a user's physical interactions with the system ought to--in some
way--be mirrored by the interface.
When a user copies a folder, several kinds of synchronous feedback occur:
the pointer is shown moving to the to-be-copied folder in synchrony with
the user's movements, the folder icon turns black when the user clicks on
it to select it, and the outline of the folder is displayed as it is dragged
to the new window, again in synchrony with the user's movements of the mouse
(figure 1a).
Background feedback is provided after the user specifies the action,
but before the system completes the action: it represents the activity of
the system as distinct from that of the user. Its basic purpose is to let
the user know that the system is carrying out the specified action. Originally,
when the Macintosh was single-tasking, the user could do nothing else during
this period; now the user can initiate other actions. It is important that
background feedback be provided whenever an operation takes longer than
about half a second.
In the folder copying example, after the user drags the folder to the new
window and releases the mouse button, it may take the system some time to
copy the contents of the folder: in this case, the system puts up a progress
indicator to assure the user that the system hasn't crashed, and to allow
some estimate as to how long the system will take to complete the operation
(figure 1b). The background feedback in this example also tells the user
two other things: the presence of a stop button in the progress indicator
tells the user that the operation may be interrupted; the presence of the
title bar along the top of the indicator tells those who understand the
Macintosh's visual language that another operation may be started before
this one finishes.
Completion feedback is simply an indication that the operation has been
completed or at least that the system can do no more (in the latter case
it may need more information, or an error may have occurred). Completion
feedback fulfills two purposes: it represents the new state of the system,
and it may be used to notify the user that a lengthy operation has been
completed.
In the case of the copy operation, an icon representing the newly copied
folder is displayed (figure 1c). Completion feedback differs from synchronous
and background feedback in one noticeable way: the other types of feedback
are usually ephemeral--they last only a short time, vanishing after the
operation is completed (although the idea of wear as feedback proposed by
Wroblewski, et al., this volume, can be viewed as giving synchronous feedback
some persistent components). Completion feedback often has components that
are persistent. The persistence of components of completion feedback can
serve as an important way of reflecting what has been achieved by a series
of operations.
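To make the three types concrete, here is a minimal sketch, in Python (the chapter itself contains no code), of a folder-copy operation that emits each kind of feedback. The callback names and the simulated copy loop are illustrative assumptions, not the Macintosh implementation; the half-second threshold is the rule of thumb mentioned above.

    import time
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class CopyFeedback:
        # The callbacks stand in for the drawing code of a real toolkit.
        on_synchronous: Callable[[str], None]       # mirrors the user's physical actions
        on_background: Callable[[int, int], None]   # reports the system's own progress
        on_completion: Callable[[str], None]        # announces the new, persistent state

    PROGRESS_THRESHOLD = 0.5  # seconds; the "about half a second" rule from the text

    def copy_folder(files: List[str], feedback: CopyFeedback) -> None:
        # Synchronous feedback: echo each physical action with no perceptible lag.
        feedback.on_synchronous("folder icon highlighted on selection")
        feedback.on_synchronous("folder outline dragged to the destination window")

        start = time.monotonic()
        for done, _name in enumerate(files, start=1):
            time.sleep(0.01)  # stand-in for copying one file
            # Background feedback: shown only once the operation has run noticeably long.
            if time.monotonic() - start > PROGRESS_THRESHOLD:
                feedback.on_background(done, len(files))

        # Completion feedback: a persistent result the user can now act upon.
        feedback.on_completion("new folder icon displayed in the destination window")

    if __name__ == "__main__":
        copy_folder(
            [f"file{i}" for i in range(100)],
            CopyFeedback(
                on_synchronous=lambda msg: print("[synchronous]", msg),
                on_background=lambda done, total: print(f"[background]  copied {done} of {total} files"),
                on_completion=lambda msg: print("[completion] ", msg),
            ),
        )

In a real toolkit the callbacks would drive drawing code; the point here is only the timing: synchronous feedback echoes the user, background feedback appears once the system's work runs long, and completion feedback leaves a persistent result behind.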
The typical role of feedback is to support the operation the user is currently performing. Moving the cursor in synchrony with the mouse allows use of the mouse to become an automatic process; background feedback assures the user that the system hasn't crashed, and often indicates how much longer the operation will take; completion feedback, of course, alerts the user that the operation has finished, and often provides a new representation on which the user may perform direct manipulation. The use of feedback for these purposes is essential in allowing users to gracefully complete operations. Ideally, skillful use of feedback permits users to perform operations automatically, without thinking about the details of what they are doing. For example, it's very natural to say 'Now click on the OK button'; only the rawest novice needs to be told 'Use the mouse to position the pointer over the OK button on the screen and then press the mouse button.' It is synchronous feedback that permits the user to meld the physical operations of moving and clicking the mouse with clicking the OK button on the screen.
A second role for feedback is to create coherence across the stages of
extended dialogs. An extended dialog is a series of operations all aimed
at accomplishing a particular, high-level goal. Examples of extended dialogs
include retrieving a useful set of records from a database, changing the
layout of a document, and reading and managing electronic mail. However,
today's computers have almost no awareness of extended dialogs: the fact that
one user-action follows another has no relevance; the system typically does
not recognize that the user may have a goal that goes beyond completion
of the current operation.
A limited example of supporting coherence in extended dialogs is the way
the Macintosh deals with some error conditions. For example, suppose a user
tries to empty the trash (this is graphical user interface parlance for
deleting files) when the trash contains a running application as well as
other files. The first stage in the dialog is when the user chooses the
"Empty Trash" command. In response, the system displays a standard
dialog box that tells the user how many files will be deleted and asks for
confirmation. Once the user provides confirmation, the system will attempt
to delete the files and will discover that one of the files is a running
application that we will call X. Since deleting a running application is
likely to be a mistake, the system initiates a new stage of the dialog:
it displays a dialog box that explains that the trash contains a running
application called X that it cannot delete, and gives the user the choice
of stopping or continuing (deleting the other files). The key point here
is that the system is still aware of what the user did in the previous stage
of the dialog, and gives the user the option of deleting the other items
in the trash and thus accomplishing as much of the original goal as possible.
While this seems like a sensible response, unworthy of special remark, the
fact is that today's systems would be more likely to abort the entire operation.
In general, today's systems do not recognize higher level goals, and do
not support incremental progress towards them.
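As a concrete illustration, here is a small Python sketch of this kind of coherence: the system remembers the higher-level goal ("empty the trash") and, when it hits a problem partway through, offers to complete as much of the goal as it can rather than aborting. The data structures and the ask() callback are assumptions standing in for the Macintosh's dialog boxes, not Apple's implementation.

    # A minimal sketch of preserving the user's higher-level goal -- "empty the
    # trash" -- across the stages of the dialog. The ask() callback stands in
    # for the dialog boxes described in the text.

    from typing import Callable, List, NamedTuple

    class TrashItem(NamedTuple):
        name: str
        is_running_application: bool

    def empty_trash(items: List[TrashItem], ask: Callable[[str], bool]) -> List[str]:
        """Delete what can be deleted, consulting the user when a problem arises."""
        # Stage 1: confirm the overall goal before doing anything.
        if not ask(f"The Trash contains {len(items)} items. Delete them?"):
            return []

        deleted = []
        for item in items:
            if item.is_running_application:
                # Stage 2: a problem is found, but the original goal is remembered;
                # the user may still accomplish as much of it as possible.
                if not ask(f'"{item.name}" is a running application and cannot be '
                           f"deleted. Delete the remaining items anyway?"):
                    break
                continue  # skip the running application, keep going
            deleted.append(item.name)
        return deleted

    if __name__ == "__main__":
        trash = [TrashItem("notes.txt", False),
                 TrashItem("X", True),          # the running application from the text
                 TrashItem("old report", False)]
        # Auto-answer "yes" to every dialog, just to exercise the sketch.
        print(empty_trash(trash, ask=lambda question: print(question) or True))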
As computing systems begin to manifest increasingly complex functionality,
it is becoming increasingly important that users receive feedback that allows
them to build up a mental model of the system. That is, rather than just
supporting the current operation, feedback can work in a global way, helping
the user understand not only the state of the current operation, but the
structure of the application program, and the ways in which the program
accomplishes actions. I call this portrayal.
An example of portrayal can be found in the use of background feedback in
an electronic mail and bulletin board program called AppleLink. After a
user launches AppleLink and enters the password, it accesses a modem and
connects to a remote, mainframe computer. Since it takes several seconds
to make this connection, AppleLink displays a connection storyboard showing
the stages in connecting to the remote computer (figure 2 illustrates two
states of the connection storyboard).
The connection storyboard plays two roles. First, it plays an operational
role, showing the user that the program is doing something and indicating
approximately how far along the system is. Second, the storyboard also provides
a portrayal by depicting a simple model of the system and the connection
process (although the model could be improved, as it contains some frivolous
and obscure elements). By watching the connection storyboard, users can
learn that the system is working over a phone line, that it is connecting
to a different computer, that it is using the password the user entered
to gain access to the other computer, and so on. None of this is immediately
useful information. However, if something goes wrong--there is trouble with
the phone system, or the mainframe is down--the user has a better chance
of understanding the problem.
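To illustrate the idea, here is a small Python sketch of a connection storyboard; the stage names and timing are guesses based on the description above, not AppleLink's actual panels.

    # A sketch of a connection "storyboard": background feedback that also
    # portrays a simple model of what the system is doing.

    import time

    CONNECTION_STAGES = [
        "Dialing the modem...",
        "Connecting over the phone line...",
        "Reaching the remote mainframe...",
        "Sending your password...",
        "Opening your mailbox...",
    ]

    def connect(show_panel=print, delay=0.3):
        """Step through the storyboard so the user sees both progress and process."""
        for number, stage in enumerate(CONNECTION_STAGES, start=1):
            show_panel(f"[{number}/{len(CONNECTION_STAGES)}] {stage}")
            time.sleep(delay)  # stand-in for the real work of each stage
        show_panel("Connected.")

    if __name__ == "__main__":
        connect()

Because each panel names a part of the process (phone line, remote computer, password), the feedback teaches the model even while it merely reports progress.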
This section has presented an analysis of feedback according to how it temporally relates to the activities of the user and the system. It identified three types: synchronous feedback, background feedback, and completion feedback. Feedback can play at least three roles in human-computer interaction. First, it can be used to support the user in smoothly completing the current operation. Second, feedback can be used to add coherence to a human-computer dialog by recognizing that users have higher-level goals, and by supporting extended dialogs by preserving information across the stages of the dialog. Finally, feedback can assist the user in forming appropriate mental models of the overall structure of the system and its processes: portrayal.
DowQuest (Dow Jones & Co., 1989) is a commercially available, on-line
system with sophisticated functionality. It provides access to the full
text of the last 6 to 12 months of over 350 news sources, and permits users
to retrieve articles via pseudo natural language and an information retrieval
technique called relevance feedback (Stanfill & Kahle, 1986). Relevance
feedback means that users instruct the system on how to improve its search
criteria by showing it examples of what is wanted. Relevance feedback allows
users to say, in essence, 'find more like that one.'
While the version of DowQuest described here does not have a state-of-the-art
interface, it has two characteristics of interest to us: it is based on
the assumption that its users will interact with it through multi-stage
dialogs; it appears to possess some degree of intelligence. These characteristics
are relevant because they seem likely to be true of many future computer
systems and applications, and because they both give rise to usage problems.
Let's examine the process of retrieving information in DowQuest.
The user begins by entering a query describing the desired information in natural language. As the user's manual says, DowQuest "lets you describe your topic using everyday English. You don't have to be an expert researcher or learn complicated commands." For example, the user might enter: "Tell me about the eruption of the Alaskan volcano." However, DowQuest does not really understand natural language; instead it uses only the lower frequency words of the query in conjunction with statistical retrieval algorithms. In the example shown, the system eliminates the words "tell," "me," "about," "the," and "of," and uses the other, lower frequency words--"eruption," "Alaskan," and "volcano"--to search the database.
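The mechanics being described can be sketched in a few lines of Python (the chapter contains no code). The stopword list and the crude occurrence count below are illustrative stand-ins for DowQuest's actual, unpublished statistical retrieval algorithms.

    # A rough sketch of the processing described above: common, high-frequency
    # words are dropped from the query, and only the remaining words are used
    # to search. The stopword list and scoring are stand-ins for the real algorithm.

    STOPWORDS = {"tell", "me", "about", "the", "of", "a", "an", "in", "on", "and"}

    def query_terms(query):
        """Keep only the lower-frequency, content-bearing words of the query."""
        words = [w.strip(".,?!").lower() for w in query.split()]
        return [w for w in words if w and w not in STOPWORDS]

    def score(document, terms):
        """A crude relevance score: how often the query terms occur in the document."""
        words = document.lower().split()
        return sum(words.count(t) for t in terms)

    if __name__ == "__main__":
        terms = query_terms("Tell me about the eruption of the Alaskan volcano")
        print(terms)  # ['eruption', 'alaskan', 'volcano']
        headlines = [
            "Alaska Volcano Spews Ash, Causes Tremors",
            "Bill Seeks to Impose Broad Limits on Interior Leasing",
        ]
        for doc in sorted(headlines, key=lambda d: score(d, terms), reverse=True):
            print(score(doc, terms), doc)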
In response to the initial query the system returns a list of titles
called the "Starter List" (figure 3 shows the Starter List for
the "Alaskan volcano" query). The list is ordered by relevance,
with the first article being most relevant, and so on; "relevance"
is defined by a complex statistical algorithm based on a variety of features
of which the user has no knowledge. While this list of articles may contain
some relevant items, it also usually contains items that appear--to the
user--to be irrelevant. The next stage of retrieving information is where
the real power of DowQuest lies.
Figure 3. The DowQuest Starter List for the "Alaskan volcano" query.

    DOWQUEST STARTER LIST                    HEADLINE PAGE 1 OF 4
     1  OCS: BILL SEEKS TO IMPOSE BROAD LIMITS ON INTERIOR...
        INSIDE ENERGY, 11/27/98 (935 words)
     2  Alaska Volcano Spews Ash, Causes Tremors
        DOW JONES NEWS SERVICE, 01/09/90 (241)
     3  Air Transport: Volcanic Ash Cloud Shuts Down All Four...
        AVIATION WEEK & SPACE TECHNOLOGY, 01/01/90 (742)
     4  Volcanic Explosions Stall Air Traffic in Anchorage
        WASHINGTON POST: A SECTION, 01/04/90 (679)
     * * * * *
In stage 2 of the retrieval process the user employs relevance feedback
to refine the query. A simple command language is used to tell the system
which articles in the starter list are good examples of what is wanted.
The user may either specify an entire article or may display an article
and specify paragraphs within it (in the "Alaskan volcano" example,
the user might enter "search 2, 3, 4"). The system takes the full
text of the selected articles and chooses a limited number of the most informative
words for use in the new version of the query. It then returns a new list
of the 'most relevant' items (figure 4). This second, relevance feedback
retrieval stage may be repeated as many times as desired. Because the real
power of DowQuest lies in its ability to do relevance feedback, it is in
the user's best interest to perform this stage of the query process at least
once, and preferably a couple of times.
Figure 4. The list returned by the second, relevance feedback search.

    DOWQUEST SECOND SEARCH                   HEADLINE PAGE 1 OF 4
     1  Air Transport: Volcanic Ash Cloud Shuts Down All Four...
        AVIATION WEEK & SPACE TECHNOLOGY, 01/01/90 (742 words)
     2  Alaska Volcano Spews Ash, Causes Tremors
        DOW JONES NEWS SERVICE, 01/09/90 (241)
     3  Volcanic Explosions Stall Air Traffic in Anchorage
        WASHINGTON POST: A SECTION, 01/04/90 (679)
     4  Alaska's Redoubt Volcano Gushes Ash, Possibly Lava
        DOW JONES NEWS SERVICE, 01/03/90 (364)
     * * * * *
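A minimal Python sketch of this second stage might look like the following; choosing "informative" words by simple frequency is an assumption standing in for DowQuest's statistical weighting, and the example texts are just the headlines from the figures above.

    # A sketch of stage 2, relevance feedback: the user points at good examples,
    # the system pulls a limited number of informative words from their full
    # text and re-runs the search with the expanded term list.

    from collections import Counter

    STOPWORDS = {"the", "of", "a", "an", "in", "on", "and", "to", "all", "down"}
    MAX_TERMS = 10  # "a limited number of the most informative words"

    def informative_words(example_articles):
        """Pick the most frequent non-stopword terms from the example articles."""
        counts = Counter()
        for article in example_articles:
            for raw in article.split():
                word = raw.strip(".,:?!").lower()
                if word and word not in STOPWORDS:
                    counts[word] += 1
        return [word for word, _ in counts.most_common(MAX_TERMS)]

    if __name__ == "__main__":
        examples = [  # 'search 2, 3, 4' from the Starter List
            "Alaska Volcano Spews Ash, Causes Tremors",
            "Air Transport: Volcanic Ash Cloud Shuts Down All Four...",
            "Volcanic Explosions Stall Air Traffic in Anchorage",
        ]
        new_terms = informative_words(examples)
        print(new_terms)
        # These terms would then feed back into the same scoring and ranking
        # step sketched earlier for the initial query.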
Users encountered difficulties due to two general problems: failure to support multi-stage dialogs, and unrealistic expectations of intelligence.
One problem with DowQuest was that although users had to go through two stages of dialog before getting the benefits of the system's power, the only support provided for extended dialogs was to display the number of iterations the user had gone through. In general, the system erased commands after they were executed, and provided no feedback on which articles had been accessed. Thus, users had to rely on their memories or, more typically, jotted notes, for information such as the text of the original query; which articles had been opened and read; which articles had been sent to the printer; which articles or paragraphs had been used as examples in relevance feedback; which titles in the retrieval list had shown up in previous iterations of the search; and so on. This missing information made the search process cumbersome.
Although no explicit attempt was made to portray DowQuest as intelligent,
new users of DowQuest generally expected it to exhibit intelligent behavior.
One reason for this is that DowQuest's behavior implied intelligence. It
appeared that DowQuest could understand English; the fact that DowQuest
dropped words out of the search query and used a weighted keyword search
was never made explicit in the interface. It appeared that DowQuest could
be given examples of what was wanted, and could retrieve articles that were
like those examples; the fact that this was an entirely statistical process
was not made clear to the users. It appeared that DowQuest could order a
list of articles in terms of their relevance; the fact that DowQuest's definition
of relevance was very different from its users' definition was not evident.
Finally, the fact that some users knew that DowQuest ran on a supercomputer
may have contributed to the expectations of intelligence.
Users' expectations of intelligent behavior were usually not met. For example,
one user typed in a question about "Ocean Technologies" (a maker
of optical disk drives) and got back a list of stories about pollution control
technologies (for controlling pollution produced by off-shore oil rigs).
He responded by concluding that the system was no good, and never tried
it again. While such a reaction is perfectly appropriate in the case of
conventional applications--a spreadsheet that adds incorrectly should be
rejected--it prevented the user from proceeding to a point where he could
have benefited from the system's power.
It is interesting that in spite of such disappointments, many users continued to act as if DowQuest were intelligent; in fact, assumptions of intelligence were used to generate reasons for the program's behavior in extended dialogs. For example, one study revealed an interesting problem in the second stage of a DowQuest query (Meier, et al. 1990). Users would ask the system to retrieve more articles 'like that one.' In response, the system would display a new list of articles ordered by relevance. Typically, the list would begin with the article that had been used as the example for relevance feedback. While computer scientists will be unsurprised to find that a document is most relevant to itself, ordinary users lacked this insight. Instead, some users assumed that the only reason for the system to display something they had already seen was that there was nothing else that was relevant. Thus, some users never looked at the rest of the list. This behavior is in accord with Grice's (1975) conversational postulates, where a conversational partner is expected to provide new information if it is possessed; this reasoning fails when one of the 'conversants' is utterly lacking in intelligence.
While DowQuest does not have a state-of-the-art user interface, it is a useful example because it has two properties that will be common in future applications and computing systems. Its users need to interact with it through multi-stage dialogs, and it appears to understand natural language and to possess other capabilities that seem intelligent. As we have seen, both of these characteristics can give rise to problems.
In this section I describe elements of a new interface design for a system with DowQuest-like functionality that illustrate the use of feedback for portrayal and coherence.
There is no single method for using feedback to support coherence. In general, the approach is to make use of completion feedback which persists over the many stages of extended human-computer dialogs. The example that follows shows five stages in a dialog in which someone is retrieving documents; it is based on a prototype system known as Rosebud that uses agents called Reporters to conduct searches of databases distributed across a network (see Erickson and Salomon, 1991, and Kahle, et al., 1992, for more information). Note that the interface described below provided feedback by using color and other subtle graphic effects that are not easily reproducible in black and white figures; where necessary, these effects have been transformed to make them visible (e.g., color to italic text).
The dialog begins with the user entering some initial search terms and
specifying databases for the system to search (this stage of the dialog
is not shown). After the user presses the Search Now button, the dialog
box in figure 5 appears. In the top pane, the system lists the initial set
of documents it has found. These items are all displayed in a special highlight
color (represented here by italic text), that indicates that this is new
information that the user has not previously seen. In the next-to-last
pane, the system retains the search terms previously entered ("Motorola
Lawsuit").
See figure 6. At this stage in the dialog, the user has selected the
second item in the Results List by clicking on it. That item is highlighted,
and a "preview" of its contents is shown in the second pane in
the window. Note that the original search terms are still visible in the
lower part of the window, and the retrieved documents are still shown in
the new information highlight color. Completion feedback which persists
across turns is being used to provide coherence.
See figure 7. The user has asked the system to save the document to his
computer by pressing the "Save" button. The system does so, and
marks the document icon with an "S" as a persistent indicator
that it has been saved.
See figure 8. The user has just clicked on the Add to Search button (telling
the system that the second document is a good example of what is wanted).
At this point, the document icon and title appear in the bottom pane
of the window; the document title and icon are displayed in the new information
highlight color (as indicated by the italic typeface). The goal is to help
the user distinguish between information that was entered previously (and
that has determined the current set of results), and information that applies
to future stages of the dialog (e.g., when the next iteration of the search
is carried out). The Search Now button is also highlighted with this color
because pressing it will make use of the new information.
See Figure 9. The user has pressed the Search Now button, and the system
has carried out a search using the new information. The new results appear
in the top pane. Documents that have not been retrieved before are shown
in the new information highlight color (indicated here by italic text);
documents that had been brought back by previous searches are no longer
highlighted. Similarly, the Search Now button has reverted to its ordinary
color. Highlighting new items shows the user that new items have indeed
been found, and directs the user's attention to the most relevant portion of the
results.
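The bookkeeping behind this highlighting can be sketched briefly in Python; the class and field names below are illustrative assumptions, not the Rosebud implementation. Items retrieved for the first time are flagged as new (the highlight color), saved items keep a persistent "S" marker, and the search terms carry over from stage to stage.

    # A sketch of the state that makes persistent completion feedback possible:
    # what has been retrieved before, what has been saved, and which search
    # terms are in force. Names and structure are illustrative, not Rosebud's.

    from dataclasses import dataclass, field

    @dataclass
    class SearchSession:
        terms: list                               # persists across stages of the dialog
        seen: set = field(default_factory=set)    # titles retrieved in earlier stages
        saved: set = field(default_factory=set)   # titles saved to the user's disk

        def show_results(self, titles):
            print("Search terms:", " ".join(self.terms))
            for title in titles:
                new_flag = "*new*" if title not in self.seen else "     "
                saved_flag = "S" if title in self.saved else " "
                print(f" {saved_flag} {new_flag} {title}")
            self.seen.update(titles)              # next time, these are no longer new

        def save(self, title):
            self.saved.add(title)                 # persistent completion feedback

    if __name__ == "__main__":
        session = SearchSession(terms=["Motorola", "Lawsuit"])
        session.show_results(["Motorola sues over patents", "Chip market update"])
        session.save("Motorola sues over patents")
        print("--- after adding an example and searching again ---")
        session.show_results(["Motorola sues over patents", "Court date set in chip suit"])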
Having looked at ways of using feedback to support coherence over five
stages of an extended dialog, let's turn to the problem of controlling expectations
of intelligence. There are two complementary approaches. First, designers
need to avoid creating unrealistic expectations to the extent possible.
This is difficult because, as systems take on increasingly sophisticated
and complex functionality, the easiest means of explaining the functionality
is through analogy to intelligent behavior. But as we have seen in the case
of DowQuest, unrealistic expectations can lead the user astray.
A more positive approach to the problem is to use background feedback to
portray what the program is actually doing. A storyboard could be used to
reveal the mechanism that underlies information retrieval (see figure 10).
In this case, the storyboard explicitly tells the user that it is dropping
out common words like 'Tell', 'me', 'about' and only using keywords to search;
and it also provides an explanation of why a particular document was retrieved.
Using background feedback in this way does two things: it lessens the chance
that users will assume the system is intelligent, and it gives the user
a chance at understanding why the system did not produce the anticipated
results, and thus provides the option for users to appropriately adjust
their strategies. Because the user and the system really don't have a shared
model of what is happening, it is essential that feedback be used to portray
the system as accurately as possible.
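Here is a brief Python sketch of the kind of storyboard figure 10 suggests; the wording and the stopword list are invented for illustration. The point is that the feedback narrates a mechanical process: which words were ignored, and which keywords caused each document to be retrieved.

    # A sketch of retrieval that portrays itself: the system reports which words
    # it drops and why each document was retrieved, so users see a mechanical
    # process rather than imagined intelligence.

    STOPWORDS = {"tell", "me", "about", "the", "of"}

    def search_with_portrayal(query, documents, report=print):
        words = [w.strip(".,?!").lower() for w in query.split()]
        dropped = [w for w in words if w in STOPWORDS]
        keywords = [w for w in words if w and w not in STOPWORDS]
        report(f"Ignoring common words: {', '.join(dropped)}")
        report(f"Searching for documents containing: {', '.join(keywords)}")

        for doc in documents:
            matched = [k for k in keywords if k in doc.lower()]
            if matched:
                report(f"Retrieved '{doc}' because it mentions: {', '.join(matched)}")

    if __name__ == "__main__":
        search_with_portrayal(
            "Tell me about the eruption of the Alaskan volcano",
            ["Alaskan volcano spews ash", "Broad limits on Interior leasing"],
        )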
In human-human conversations, parties to the conversation establish a
common ground, a shared set of mutually understood terms, concepts, and
referents. As the conversation proceeds, both parties repeatedly refer to
the common ground, thus mutually reminding one another about it, and gradually
extending and refining it. While this works well in human-human conversations,
the verbal establishment and maintenance of common ground is likely to be
beyond the capabilities of computers for quite some time.
In the absence of such intelligence, a valuable course to pursue is to use
feedback to represent the common ground of the human-computer 'conversation.'
In this chapter we've looked at the use of visual feedback to provide coherence
and portrayal in the dialog between human and computer. We looked at the
use of completion feedback to provide coherence over five stages of an extended
dialog. It was used to indicate selected items, the state of retrieved documents
(saved to the user's disk or not), and to distinguish between new and old
information. In general, completion feedback was used to build up a persistent,
explicit model of what had happened. Similarly, background feedback
was used to lower expectations of intelligent behavior by explicitly portraying
the basically mechanical processes of the program. Portrayal is important
because, in the absence of an explicit model, the user may make unwarranted
assumptions about the system's intelligence, and misinterpret the system's
responses. Even as feedback is used to provide the human computer dialog
with coherence closer to that of a human conversation, feedback must also
be used to make it clear that the dialog is being carried out between a
human and a non-intelligent system.
Feedback is a vast topic, and I have touched on only a few of the more important
points. I see two important directions for further research. First, we need
a better understanding of how visual feedback can be used to support human-computer
interaction. One important line of research is the study of design conversations.
A variety of investigators (e.g. Tang, 1989; Minneman, et. al., 1991; Lee,
this volume) are examining conversations among members of design teams;
such conversations occur in parallel with the use of visual and other physical
representations and reveal interesting interactions between conversation
and persistent visual feedback. Understanding the ways in which people use
physical representations to help support design conversations is likely
to yield insights into ways of improving visual feedback in graphic user
interfaces. A second direction for investigation is the use of sound as
feedback. Sound has great potential for enhancing portrayal through both
synchronous feedback (e.g., Gaver, 1989) and background feedback (e.g.,
Gaver, 1991; Cohen, 1993), but has not yet received sufficient attention.
Gitta Salomon was a co-designer of the Rosebud interface described in this chapter. Other people who contributed to the design and subsequent implementation of Rosebud are: Charlie Bedard, David Casseres, Steve Cisler, Ruth Ritter, Eric Roth, Kevin Tiene, and Janet Vratny. My ideas on conversation and feedback have benefited from discussions with Susan Brennan, Jonathan Cohen, Gitta Salomon, and Yin Yin Wong. Jonathan Cohen and an anonymous reviewer provided helpful suggestions on earlier drafts of this chapter.
Apple Computer, Inc. (1992) Macintosh Human Interface Guidelines. Reading,
MA: Addison-Wesley.
Clark, H. H. & Brennan, S. E. (1991) Grounding in Communication. In
L. B. Resnick, J. Levine, & S. D. Teasley (Eds.), Perspectives
on Socially Shared Cognition. Washington, DC: APA.
Cohen, J. (1993) Monitoring Background Activities. First International
Conference on Auditory Display. Santa Fe, NM.
Dow Jones & Company, Inc. (1989) Dow Jones News/Retrieval User's Guide.
Erickson, T. & Salomon, G. (1991) Designing a Desktop Information System:
Observations and Issues. Human Factors in Computing Systems: the Proceedings
of CHI '91. ACM Press.
Gaver, W. W. (1989) The Sonic Finder: An Interface that Uses Auditory Icons.
Journal of Human-Computer Interaction, 4:1. Lawrence Erlbaum Associates.
Gaver, W. W., O'Shea, T. & Smith, R. B. (1991) Effective Sounds in Complex
Systems: The ARKola Simulation. Human Factors in Computing Systems:
the Proceedings of CHI '91. ACM Press.
Grice, H. P. (1975) Logic and Conversation. In P. Cole &
J. L. Morgan (eds.), Syntax and Semantics, Volume 3: Speech Acts. New York:
Seminar Press.
Jacob, R. J. K. Natural Dialogue in Modes Other than Natural Language. This
volume.
Lee, J. Graphics and Natural Language in Design and Instruction. This volume.
Meier, E., Minjarez, F., Page, P., Robertson, M. & Roggenstroh, E. (1990)
Personal communication.
Minneman, S. L. & Bly, S. A. (1991) Managing à Trois: A Study
of a Multi-User Drawing Tool in Distributed Design Work. Human Factors
in Computing Systems: CHI '91 Conference Proceedings, 217-223.
Stanfill, C. and Kahle, B. (1986) Parallel Free-text Search on the Connection
Machine System. Communications of the ACM. 29:12, 1229-1239.
Tang, J. C. (1989) Listing, Drawing, and Gesturing in Design: A Study of
the Use of Shared Workspaces by Design Teams. PhD Dissertation, Stanford
University, 1989. (Also available as a Xerox PARC Technical Report, SSL-89-3,
April 1989.)
de Vet, J. H. M. Feedback Issues In Consumer Appliances. This volume.
Wroblewski, D. A., McCandless, T. P., and Hill, W. C. Advertisements, Proxies,
and Wear: Three Methods for Feedback in Interactive Systems. This volume.
© Copyright 1995 by Thomas Erickson. All Rights Reserved.