Yeah, I hear you: Why aren’t there more
sounds and graphics in our applications?
Workshop/Advanced Topic Summary
UPA Conference, June 7-11, 2004
By Alice Preston and Susan Fowler
The Tip of the Iceberg
This page presents a summary of the work
done by the attendees (the authors and Julie Bzostek, Millicent Cooley, and John Flowers) of a 2004 UPA workshop, “Yeah, I Hear You. Why
Aren’t There More Sounds and Graphics in Our Interfaces?”
The workshop collected information on the use of graphics
and sound in complex, data-intensive, and mission-critical designs. The participants
shared what they knew about the neuropsychology of visualization and auralization;
when multiple media are and are not useful; and the challenges of adding multimedia to an interface.
When Alice Preston and I proposed the workshop, we expected to collect information
about using sound in interfaces and then create a set of guidelines. Instead,
the workshop members convinced us that it was far too early for guidelines, and
that trying to come up with rules at this point would cut off experimentation
and constrain rather than expand our knowledge.
John Flowers added that, in
addition to being far too early for guidelines, "an equally important impediment
to making a concise set of guidelines is that guidelines are task- and application-dependent.
For example, principles for good design of displays for online patient monitoring
may be quite different from those for designing auditory displays for exploratory
data analysis, and principles for designing effective auditory alarms may be
very different from the previous two. While there are some general principles
of auditory perception and attention that may be important across most all tasks
and applications, there will be design considerations unique to specific tasks."
So, instead, we described, half jokingly and half seriously,
the knowledge dissemination “iceberg”:
1. Initially, academics do research and publish their results
in peer-reviewed journals.
2. Other researchers start writing literature reviews, summarizing
and comparing research areas.
3. Professors like Ben Shneiderman begin to publish instructional
digests for graduate students, who then do more research.
4. Articles start to appear in professional journals and magazines,
and authors write trade books on the topic: "How To
Create Sound User Interfaces," "SUI Design Handbook," "SUI…"
5. Finally, "Sound Interfaces for Dummies" appears.
Sound interface design is still only at the top of the iceberg,
no lower than number 2. A possible exception is Susan Weinschenk’s Designing
Effective Speech Interfaces (Feb. 2000), which covers verbal interfaces.
However, our concentration during the workshop was on non-speech sound.
Once we discovered how far away we were from a set of guidelines,
we decided to at least collect and categorize examples of sounds used in software
and other artifacts. We came up with these categories:
Note that these categories overlap, so any particular application
may actually fall into more than one. For example, the BMW car door is both
a tangible product and an example of branding; speech applications may fall
into both assistive technologies and eyes-busy/hands-busy applications; and
the MultiVis project at the University of Glasgow, which makes visualizations of data (particularly mathematical graphs and tables) accessible to those who are blind or have a visual impairment, spans both assistive technology and data visualization.
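The workshop itself stopped short of implementations, but the basic sonification idea behind projects like MultiVis — mapping data values to pitch so a listener can "hear" the shape of a graph — can be sketched in a few lines. The sketch below is a hypothetical toy, not MultiVis code; the 220–880 Hz range, the linear mapping, and the note length are arbitrary choices made for illustration.

```python
import math
import struct
import wave

def sonify(values, path="sonified.wav", rate=8000, note_s=0.25):
    """Toy data sonification: each value becomes a short sine-wave
    note, with higher values mapped linearly to higher pitches
    (220-880 Hz). Writes a mono 16-bit WAV file and returns its path.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    frames = bytearray()
    for v in values:
        freq = 220.0 + 660.0 * (v - lo) / span
        for i in range(int(rate * note_s)):
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(rate)
        f.writeframes(bytes(frames))
    return path

# e.g., a week of daily high temperatures, rising then falling in pitch
sonify([61, 58, 64, 70, 72, 68, 55], "week.wav")
```

Even this crude mapping conveys trend and outliers without any visual display, which is the core claim behind sonified graphs for blind and visually impaired users.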
Tangible, Ambient Intelligent Products
Computer-ish devices will disappear and be replaced by
tangible objects that can be used instead of "old-fashioned" interaction
devices such as remote controls, keyboards, et cetera. An important interaction
modality in these devices will probably be speech, supplemented by non-verbal
sounds as feedback (Dr. Jettie Hoonhout).
Formerly real feedback: Bubble noise of coffeemakers,
cell-phone rings, the "film advancing" sound that was added to many
digital cameras. (Note: On the plane home, someone had his cell phone
ringer set to sound like a "real" phone—everyone turned around to look. SLF.)
Sound as art—for example, Christopher Janney's
REACH in the New York City subway (http://www.janney.com/index.html;
select "Urban Musical Instruments" on the right, then "Projects"
on the left, then "1996, REACH," then "Video").
During the workshop, Millicent Cooley pointed out that people
rarely have a vocabulary with which they can talk about sounds. Customers may
decide against including sounds in a website or product if they cannot communicate
effectively about them.
So that one of her project teams could listen to and make
decisions about sounds, she developed a sound map that she presented to the
workshop. She also demonstrated how sound can change one's perception of visuals.
Edworthy, J., & Stanton, N. (1995). A user-centred approach
to the design and evaluation of auditory warning signals: Methodology. Ergonomics.
Edworthy, J. (1994). The design and implementation of non-verbal
auditory warnings. Applied Ergonomics, 25, 202-210.
Flowers, J. H., Whitwer, L. E., Grafel, D. C., & Kotan,
C. A. (2001). Sonification of daily weather records: Issues of perception, attention
and memory in design choices. Proceedings of the 2001 International Conference
on Auditory Display, 222-226.
Flowers, J. H., & Grafel, D. C. (2002). Perception of
sonified daily weather records. Proceedings of the Human Factors and Ergonomics
Society 46th Annual Meeting, 1579-1583.
Gaver, W. W. (1993). What in the world do we hear? An ecological
approach to auditory source perception. Ecological Psychology, 5(1).
Gaver, W. W. (1997). Auditory interfaces. In Helander,
M. G., Landauer, T. K., & Prabhu, P. (Eds.), Handbook of Human-Computer
Interaction (2nd ed.). Amsterdam: Elsevier Science.
Kramer, G. (Ed.). (1994). Auditory Display: Sonification,
Audification, and Auditory Interfaces. Santa Fe Institute Studies in the
Sciences of Complexity, Proc. Vol. XVIII. Reading, MA: Addison-Wesley.