Multimodal Multitasking

A project at the University of Oregon Cognitive Modeling and Eye Tracking Lab that aims to explain and predict user performance in complex auditory and visual multitask environments by means of computational modeling and empirical human data collection, including eye tracking.

A simulated screenshot from the task environment

The above diagram shows a simulated screenshot from the task environment, with a user's eye looking at blips on the left side of the screen, and spatialized sound coming from stereo speakers to help guide the user's gaze to the blips that need to be classified.

Watch a QuickTime or MP4 video with 3D audio and the user's gaze superimposed as a small red dot.
A photo of the task environment for collecting human data.

The above photo shows the task environment for collecting human data, including a computer display with an attached eye tracker, a chinrest to help with eye tracking, headphones for playing 3D audio, and a keypad and joystick for user input. On the bottom right, there is also a small action figure with a cardboard cutout to remind the user of the mapping between the visual and auditory space that is used to identify the location of blips on the screen that need to be classified.

You can watch a QuickTime or MP4 video of the task being executed. The video includes the 3D audio that helps to cue the participant as to which blip needs to be classified next. The user's task is to classify the blips as "hostile" or "neutral" based on their color, shape, speed, and direction. The video shows the user's gaze superimposed as a small red dot. Note that the display is "gaze-contingent" in that it only shows the contents of the window where the user is looking. This enables the experiment to specifically study how people can use 3D audio to keep track of what is happening in a secondary visual display that is just out of range of a primary visual display, as would be useful for complex multi-display tasks such as air traffic control.
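
As a rough illustration of the visual-to-auditory mapping described above, the following sketch shows one plausible linear transformation from a blip's 2D screen coordinates to an azimuth and elevation for a spatialized sound cue. The screen dimensions, angular ranges, and function names are hypothetical, not the values used in the lab's software; the actual transformation is the subject of the Hornof, Halverson, Isaacson, and Brown (2008) paper listed under Publications.

# A plausible (hypothetical) linear mapping from a blip's screen position to a
# spatialized sound direction. Screen size and angular ranges are assumptions.
SCREEN_W, SCREEN_H = 1280, 1024          # display resolution in pixels
AZIMUTH_RANGE = (-60.0, 60.0)            # degrees left/right of straight ahead
ELEVATION_RANGE = (-30.0, 30.0)          # degrees below/above ear level

def blip_to_auditory_cue(x, y):
    """Map a blip's 2D screen position (pixels, origin at top left)
    to an (azimuth, elevation) pair for a 3D audio cue."""
    azimuth = AZIMUTH_RANGE[0] + (x / SCREEN_W) * (AZIMUTH_RANGE[1] - AZIMUTH_RANGE[0])
    # Flip y so that blips near the top of the screen map to higher elevations.
    elevation = ELEVATION_RANGE[0] + (1 - y / SCREEN_H) * (ELEVATION_RANGE[1] - ELEVATION_RANGE[0])
    return azimuth, elevation

# Example: a blip near the upper-right corner of the display.
print(blip_to_auditory_cue(1100, 150))   # approximately (43.1, 21.2) degrees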

Project Personnel

Anthony Hornof (Principal Investigator), Yunfeng Zhang, Tim Halverson, Erik Brown, Andy Isaacson, Kyle Vessey.

Scientific and Technical Objectives

The scientific and technical objectives of the project are to advance an understanding of (a) human performance when engaged in multitasking behavior that requires interaction with complex multimodal auditory and visual displays and (b) how to simulate and ultimately predict human performance in such situations by means of computational cognitive modeling, in order to inform the design of multimodal watchstations. Computational cognitive models are computer programs that behave in some way like humans. Scientific objectives include developing these computational cognitive models and advancing a framework for the future development of models that simulate parallel visual-perceptual and auditory-perceptual processing in multitasking situations. Technical objectives include the redesign of multimodal dual-task software for experimental data collection to introduce eye tracking for both (a) data collection and analysis and (b) gaze-contingent stimulus modifications, such as cueing an operator to look at a blip on a radar screen a fixed amount of time after he or she previously looked at it. Technical objectives also include the computer programming involved in this instrumentation and in the modeling itself.
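
The gaze-contingent cueing mentioned above (cueing an operator to look at a blip a fixed amount of time after he or she previously looked at it) can be sketched as a simple scheduler that tracks the time of the most recent fixation on each blip. The class name, method names, and the 4-second interval below are hypothetical, intended only to illustrate the idea rather than the project's actual implementation.

import time

REVISIT_INTERVAL = 4.0   # hypothetical: seconds after the last fixation before a cue fires

class CueScheduler:
    def __init__(self):
        self.last_fixation = {}          # blip id -> time of most recent fixation

    def record_fixation(self, blip_id):
        """Called by the eye-tracking pipeline when the gaze lands on a blip."""
        self.last_fixation[blip_id] = time.monotonic()

    def blips_to_cue(self):
        """Return the blips whose last fixation was more than REVISIT_INTERVAL ago."""
        now = time.monotonic()
        return [b for b, t in self.last_fixation.items() if now - t > REVISIT_INTERVAL]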

Approach

The approach includes:
(a) Focus primarily on one specific dual task and corresponding complex display, which together provide a solid foundation for exploring a wide range of auditory-visual perceptual integration issues.
(b) Conduct exploratory cognitive modeling to explain data that has already been collected for that task.
(c) Carefully redesign and re-implement the software used to collect human data in a tightly controlled multimodal multitasking environment, including both eye tracking and spatialized audio.
(d) Conduct a new empirical study with eye tracking to evaluate the predictions of competing models of the task that were built before eye movement data were available.
(e) Refine the models and the modeling framework based on the eye movement data.
(f) Use the models to guide the design of new, complex, improved auditory-visual displays.
(g) Build a prototype display using the model as a design assistant and initial test user.

The approach deviates from the original proposal simply with an increased emphasis on the redesign of the software used to collect the human data, and thus the addition of step (c), above.

Videos of the Task

These videos illustrate the three independent variables: wave size, gaze contingency (on or off), and sound cueing (on or off). The videos show the user's gaze superimposed as a small red dot. Note that the videos are quite large (1024x1280 pixels). Headphones are recommended for hearing the 3D audio.
Medium difficulty: Medium wave (6 blips), gaze contingency off, sound cueing off. QuickTime or MP4
Difficult: Larger wave (8 blips), gaze contingency on, sound cueing on. QuickTime or MP4
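
For reference, crossing the three independent variables gives the set of conditions from which the two videos above are drawn. A minimal sketch, assuming the only wave sizes are the two shown in the videos (the full experiment may have used other levels):

from itertools import product

# Enumerate the crossing of the three independent variables named above.
wave_sizes = [6, 8]                      # blips per wave (only the sizes shown in the videos)
gaze_contingency = [False, True]
sound_cueing = [False, True]

for wave, gaze, sound in product(wave_sizes, gaze_contingency, sound_cueing):
    print(f"wave={wave} blips, "
          f"gaze contingency {'on' if gaze else 'off'}, "
          f"sound cueing {'on' if sound else 'off'}")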

Publications


Hornof, A. J., & Zhang, Y. (2010, to appear). Task-constrained interleaving of perceptual and motor processes in a time-critical dual task as revealed through eye tracking. Proceedings of ICCM 2010: International Conference on Cognitive Modeling, 6 pages.


Hornof, A. J., Zhang, Y., & Halverson, T. (2010). Knowing where and when to look in a time-critical multimodal dual task. Proceedings of ACM CHI 2010: Conference on Human Factors in Computing Systems, New York: ACM, 2103-2112. Honorable mention paper (top 5% of all papers submitted). Video that accompanied paper available in QuickTime or MP4.


Hornof, A. J., Halverson, T., Isaacson, A., & Brown, E. (2008). Transforming object locations on a 2D visual display into cued locations in 3D auditory space. Proceedings of the 52nd Annual Meeting of the Human Factors and Ergonomics Society, 1170-1174.

Funding

National Science Foundation (NSF) IIS-0713688. $499,591 for 7/1/10 to 6/30/13. Principal Investigator (PI): Anthony Hornof. Title: HCC: Small: A Computational Theory of Perceptual Integration in Multimodal Multitasking.

Office of Naval Research (ONR) Award No. N00014-06-10054, entitled Computational Modeling and Eye Tracking of Multitasking Performance with Multimodal Auditory and Visual Displays. $439,787 for 10/1/05 to 9/30/08. Principal Investigator: Anthony Hornof. Any opinions, findings, and conclusions or recommendations expressed in these materials are those of the author(s) and do not necessarily reflect the views of ONR.

Last updated by ajh on 6/24/10