Towards a Comprehensive Computational Theory of Human Multitasking: Advancing Cognitive Modeling with Detailed Analyses of Eye Movement Data and Large-Scale Exploration of Task Strategies
Yunfeng Zhang
Committee: Anthony Hornof (chair), Allen Malony, Michal Young, David Kieras, Ulrich Mayr
Dissertation Defense (June 2015)
Keywords: multitasking, cognitive modeling, eye tracking

Designing human-computer systems for time-critical multitasking can benefit from an understanding of the human factors that support or limit multitasking performance, and from a detailed account of the human-machine interactions that unfold in a given task environment. An integrated computational cognitive model can test and provide such an understanding of the human factors related to multitasking, and can reveal the dynamic interactions that occur in the task at a timescale of hundreds of milliseconds. This dissertation provides such a detailed computational model of human multitasking, built for a time-critical, multimodal dual-task experiment and validated against the eye tracking data collected in that experiment. The dissertation also develops new approaches to cognitive modeling that enable efficient, systematic exploration of multitasking strategies as well as principled model comparisons.

The dual-task experiment captures many key aspects of real-world multitasking scenarios such as driving. In the experiment, participants interleaved two tasks: one required tracking a constantly moving target with a joystick, and the other required keying in responses to objects moving across a radar display.

The experiment manipulated peripheral visibility and auditory conditions to assess the influence of peripheral visual information and auditory information on multitasking performance. Detailed eye tracking data were collected, and this dissertation presents a thorough analysis of these data, which provides the basis for model development and validation.

The cognitive model presented in this dissertation, built with the EPIC (Executive Process-Interactive Control) cognitive architecture, accurately accounted for the eye movement data and other behavioral data of each participant through systematic exploration of task strategies and parameters configured for each individual. A parallelized cognitive modeling system was developed to accommodate the greatly increased computational demands of strategy exploration and individualized model building. New model comparison techniques were proposed to determine which strategy best accounts for the empirical data. Payoff analyses revealed people's tendency to locally optimize task performance based on the task payoff as well as instantaneous feedback. The results point to new approaches for building a priori models that predict multitasking performance.
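To make the general approach concrete, the following is a minimal Python sketch of the idea of parallelized strategy exploration and fit-based model comparison: a grid of candidate strategy parameters is evaluated in parallel, and the strategy whose predictions best match a participant's observed data (here, by normalized root-mean-square error) is selected. The strategy parameters, the stand-in simulator, and the observed values are hypothetical illustrations, not the dissertation's actual strategies, measures, or modeling system.

# A minimal sketch of parallelized strategy exploration and fit-based model
# comparison. All names and values below are hypothetical placeholders; a
# real system would invoke the cognitive architecture instead of the toy
# simulate() function used here.
from itertools import product
from multiprocessing import Pool
from statistics import mean

# Hypothetical observed summary statistics for one participant
# (mean radar dwell time in ms, mean tracking error in pixels).
OBSERVED = {"radar_dwell_ms": 420.0, "tracking_error_px": 28.0}

# Candidate strategy space: how long to stay on the radar task before
# switching back, and how far the tracking cursor may drift before an
# interruption is forced.
DWELL_LIMITS_MS = [250, 350, 450, 550]
DRIFT_THRESHOLDS_PX = [20, 30, 40, 50]

def simulate(strategy):
    """Toy stand-in for simulating one strategy; returns predicted
    summary statistics."""
    dwell_limit, drift_threshold = strategy
    predicted_dwell = 0.9 * dwell_limit + 30       # placeholder relation
    predicted_error = 0.5 * drift_threshold + 10   # placeholder relation
    return {"radar_dwell_ms": predicted_dwell,
            "tracking_error_px": predicted_error}

def score(strategy):
    """Normalized root-mean-square error between predicted and
    observed statistics."""
    predicted = simulate(strategy)
    sq_errs = [((predicted[k] - OBSERVED[k]) / OBSERVED[k]) ** 2
               for k in OBSERVED]
    return (strategy, mean(sq_errs) ** 0.5)

if __name__ == "__main__":
    strategies = list(product(DWELL_LIMITS_MS, DRIFT_THRESHOLDS_PX))
    with Pool() as pool:                 # evaluate candidate strategies in parallel
        results = pool.map(score, strategies)
    best_strategy, best_rmse = min(results, key=lambda r: r[1])
    print(f"Best-fitting strategy {best_strategy} (normalized RMSE {best_rmse:.3f})")

In this toy setup, the strategy grid simply stands in for the much larger space of plausible task-interleaving strategies, and the goodness-of-fit score stands in for the dissertation's model comparison techniques.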