
Cognitive Modeling, Eye Tracking,
and Human-Computer Interaction

An eye tracker connected to a simulated human information processor
LC Technologies Eyegaze System (left);
Card, Moran, and Newell's Model Human Processor (center);
and Kieras and Meyer's EPIC architecture (right)


This page describes the research underway in the Cognitive Modeling and Eye Tracking Lab, which is in the Department of Computer and Information Science at the University of Oregon. The lab is directed by Dr. Anthony Hornof and is located in 335 Deschutes Hall, phone (541)346-1372. The lab is funded by the National Science Foundation and the Office of Naval Research, though the opinions expressed here do not necessarily reflect the views of these funding agencies.

Selected Lab Projects

VizFix--a tool for visualizing data from eye tracking experiments, with Tim Halverson.

Visual search of text in mixed densities and colors, explored by Tim Halverson.

Running a participant in an eye tracking experiment--a photo-essay by Ishwinder Kaur.

The effect of semantics on visual search, investigated by Ishwinder Kaur.

Research Overview

This lab explores how cognitive psychology, the measurement of eye movements, and computer programming can be integrated to build and refine psychological theory, predict aspects of human performance, and contribute to the design and analysis of useful and usable computer systems. Cognitive models are computer programs that behave in the way that humans behave. Cognitive modeling can inform the design of useful and usable human-computer interfaces by (a) providing accurate post-hoc explanations of how people accomplish tasks on a computer and (b) providing a foundation for building accurate a priori predictive models.

A cognitive architecture is a set of reusable computer functions, methods, and data structures that accurately represent the fundamental capabilities and limitations of human performance. The goal is for the architecture to characterize the invariant aspects of human information processing--the parts that are the same for most people--and for models to be built within the framework that the architecture provides. For example, the architecture would represent that the eyes can move on command, and that visual details can only be perceived at the center of the gaze; a model might then be constructed that moves the eyes to a clock in order to get the current time. Important cognitive architectures currently include ACT-R/PM (Atomic Components of Thought with Rational Analysis and Perceptual/Motor enhancements), EPIC (Executive Process-Interactive Control), Soar, and EPIC-Soar (the Soar architecture with enhancements from EPIC).
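The division of labor between architecture and model can be illustrated with a small code sketch. In this hypothetical Python example (all class names and parameter values are illustrative, not drawn from any real architecture), the architecture supplies the fixed perceptual-motor constraints--eyes that move on command, detail visible only near the center of gaze--and the model is a task strategy written on top of those constraints:

```python
import math

class Architecture:
    """Hypothetical fixed human constraints: the same for every model."""
    FOVEA_RADIUS = 1.0  # degrees of visual angle; illustrative value

    def __init__(self):
        self.gaze = (0.0, 0.0)

    def move_eyes(self, target):
        """The eyes can move on command."""
        self.gaze = target

    def perceive(self, obj_pos, detail):
        """Visual detail is available only near the center of gaze."""
        distance = math.dist(self.gaze, obj_pos)
        return detail if distance <= self.FOVEA_RADIUS else None

# The model: a task-specific strategy built within the architecture.
def read_clock(arch, clock_pos, clock_face):
    arch.move_eyes(clock_pos)                    # strategy: look at the clock...
    return arch.perceive(clock_pos, clock_face)  # ...then read the time

arch = Architecture()
print(read_clock(arch, (10.0, 5.0), "10:45"))   # prints 10:45
```

Swapping in a different model (say, one that searches a menu) reuses the same architecture unchanged, which is the point of the separation.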

Eye tracking--the measurement and analysis of eye movement data--can inform cognitive modeling. In particular, gaze fixation locations and durations provide new dependent variables that, in addition to the traditional measures of performance (speed and accuracy), can be used to validate and refine the accuracy of cognitive models. Tracking a person's eye movements is becoming an increasingly popular, effective, and informative way to understand the perceptual, cognitive, memory, and motor processing that people use when they accomplish a computer task. Eye movement research can be used to answer questions such as "If flashing banner ads slow people down, is that because people look at them or because they are distracting even when you don't look at them?" or "What is the visual search strategy that someone would use to find the link to tax forms and publications on the IRS web page?" One goal of this research is to add automated analysis tools to web design software such as Dreamweaver that would answer such questions for the designers who use this software to build web pages.
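The fixation locations and durations mentioned above are not recorded directly; they are derived from a stream of raw gaze samples. A minimal sketch of one common approach (dispersion-based fixation detection, in the style of I-DT) is shown below; the input format and both thresholds are illustrative assumptions, not the lab's actual analysis pipeline:

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Group raw gaze samples (t, x, y) into fixations using a simple
    dispersion threshold. Assumed units: pixels for dispersion,
    seconds for duration; both thresholds are illustrative."""
    def summarize(window):
        # Mean position and total duration of one fixation.
        return (sum(x for _, x, _ in window) / len(window),
                sum(y for _, _, y in window) / len(window),
                window[-1][0] - window[0][0])

    fixations = []
    window = []
    for sample in samples:
        window.append(sample)
        xs = [x for _, x, _ in window]
        ys = [y for _, _, y in window]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion:
            # The new sample broke the window: close out the fixation.
            done = window[:-1]
            if done and done[-1][0] - done[0][0] >= min_duration:
                fixations.append(summarize(done))
            window = [sample]
    if window and window[-1][0] - window[0][0] >= min_duration:
        fixations.append(summarize(window))
    return fixations
```

Each returned triple (mean x, mean y, duration) is exactly the kind of dependent variable--where people looked, and for how long--that can be compared against a model's simulated fixations.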

The IRS home page    The IRS home page broken up into visual fields
On the left, the main web page for the IRS (www.irs.gov on 7/15/00). How long will it take someone to find the link to tax forms? How will they do it? On the right, the visual regions and objects on the web page. This is the spatial information that would be used by a screen layout analysis tool to predict visual search times. The gray circle at the top left represents how the tool would simulate the foveal region as it starts a maximally efficient foveal sweep of the web page.
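One way such a layout analysis tool could turn the extracted regions and objects into a time prediction is to simulate the foveal sweep directly: cover all object positions with as few fixations as possible, then charge a fixed cost per fixation. The Python sketch below is a hypothetical simplification of that idea; the greedy covering strategy, the foveal radius, and the per-fixation time are all illustrative assumptions, not the actual tool:

```python
import math

def predict_search_time(objects, fovea_radius=50.0, ms_per_fixation=300.0):
    """Rough prediction of visual search time for a screen layout:
    greedily cover all object positions (x, y) with simulated foveal
    fixations, then charge a fixed cost per fixation. The radius
    (pixels) and per-fixation time (ms) are illustrative values."""
    remaining = list(objects)
    fixations = 0
    while remaining:
        anchor = remaining[0]  # fixate the next uncovered object
        fixations += 1
        # Everything within the simulated fovea is seen on this fixation.
        remaining = [obj for obj in remaining
                     if math.dist(anchor, obj) > fovea_radius]
    return fixations * ms_per_fixation
```

For example, three link positions where two fall within one foveal region require two fixations, so the sketch predicts 2 x 300 = 600 ms. A real predictive model would also need a search strategy and timing parameters validated against eye tracking data, which is precisely what this lab's experiments provide.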

Lab Members

smiling people inside
Lab members in 2009: (from left to right) Yunfeng Zhang, Tim Halverson, Kyle Vessey, and Anthony Hornof.

smiling people outside
Lab members in 2007: (from left to right) Tim Halverson, Andy Isaacson,
Erik Brown, Anthony Hornof, and Rachel Nehmer (not shown).

smiling people in the lab
Lab members in 2003: (standing, from left to right) Linda Sato, Anna Cavender, Rob Hoselton,
Ishwinder Kaur, Tim Halverson; (seated) Courtney Stevens, Anthony Hornof.

Selected Publications

Lab Resources

Lab Manifesto (PDF download)
Letters of Recommendation (PDF download)
EPIC Tutorial Materials