NRL Dual Task Software
Software Requirement Specification

Anthony Hornof, Tim Halverson, Erik Brown

March 04, 2007

Background

This Software Requirements Specification (SRS) describes the computer program currently being designed and built at the University of Oregon Cognitive Modeling and Eye Tracking Lab, with funding from the Office of Naval Research. The software will be used to collect human performance data in a multimodal, multitasking, cognitive psychology experiment. The experimental design and software build directly, down to the source code level, on work previously conducted at the Naval Research Laboratory (NRL). This prior work is discussed in numerous publications, including Brock, Ballas, Stroup, and McClimens (2004) and Kieras, Ballas, and Meyer (2001).

Experiment Description

This experiment will require a user to simultaneously manage two tasks, which are displayed side by side on a computer screen, each within its own window. One task is called the tracking task. In this task, the user follows a target, an icon representing a military aircraft, with a joystick that controls a reticle (crosshairs). The target will move somewhat randomly over the task display window, and the user will attempt to keep the reticle over the target as accurately as possible.

The second task is called the tactical task. This task presents the user with a simulated radar that tracks three different types of aircraft as icons moving from the top of the task display window towards the bottom. The user must identify each aircraft as hostile or neutral under one of two conditions. One, the system may automatically inform the user that an aircraft is hostile or neutral by coloring a target icon (blip) red or blue, respectively, in which case the user must simply confirm this target. Two, the system may instead color a blip amber, in which case the user must classify the target as hostile or neutral based on the target aircraft type and other attributes. The user will spend the majority of time in the tracking task. Occasionally, the tactical task will require attention and force the user to momentarily abandon the tracking task as target planes need to be either confirmed or classified.

The tracking task will move the target aircraft in a somewhat random manner over the task display window. This movement is controlled by the software and hardwired into the code. The tactical task, however, is controlled by a scenario file. A scenario is one run of the experiment and typically lasts from about five minutes to fifteen minutes or longer. The scenario file contains the information necessary for the software to place target icons into position on the tactical task window and to give each target a speed, heading, and aircraft type. In addition, the scenario file determines whether each target will need to be confirmed or classified by the user. The target blips will initially appear black, and will change to either red or blue (to be confirmed by the user) or to amber (to be classified by the user). Once a target has been either confirmed or classified, the blip will turn white. (From here forward, the term classify will be used to mean either confirm or classify; the meaning will be clear from context.) All blips will eventually be removed from the task window as they reach a point near the bottom of the radar or as determined in the scenario file.
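
The scenario file format has not yet been finalized. As a rough illustration only, the following Python sketch shows how a simple whitespace-delimited scenario file might be parsed into target events; the field names, file layout, and function names are assumptions for illustration, not the actual NRL format.

    from dataclasses import dataclass

    @dataclass
    class TargetEvent:
        """One tactical-task target event read from a scenario file (illustrative fields only)."""
        time: float        # seconds from scenario start at which the blip appears
        blip_id: int       # 1-9, displayed as part of the blip
        x: float           # initial position in task-window coordinates
        y: float
        heading: float     # degrees
        speed: float       # window units per second
        aircraft_type: str
        action: str        # "confirm" (blip turns red/blue) or "classify" (blip turns amber)

    def load_scenario(path):
        """Parse a whitespace-delimited scenario file, skipping blank lines and '#' comments."""
        events = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                t, bid, x, y, hdg, spd, actype, action = line.split()
                events.append(TargetEvent(float(t), int(bid), float(x), float(y),
                                          float(hdg), float(spd), actype, action))
        return sorted(events, key=lambda e: e.time)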

The user will classify targets with a keypad. The only available keys will be the numbers 1-9 and the <Enter> and '+' keys. When a target needs to be classified, the user will first press the target's number (1-9), which is visible as part of the target blip, and will then press the key for the target's classification type, either Hostile (<Enter>) or Neutral ('+') (see FR 11.1). The user will not need to press a <Return> key at any point. The tactical display will visually show the keys that are pressed.
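
As a small illustration of the keypad handling (not the final implementation), the sketch below maps raw keypad characters to the semantic responses described above. The dictionary and function names are hypothetical, and "\r" merely stands in for the <Enter> key; actual key codes will depend on the input toolkit used.

    # Map raw keypad characters to semantic tactical-task responses.
    # "\r" stands in for <Enter>; the real key codes depend on the toolkit used.
    KEYPAD_MAP = {
        "\r": ("CLASSIFY", "hostile"),   # <Enter> = Hostile
        "+":  ("CLASSIFY", "neutral"),   # '+'     = Neutral
    }
    KEYPAD_MAP.update({str(d): ("TARGET", d) for d in range(1, 10)})  # blip numbers 1-9

    def decode_keypress(char):
        """Return a (kind, value) tuple for an allowed key, or None for any other key."""
        return KEYPAD_MAP.get(char)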

Eye tracking

An eye tracker (available from LC Technologies, Inc.) can be used to control the visibility of the task display windows. Based on the location of the user's gaze, the task window that is not currently being looked at will be hidden so that the user cannot gain any knowledge of that task with peripheral vision. For testing the software without an eye tracker, a manual method, such as a keystroke, will be available to switch which display is visible. A scenario file option will also allow both displays to be visible at all times, or possibly place each task window on a separate monitor if that setup is available.
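
A minimal sketch of the gaze-contingent switching logic follows, assuming each task window exposes a region object with a contains(x, y) test that already includes the one-degree margin described in FR 12.1.2; the class and attribute names are illustrative. Requiring two consecutive samples in the other window before switching follows FR 12.1.3.

    class GazeContingentSwitcher:
        """Decide which task window should be visible based on successive gaze samples.

        A window remains "looked at" until two consecutive eye tracker samples
        fall inside the other window's region (FR 12.1.3).
        """

        def __init__(self, windows):
            self.windows = windows       # e.g. {"tracking": region, "tactical": region}
            self.current = "tracking"    # window currently visible
            self.other_hits = 0          # consecutive samples inside the other window

        def update(self, gaze_x, gaze_y):
            other = "tactical" if self.current == "tracking" else "tracking"
            if self.windows[other].contains(gaze_x, gaze_y):
                self.other_hits += 1
            else:
                self.other_hits = 0
            if self.other_hits >= 2:     # switch only after two consecutive samples
                self.current, self.other_hits = other, 0
            return self.current          # the caller shows this window and hides the other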

Spatialized audio

A spatialized audio system (available from AuSIM, Inc.) can be used to add 3D sound to either of the tasks. Sound events can be placed into the same scenario file that is used to control the tactical task. A sound event can be the playback of any sound that has been built into the system; the scenario file will specify when to play the sound, where to place it in the user's 3D space, and, if appropriate, how long to play or loop it.
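
For illustration, a sound event as described here (and in FR 13.1) might be represented as follows; the field names and the choice of an azimuth/elevation/distance coordinate convention are assumptions, not the actual NRL representation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SoundEvent:
        """One spatialized-audio event read from a scenario file (illustrative fields only)."""
        time: float                        # seconds from scenario start
        azimuth: float                     # degrees, relative to the listener
        elevation: float                   # degrees
        distance: float                    # meters
        sound_file: str                    # a sound built into the system
        duration: Optional[float] = None   # seconds to play or loop, if appropriate
        intensity: Optional[float] = None  # relative level, if specified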

User Scenario

Initial training and practice for the tactical task will proceed as follows:

When the participant is able to classify targets within acceptable criteria for speed and accuracy, the data-collection scenarios can be run. These scenarios will typically last for 5-15 minutes. Most aspects of a scenario will be controlled by a scenario file, which will be available for the experimenter to call up as the tests proceed.

The user will be offered a monetary incentive to quickly classify targets in the tactical task and to accurately track the target icon in the tracking task.

Detailed Description of Requirements

Functional Requirements

  1. [FR 1]Participant identification
    1. [FR 1.1]The system shall use a unique ID to anonymously identify a user.
    2. [FR 1.2]The system shall record the user ID, date and time of experiment, scenario files used, and calibration accuracies.
    3. [FR 1.3]The user ID must be associated with all data collected for that user.
    4. [FR 1.4]It must be possible to specify the participant ID at the start of a set of trials.
  2. [FR 2]Experimenter control and monitoring
    1. [FR 2.1]Ease of use
      1. [FR 2.1.1]The experimenter should be able to skip a scenario without using the mouse.
      2. [FR 2.1.2]A straightforward keyboard command (such as <ESC> and/or Command-period) will end the entire experiment.
    2. [FR 2.2]The number of steps required to start the system, load configuration and scenario files, and begin the experiment should be kept to a minimum. That is, fewer than ten keystrokes and/or mouse clicks should be required.
    3. [FR 2.3]Configuration files
      1. [FR 2.3.1]The system shall use configuration files that will contain the settings for the tracking and tactical tasks, whether to use one or two monitors, and whether to use eye tracking or spatialized audio.
      2. [FR 2.3.2]The system should not have to be restarted between experiments in order to change any mode.
      3. [FR 2.3.3]The experimenter should be able to load all scenario data for all scenarios a participant will encounter with a single open dialog box.
      4. [FR 2.3.4]Prototyping and demos
        1. [FR 2.3.4.1]The input file events for specifying sounds should include default sounds that will be played if the spatialized audio system is not connected. These may be in stereo to approximate the left/right component of spatialized audio.
        2. [FR 2.3.4.2]It should be possible to run the experiment without an eye tracker. There will be two non-eye-tracking modes: (a) Both task displays are visible at all times. (b) One task or the other is visible, and a button press, such as the space bar or joystick button, will simulate the gaze contingent switching of displays.
  3. [FR 4]Pre-task display
    1. [FR 4.1]The system shall display the application in full screen mode, not as a windowed application.
    2. [FR 4.2]The system shall display the menu bar up until the experimental task begins.
    3. [FR 4.3]The software should adapt to all screen resolutions at or above 1280x1024, which will be the standard screen resolution for this application. If run on a smaller display, a dialog box will inform the user of the required screen size and that system behavior on smaller displays is unknown.
    4. [FR 4.4]Some instructions or task indication should be displayed before each task (e.g. tactical classifying rules and icons before the tactical practice task, or simply that the tactical-only task is about to start).
    5. [FR 4.5]Just before the experimental task starts, some indication of the task displays should be visible, such as the borders for each of the task frames.
  4. [FR 5]Data collection
    1. [FR 5.1]All data output should be time-stamped so that the analyst can know when each event occurred with respect to the start of the scenario.
    2. [FR 5.2]Participant actions
      1. [FR 5.2.1]The system shall record all participant actions, including all joystick, keypad, and eye movement data.
      2. [FR 5.2.2]The system shall record correct and erroneous responses, including the correct response, the user's actual response, and the timing (and location, if appropriate).
    3. [FR 5.3]Times of Interest
      1. [FR 5.3.1]The system shall record data, including tracking error and duration, that occurs while a blip is in preclassifiable mode and classifiable mode.
  5. [FR 6]User feedback and incentive
    1. [FR 6.1]Payoff matrix
      1. [FR 6.1.1]Payoff integration of the two tasks
        1. [FR 6.1.1.1]Payoff matrices for both the tracking and tactical tasks will be determined for each task separately and combined for an aggregate payoff.
    2. [FR 6.2]Performance feedback for every response
      1. [FR 6.2.1]The system shall provide immediate feedback, auditory or visual or both, for all actions.
      2. Note: Details for tactical and tracking task feedback are discussed within their respective sub-task sections in this document.
  6. [FR 7]Practice sessions
    1. [FR 7.1]Single task
      1. [FR 7.1.1]The system shall provide the means to run either task (tactical or tracking) separately for practice sessions.
      2. [FR 7.1.2]Practice session data will be recorded.
      3. [FR 7.1.3]It should be possible, when running the software in single-task mode, to turn on the gaze-contingent mode and record if and when stray gazes caused the task display to hide itself.
    2. [FR 7.2]Dual task
  7. [FR 8]Task windows
    1. [FR 8.1]The task windows shall have a light gray background that conforms to MIL STD watchstation display standards.
    2. [FR 8.2]The mouse cursor will be removed from the screen during the experiment.
  8. [FR 9]Tactical task
    1. [FR 9.1]Icons
      1. [FR 9.1.1]The system will permit the use of standard military icons for all blip characteristics including shape, color, and others.
      2. [FR 9.1.2]Blip characteristics will clearly indicate a blip's current status, such as hostile, neutral, or unknown.
      3. [FR 9.1.3]The system will support the addition of vectors to blips to indicate direction and speed.
      4. [FR 9.1.4]Blips will be numbered from 1 to 9. Blips with letters may be used as distractors.
    2. [FR 9.2]Movement
      1. [FR 9.2.1]Ensured spacing for eye tracking
        1. [FR 9.2.1.1]To facilitate eye tracking, the icons in the tactical task should never come within 1 degree of visual angle of each other, measured edge to edge. [Note: The maximum average bias error of the eye tracker is 0.7 degrees.]
        2. [FR 9.2.1.2]Alternatively, the system shall provide a means to notify the experimenter when two or more blips come within one degree of visual angle for a given scenario file. This will be used during scenario file development.
    3. [FR 9.3]System responses to user input
      1. [FR 9.3.1]The system can optionally respond to every user input with visual cues, auditory cues, or both, such as the "Neutral 1" text that appears in the current system.
      2. [FR 9.3.2]The user will not be permitted to correct an incorrect response.
    4. [FR 9.4]Feedback and incentive
      1. [FR 9.4.1]If a blip is classified early, a penalty will be imposed, recorded, and immediately reported to the user.
      2. [FR 9.4.2]Users will receive a financial reward based on their speed and accuracy. The specific payoff matrix will be determined iteratively.
    5. [FR 9.5]Auditory display
      1. [FR 9.5.1]When a blip changes color from black (to amber, red, or blue), sounds corresponding to any subset of the blip's characteristics may be sent to the auditory display.
    6. [FR 9.6]Configuration file control
      1. [FR 9.6.1]Possibly permit the tactical task display to be completely reset at a given point in the scenario while the tactical display is hidden. This would permit the experiment to effectively erase the user's situational awareness of the tactical task.
  9. [FR 10]Tracking task
    1. [FR 10.1]Icons
      1. [FR 10.1.1]The graphics used for the various visual indicators, such as the target plane position and the reticle (the crosshairs in their various states), must be easy for the experimenter to modify, such as with a configuration file. The motivation is to let the experimenter iterate through possible settings during software development. Flipping the reticle between black and yellow, for example, may be a bit jarring, and peripheral salience may not be necessary given that the indicator will not be peripherally visible when the user is not looking at the task window.
      2. [FR 10.1.2]The system shall use Symbicons for all targets used in the scenarios.
    2. [FR 10.2]Movement
      1. [FR 10.2.1]The system shall provide both an "easy" and "difficult" mode for the tracking task, and optionally other levels in between.
    3. [FR 10.3]System responses to user input
      1. [FR 10.3.1]The system shall provide visual or auditory feedback, or both, regarding accuracy during the tracking task.
    4. [FR 10.4]Feedback and incentive
      1. [FR 10.4.1]In the tracking task, it must be clear to the user where the reticle must be placed on the target plane for the most accurate performance, and whether keeping the reticle in its "on target" visual state (i.e. black) is good enough for a 97% payoff.
  10. [FR 11]User input
    1. [FR 11.1]Keypad entry
      1. [FR 11.1.1]Simple variable settings in the source code will permit straightforward remapping of the response keys, such as to rearrange the numeric keypad from a computer keypad configuration to a telephone keypad configuration.
      2. [FR 11.1.2]The blip number should be typed in before the "hostile" or "neutral" classification. This is the opposite of the order used in the original experiment. A deadline, settable in a configuration file, will require the classification key to be pressed within a certain time after the digit (such as 500 ms), or an error will occur.
      3. [FR 11.1.3]Correct user input should thus proceed as follows: digit, hostile/neutral; digit, hostile/neutral; ....
      4. [FR 11.1.4]Possible incorrect keypad entry
        1. [FR 11.1.4.1]The system should recover more gracefully from incorrect user input than the current system, in which the user could see a series of "Incorrect Input" responses. This improvement shall be accomplished in part by moving hostile and neutral responses to unique keys, and in part by more tightly controlling correct and incorrect user inputs.
        2. [FR 11.1.4.2]If a user types two digits in a row, the first digit was an error and the key-entry error signal (probably a buzzer) will sound, as detailed in the two cases below (a state-machine sketch interpreting these rules appears just after this list of functional requirements).
          1. If the first digit corresponds to a black blip, then the buzzer will sound immediately when the digit is pressed. The blip stays black and the user will have a later opportunity to classify.
          2. If the first digit corresponds to a red, blue, or amber blip, then the buzzer sounds when the second digit is pressed, and the first digit blip is classified incorrectly and turns white (possibly with a red X on it). This creates the awkward situation in which the user can get buzzed when pressing the second digit even though the second digit is the start of a correct classification. We will live with this. It should happen infrequently.
    2. [FR 11.2]Joystick
  11. [FR 12]Eye tracking
    1. [FR 12.1]Gaze contingent
      1. [FR 12.1.1]The task window that the user is not currently looking at, as determined by the eye tracker, can be blanked out or covered with a mask. This feature can be turned on and off via the configuration file.
      2. [FR 12.1.2]A window's gaze contingent region extends one degree of visual angle beyond its physical borders in all directions. This implies the following: Between the two sub-task windows, there will be a strip of unused space that is a minimum of two degrees of visual angle wide.
      3. [FR 12.1.3]A sub-task window is defined as "looked at" up until two consecutive eye tracker samples occur within the other window.
    2. [FR 12.2]Data collection
      1. [FR 12.2.1]For collecting and analyzing eye movement data, a region of interest must be defined for each blip, and that region of interest will move with the blip.
    3. [FR 12.3]Calibration and recalibration
      1. [FR 12.3.1]The system should show the calibration points when the experimenter (or automated calibration) initiates a calibration.
      2. [FR 12.3.2]Automated calibration accuracy checking
        1. [FR 12.3.2.1]Immediately before the start of each scenario, and immediately following the final scenario, the eye tracker accuracy will be verified by having the participant look, in turn, at four cued locations on the screen. These cued locations will be located near the four extreme corners of the experimental display. If the accuracy for any location does not fall within one degree of visual angle, a recalibration will be triggered, except for the final verification. For any post-scenario verification that is found in error, the preceding scenario's eye tracking data will be marked as questionable, and the accuracy error will be recorded.
        2. [FR 12.3.2.2]Before a recalibration is initiated, a dialog box will instruct the user to tell the experimenter that the eye tracker is about to be recalibrated. The dialog box will have two buttons: <Calibrate> and <End Experiment>.
  12. [FR 13]Spatialized audio
    1. [FR 13.1]A sound event will include time, 3D location, sound file, and perhaps other features such as intensity.
    2. [FR 13.2]It must be very easy to modify the sound configurations, such as by having a configuration file with a slot or descriptor for every possible unique sound event that could occur. That slot or descriptor will point to the sound file to be used for that event, or be left blank to indicate that there is no sound for that event.
    3. [FR 13.3]HRTF selection
    4. [FR 13.4]HRTF validation
    5. [FR 13.5]Preventing overlapped sounds
      1. [FR 13.5.1]An output data file will indicate when two sounds overlap.
    6. [FR 13.6]Configure audio level
      1. [FR 13.6.1]The audio level for all sounds shall be in the 60-70 dB range.
  13. [FR 14]Post-task display
  14. [FR 15]Scenario playback
    1. [FR 15.1]The software will provide an option to display the user's gaze point on a layer behind the task stimuli. This will permit video capture and playback for conference audiences. The gaze point will be indicated by a circle whose stroke is one color and whose fill is another. The size of the circle will be roughly one degree of visual angle in diameter.
  15. [FR 16]Task dynamics
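
The keypad entry rules in FR 11.1 amount to a small state machine. The sketch below is one possible reading of those rules, following the digit-then-classification order of FR 11.1.2 and 11.1.3; the class, method, and return-value names are illustrative, and the blip-color lookup is assumed to be supplied by the tactical task.

    import time

    class KeypadEntry:
        """Digit-then-classification keypad entry, one reading of FR 11.1.2 - 11.1.4."""

        DEADLINE_S = 0.5   # classification deadline after a digit; configurable per FR 11.1.2

        def __init__(self, blip_color):
            self.blip_color = blip_color   # callable: blip number -> "black", "red", "blue", "amber", or "white"
            self.pending = None            # (digit, timestamp) awaiting its hostile/neutral key

        def press_digit(self, digit):
            if self.pending is None:
                if self.blip_color(digit) == "black":
                    return ("BUZZ",)                  # too early; blip stays black (FR 11.1.4.2, case 1)
                self.pending = (digit, time.monotonic())
                return ("WAIT",)
            # Two digits in a row: the first digit was an error (FR 11.1.4.2, case 2).
            first, _ = self.pending
            self.pending = (digit, time.monotonic()) if self.blip_color(digit) != "black" else None
            return ("BUZZ", "MARK_INCORRECT", first)  # first blip turns white, possibly with a red X

        def press_classification(self, hostile):
            if self.pending is None:
                return ("BUZZ",)                      # classification key with no pending digit
            digit, t0 = self.pending
            self.pending = None
            if time.monotonic() - t0 > self.DEADLINE_S:
                return ("BUZZ", "TOO_LATE", digit)    # classification deadline exceeded (FR 11.1.2)
            return ("CLASSIFY", digit, "hostile" if hostile else "neutral")

A driver loop would feed decoded keypresses into press_digit() and press_classification() and translate the returned tuples into feedback (buzzer, blip recoloring) and data-file events.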

Non-functional Requirements

  1. [NFR 1]For efficiency of programming and consistency in system performance, the new source code should reuse source code inherited from NRL wherever possible.
  2. [NFR 2]Hardware
    1. [NFR 2.1]The joystick shall be a USB joystick.
    2. [NFR 2.2]The software will run on Macintosh OS X (native, not in Classic mode).
  3. [NFR 3]Software performance
    1. [NFR 3.1]Unless otherwise stated, the new software should function and perform similarly to the previous version of the experimental software inherited from NRL.
    2. [NFR 3.2]Metrics, measurement, and validation
      1. [NFR 3.2.1]There should be less than 16 ms of error in the system's time stamp of eye movement data.
      2. [NFR 3.2.2]The system should play audio files within 15 ms of their initiation.

  4. [NFR 4]Extensibility and reusability
    1. [NFR 4.1]The software will be designed in a modular fashion such that, with confined modifications to the source code, the sound events could be sent to other spatialized sound systems (other than just AuSIM).
    2. [NFR 4.2]The software will run with or without an eye tracker, and with or without a head tracker.
    3. [NFR 4.3]The software may work across two video monitors.
  5. [NFR 5]Reconfiguration of experimental design
    1. [NFR 5.1]Auditory and visual stimuli
    2. [NFR 5.2]User response
    3. [NFR 5.3]It should be easy to reconfigure both of the above.
  6. [NFR 6]Output files
    1. [NFR 6.1]The system shall use 7-bit ASCII for all output files (a minimal logging sketch follows this list).
  7. [NFR 7]System documentation
    1. [NFR 7.1]The SRS (the document you are reading now) must be editable by all developers. A numbering and indentation scheme will be used that permits easy reference, updating, and modification. All versions of the SRS will list the authors and the date.
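
To illustrate how timestamped, 7-bit ASCII output (FR 5.1, NFR 6.1) might be produced, here is a minimal logging sketch; the file layout, class name, and tab-delimited format are assumptions, not the actual NRL output format.

    import time

    class EventLogger:
        """Write timestamped experiment events as 7-bit ASCII, one tab-delimited event per line."""

        def __init__(self, path, participant_id, scenario_name):
            self.start = time.monotonic()
            self.out = open(path, "w", encoding="ascii")   # raises if a non-ASCII character slips in
            self.out.write("# participant\t%s\tscenario\t%s\n" % (participant_id, scenario_name))

        def log(self, event, *fields):
            """Record an event with a timestamp in ms relative to scenario start (FR 5.1)."""
            ms = (time.monotonic() - self.start) * 1000.0
            self.out.write("\t".join(["%.1f" % ms, event] + [str(f) for f in fields]) + "\n")
            self.out.flush()

        def close(self):
            self.out.close()

    # Example usage (hypothetical event names):
    #   log = EventLogger("p01_scenario1.txt", "P01", "scenario1")
    #   log.log("KEYPAD", "digit", 3)
    #   log.log("CLASSIFY", 3, "hostile", "correct")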

References

Brock, D., Ballas, J. A., Stroup, J. L., & McClimens, B. (2004). The design of mixed-use, virtual auditory displays: Recent findings with a dual-task paradigm. Proceedings of ICAD 04, The Tenth Meeting of the International Conference on Auditory Display, Sydney, Australia, July 6-9, 2004.

Kieras, D. E., Ballas, J., & Meyer, D. E. (2001). Computational Models for the Effects of Localized Sound Cuing in a Complex Dual Task. (EPIC Report No. 13). Ann Arbor, Michigan: University of Michigan, Department of Electrical Engineering and Computer Science.