A.Hornof -- 11/14/2023
CS 443/543, Fall 2023
Project #4
Conduct a User Observation Study
- Due dates:
- • Monday, November 20, 2023, 10PM: An initial draft of Sections I and II of your Usability Report.
• Monday, November 27, 2023, 10PM: Complete project due on Canvas.
• Tuesday, November 28, 2023, in class: USB thumb drive of videos; in-class presentation.
- Purpose of assignment:
- Evaluate two systems with a user
observation study. Identify how certain design decisions may help or hinder efficient and effective use of the system. Identify what works in each interface, and what its problems are. Propose reusable design decisions based on what worked, or
design modifications to address the problems.
- User Observation Study
- The point of this project is to provide a substantial and realistic evaluation of two user interfaces. You should strive to get your evaluation as close as possible to observing real users doing real tasks in the actual intended context of use. Follow the guidance provided in the required reading (in Rosson 7, Sharp 8.6, and Sharp 14) for conducting
user observation studies. Write a script that you will follow for each participant. Turn in the
script with your evaluation. The script should incorporate all of Apple Computer's Guidelines for Conducting User Observations (PDF download). Your study should be in-person, not remote.
- Some specific experimental design criteria
- 1. Determine two different versions of Project 3 that you will evaluate. (Both systems should work as intended. This project should evaluate usability, not whether a system works at all.)
2. Establish the performance measures that you will use. There should be many measures. If the system is intended to help someone learn, the performance measures should probably include some sort of evaluation, or quiz, outside of the system; the quiz could be on the learning technique embedded into the system, and on material that was studied. However, there should be many intermediary measures of usability as well, not just a quiz at the end.
3. You can also look at practice effects, such as how practice with one interface helps a user to perform in a subsequent block with another, or the same, interface.
4. Decide on whether you want to do a within-subjects or between-subjects design, and justify it in your report.
5. Collect data from at least four participants (or eight participants if you are working as a pair of students). Please do not recruit computer
science students or students in this class for your study. You want your users to be as close to real users as possible.
6. Make sure you have a reasonable set of specific tasks for the user to accomplish. Each user should spend at least 20 minutes doing each task.
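For the performance measures in criterion 2, one lightweight approach is to have your system log a timestamped record for every user input, and compute task times, button-press counts, and errors from the log afterward. The following is a minimal sketch, not a required implementation; the class name and CSV fields are illustrative, so adapt them to your own system and measures.

```python
import csv
import time

class TrialLogger:
    """Writes one CSV row per user input: participant, condition,
    trial number, elapsed seconds within the trial, key, and correctness.
    (Field names are illustrative; adapt to your own measures.)"""

    def __init__(self, path, participant_id, condition):
        self.file = open(path, "w", newline="")
        self.writer = csv.writer(self.file)
        self.writer.writerow(["participant", "condition", "trial",
                              "elapsed_s", "key", "correct"])
        self.participant_id = participant_id
        self.condition = condition
        self.trial = 0
        self.start = time.monotonic()

    def begin_trial(self):
        # Call at the start of each trial to reset the trial clock.
        self.trial += 1
        self.start = time.monotonic()

    def log_keypress(self, key, correct):
        # Record one user input with its time offset from the trial start.
        elapsed = time.monotonic() - self.start
        self.writer.writerow([self.participant_id, self.condition,
                              self.trial, f"{elapsed:.3f}", key, correct])

    def close(self):
        self.file.close()
```

Derived measures such as task completion time, number of button presses, errors, and backtracking can then be computed from the log after each session, rather than tallied by hand during it.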
- Record your data
- Record the user interaction of your user observation study. Record the user's button presses and all of the system outputs, including the relevant visual feedback that the user gets. This will probably require videorecording.
Create good-quality videorecordings. Make sure that your videos have a stable image (by using a tripod), have adequate sound quality to hear all of the user statements, and have good exposure and focus. Make sure that all of the user's inputs into the system, and all of the system's outputs, whether auditory or visual, can be clearly seen and heard. A selfie stick, duct tape, and a stack of boxes should permit you to position a smartphone just over the laptop.
Test the camera setup before the actual user observation study to make sure
that you will be able to clearly capture what is displayed by the system along with the user's inputs to the system. It is not important to capture the user's face, but it is important to record all user inputs, all system outputs, and everything spoken during the study.
Keep your videos under 1GB per file. Submit mp4 videos, not other formats such as mkv.
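Before copying videos to your submission media, you might sanity-check the size and format requirements with a short script. This is an optional convenience sketch, assuming all of your videos sit in one directory:

```python
import os

MAX_BYTES = 1_000_000_000  # 1 GB per file, per the submission requirements

def check_videos(directory):
    """Return a list of problem descriptions for files in `directory`:
    non-.mp4 extensions, and files at or over the 1 GB limit."""
    problems = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        if not name.lower().endswith(".mp4"):
            problems.append(f"{name}: not an .mp4 file")
        elif os.path.getsize(path) >= MAX_BYTES:
            problems.append(f"{name}: {os.path.getsize(path)} bytes (over 1 GB)")
    return problems
```

An empty return value means every file in the directory meets the two stated requirements.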
You can reserve and check out equipment, such as video cameras and tripods from the UO Technology Service Desk. Start the process by clicking on the "Reserve Equipment" link at the top-right or bottom of the Technology Service Desk web page.
- Submit via UO Canvas a single .zip file that contains:
1. Usability Report. Use the structure of the report given
below. Submit it as a PDF file.
2. A 5-to-10-slide PDF presentation that shows the results of your study.
3. A scan of an informed consent form (PDF download) signed by each
participant in the study.
4. A .zip file of the code you used in the study (but please do not resubmit the large sound files provided to you for the project).
- Submit in class:
A USB flash drive, labeled with your initials, containing the video from the user study. The instructor will attempt to provide you with a USB flash drive for this. If you cannot pick up a drive in class, please use your own, and the instructor will return it to you next term. Please do not submit your video via an internet-based or cloud-based service.
- Class Presentation
- Prepare a brief, 5-slide, 5-minute presentation that describes the unique aspects of your study in the following areas: the questions that your study examined, the user interface and variations that were used, the experimental methodology, the data you collected (such as speed and accuracy for each interface), and the conclusions that you have formed. Your audience will be the students and the instructor in this class, so focus your presentation on the things that were unique to your study. Please do not restate the assignment for everyone. Please skip over slides that do not present unique aspects of your study. Practice your presentation. Guidance for the in-class presentations (PDF) has been provided but, again, please focus on what is unique to your study.
- The User Observation Study Report
The report should be about 2,000 words (per student if you are working in pairs), in addition to all of your test materials such as your testing scripts, notes from your debriefing sessions, and consent forms. Structure your report as follows. (Note that Appendix A.1 of Rosson & Carroll (on p. 366) provides an example that follows a similar structure. Also note how the graph in Figure A.3 clearly summarizes the data for each level of each condition.)
I. Introduction
- The systems being evaluated - A brief description of the systems.
- Overview of study - A high level overview of the goals of your user observation study such as: (a) specific hypotheses that the study will test, and/or (b) objectively measurable usability goals such as being able to accomplish a set of benchmark tasks within a certain amount of time, without exceeding a certain number of keystrokes or button presses, and while maintaining a certain level of accuracy.
II. Methodology
- This section should make it clear to the reader exactly how the study was conducted, and thus how well the study created an opportunity to find true and real cause-and-effect relationships between the interface and the usability measures.
- Participants - Describe who participated in your study and how you recruited them. For example, describe the participants' ages, occupations or majors, and their experience with the sort of technology and systems being evaluated. This will help the reader to understand the participants and their motivations and expectations when trying out the system. But do not provide any identifying information of the participants, such as their names or their specific family relationship to you.
- Setting - Describe the physical and social setting in which the evaluation took place. It is okay to disclose a specific location.
- Materials - Describe your testing materials, such as the computers and devices that you use to conduct and record the evaluation, and the materials that you use when running the study such as your testing scripts.
- Experimental Design - Describe the different treatments ("conditions") that you will be using in the experiment (such as one interface versus another interface) and the specific organization of the individual tests (called "trials") that you will be presenting to your participants/users, and how the trials are organized into useful groups (called "blocks"). For example: a block of five trials with the visual feedback, followed by a block of five trials without the visual feedback, with this order reversed for half of the participants.
- Procedure - Describe the specific steps that you took to administer your user study. Include the script that you used when running the study.
- Threats to external validity - Identify three threats to external validity and explain how the experimental design combatted these threats.
III. Results - This section summarizes the results—the data that were observed—including from watching the video and from direct observation during the study. This section should identify usability problems that were identified, and should permit the reader to decide whether the data support the hypotheses, and whether performance targets were met. For example, summarize how long each participant took to do each task, with how many button presses, how many errors, and how much backtracking, as a function of each unique experimental condition. Perhaps include a section for each of the measures. The data summaries in the figures and tables in Rosson & Carroll Chapter 7, such as Figure 7.12 on pages 264-266 and Figure A.3 on page 370, provide good examples of how to present your data. Note how easy it is to look at these figures and tables, and understand how people performed across the different conditions.
IV. Analysis or Discussion - This section responds to the results, such as to identify trends in the data, and to propose explanations for why those trends appeared. Rather than reporting the data, this section summarizes and organizes the conclusions that you draw from the data. You can organize your subsections based on the trends. Be sure to include:
- Overall assessment of whether design decisions helped or hindered efficient and effective use of each system, and whether the systems met their usability goals.
- If two systems were compared, an assessment of the strengths and weaknesses that emerged.
- Proposed interface modifications.
V. Conclusion - Briefly summarize the findings of the study, what was learned with this project (either specific to the study or about user interfaces in general), and any thoughts about the system that you built or might like to build as a result of this project.
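For the Results section, the per-condition summaries in Rosson & Carroll's figures can be produced by aggregating your trial data by condition. A minimal sketch, assuming each trial has been reduced to a (participant, condition, task time in seconds, error count) tuple; the tuple layout is an assumption for illustration:

```python
from collections import defaultdict

def summarize_by_condition(trials):
    """Compute mean task time and mean error count per condition.
    `trials` is an iterable of (participant, condition, task_time_s, errors)."""
    times = defaultdict(list)
    errors = defaultdict(list)
    for _participant, condition, task_time_s, n_errors in trials:
        times[condition].append(task_time_s)
        errors[condition].append(n_errors)
    # One summary entry per condition, ready for a table or bar chart.
    return {
        condition: {
            "mean_time_s": sum(times[condition]) / len(times[condition]),
            "mean_errors": sum(errors[condition]) / len(errors[condition]),
            "n_trials": len(times[condition]),
        }
        for condition in times
    }
```

A table or figure with one row or bar per condition, built from these means, makes it easy for the reader to compare performance across the interfaces.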
Additional Guidance
Use real-world tasks that truly capture how the system is intended to be used.
Your Project 4 tasks should be as close to real-world tasks as possible. For example: "Please apply the SQ3R reading technique to this chapter, and really try to learn both the SQ3R technique and the main points of the chapter."
Try to limit your interaction with your users other than to get them started in the experiment.
All of the instructions for actually doing the task should be written down. Please do not engage in a back-and-forth discussion about how to do the task, or what to do next.
Run a user observation study, not a system test.
You should only run your study with Project 3 systems that do not crash or have other serious problems. Please thoroughly test your Project 3 system in the exact test environment that you will use for your user observation study, and then do not modify the code any further. Once you have a system that is adequate for Project 4, and fully tested, please stop developing the system. Last minute modifications before demoing software are often disastrous. If the system crashes on the user, it is not a user observation study, but it instead becomes a failed system test.
Feel free to improve or modify your Project 3 systems. But then test them thoroughly before using them for Project 4.
User-observation studies should be done in person.
Part of the fun and challenge of doing user observation studies is collecting data in-person. One way to recruit participants is to just set up with a little sign in a library or in the EMU, and recruit students on the spot.
Criteria for Evaluation
The projects will be graded based on the following:
- Does the study provide a useful, informative, and substantial comparison of two user interfaces that are different in a specific and interesting manner?
- Does the methodology clearly explain the experimental design and how the data were collected?
- Was the experimental design a good design (such as, well-balanced, with good validity)?
- Were threats to validity identified and combatted?
- Are the observed data summarized in a useful and informative manner?
- Does the analysis identify interesting trends in the data and propose explanations for these trends?
- Does the conclusion provide a thoughtful summary of what was learned in the project?
- Did the class presentation concisely convey the experiment and conclusions?