
CHAPTER X
EVALUATION

In previous chapters, three in-depth case studies demonstrated how this new visualization design process can be applied in practice. With those examples documented, this chapter discusses some of the benefits and drawbacks of the proposed design process.

The most obvious benefits come from the minimal overhead developers incur by using existing software products. The primary overheads are limited to the initial cost of the software and the effort of learning the visualization environment and its underlying data model. Traditional development processes, by contrast, have a single, overwhelming overhead: programming. Evaluating this new process, however, must go beyond the overheads that are encountered. If this environment is used only for prototyping, with the final implementation built from scratch, then both sets of overheads are incurred. Yet even in this worst-case scenario, the proposed methods offer advantages.

It was mentioned earlier that this design technology fosters iterative design and evaluation by minimizing the impact that modifications to either the data or the graphical aspects of a visualization have on the other parts of the visualization process. This is perhaps the aspect of this research most relevant to the field of performance visualization in general. Studies of performance visualizations have been few because of the effort required just to create the displays: developers are forced to devote a majority of their time to building the visualization tool rather than testing its usability [28,42]. Formal evaluation is apparently sacrificed for two primary reasons. First, by the time a tool is completed, so much time has already gone into the project that developers cannot afford to extend their work to include evaluation. Second, even if evaluations were done, the resulting modifications could impose significant additional programming costs. Even tools that market themselves with buzzwords like "modular" and "extensible" usually require considerable programming expertise [10]. Using existing data visualization software stands to refine the field of performance visualization by enabling researchers to conduct usability studies and formal evaluations of visualizations more easily, and thereby determine which displays are indeed useful.
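The decoupling described above can be illustrated with a small sketch. The pipeline abstraction, stage names, and sample data below are hypothetical and do not reflect Data Explorer's actual API; the point is only that in a dataflow-style environment, swapping the data-to-graphics mapping leaves the import and render stages untouched:

```python
# Hypothetical sketch of a dataflow-style visualization pipeline.
# Stage names and data are illustrative, not Data Explorer's real modules.
from typing import Callable

Module = Callable[[object], object]

def pipeline(*stages: Module) -> Module:
    """Compose modules left to right into one visualization pipeline."""
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

def import_trace(path):            # read raw performance data (stubbed)
    return [0.2, 0.9, 0.5]         # e.g., per-processor utilization

def map_to_bars(values):           # one possible data-to-graphics mapping
    return [("bar", v) for v in values]

def map_to_kiviat(values):         # an alternative mapping; nothing else changes
    return [("kiviat_axis", v) for v in values]

def render(glyphs):                # stand-in for the rendering stage
    return "scene with %d glyphs" % len(glyphs)

# Changing the display style means replacing exactly one stage.
bar_view    = pipeline(import_trace, map_to_bars, render)
kiviat_view = pipeline(import_trace, map_to_kiviat, render)
```

Because the import and render stages are reused unchanged, an evaluation-driven change to the display costs one module, not a rewrite of the tool.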

Scientific visualization packages by nature offer a high degree of user control. Thus, the ability to customize displays is built into the package. Other features like display interaction, animation, and modularity are also present. In particular, the ability to interact with what is shown on the screen will become an absolutely necessary component of next-generation visualization tools as displays become more complex and take advantage of multiple dimensions. These packages have many of the more difficult-to-program features already established and again allow visualization developers to focus on the quality of their displays rather than the code needed to generate them.

On the other hand, it is conceivable that generalized data visualization packages contain too much functionality, resulting in "environment overkill." In other words, the software may be too general, leading to slower performance and unnecessary complexity. Experience supports this to a limited degree: the work with Data Explorer was done on an IBM RS/6000 Model 730, which required additional graphics hardware and more than 64 megabytes of memory to perform acceptably. Tools built specifically for performance visualization generally do not place such demanding requirements on the computing system.

Of particular concern to performance visualization developers is a tool's ability to handle animation (real-time or post-mortem) and dynamic display interaction. Experience suggests that without costly hardware, these usability requirements are not adequately met by simply applying scientific visualization software. Technology will likely overcome some of these limitations, but for the time being they make practical use of this technique more difficult. The hardware requirements imposed by scientific visualization packages are not without reason, though: these systems offer extremely flexible and powerful graphical techniques. For example, performance visualization developers would undoubtedly find it advantageous to compose a new display from two or more simpler displays, as was done in the Kiviat diagram examples discussed in Chapter VII. Furthermore, these products offer flexible and general data models that can be used to generate a wide range of visualizations, are designed to handle large multi-dimensional data sets, and are usually portable across many architectures.
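The kind of display composition used in the Kiviat examples can be sketched abstractly. The functions and scene representation below are hypothetical illustrations, not the package's actual operations: each simple display yields a list of (layer, glyph) pairs, and composition merges those lists into one scene, so the component displays need not know about each other:

```python
# Hypothetical sketch of building a composite display from simpler ones,
# in the spirit of the Kiviat diagram examples. Each simple display is a
# list of (layer, glyph) pairs; composition merges them into one scene.

def kiviat_outline(metrics):
    # one axis per performance metric, drawn as the base layer
    return [("axes", name) for name in metrics]

def utilization_polygon(values):
    # the measured values, drawn as an overlay on the same axes
    return [("overlay", v) for v in values]

def compose(*displays):
    """Merge simpler displays into one scene, base layers drawn first."""
    scene = []
    for display in displays:
        scene.extend(display)
    return sorted(scene, key=lambda g: 0 if g[0] == "axes" else 1)

composite = compose(utilization_polygon([0.4, 0.8]),
                    kiviat_outline(["cpu", "msgs"]))
```

Neither component display changes when a third overlay is added, which is precisely the flexibility the general-purpose packages buy with their heavier hardware demands.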


Last modified: Wed Jan 20 15:14:39 PST 1999
Steven Hackstadt / hacks@cs.uoregon.edu