Domain-Specific Metacomputing for Computational Science:
Achieving Specificity Through Abstraction
By Steven Hackstadt

CHAPTER 5
Supportive Research In Computational Science

Although domain-specific software architectures (DSSAs) address the integration of domain knowledge into well-engineered software systems, they do not guarantee that the computational and analytical capabilities of the resulting system actually meet the demands of the scientist. After all, computational science problems are characterized by diverse computational and analytical requirements that evolve over time. DSSAs focus primarily on design issues.

Problem-Solving Environments

Computational scientists are less concerned with how well-designed a software system is than with how well it solves the specific computational problems they have. In this regard, problem-solving environments are touted as a major breakthrough for computational science because they provide highly usable and useful computational capabilities [GHR94, Rice96, WHRC94]. (As we have mentioned earlier, PSEs also emphasize careful software design [GHR94, MCWH95], but that is not the primary concern here.) Gallopoulos et al. [GHR94] identify three measures of problem-solving environments: scope, power, and reliability. Scope refers to the number of problems that a PSE addresses. Generally speaking, a PSE becomes easier to build as its scope is reduced. The power of a PSE measures its ability to actually solve the problems that can be posed to it. As scope increases, the likelihood that some of the problems cannot be solved also increases. Finally, reliability refers to a PSE's ability to produce correct solutions. For example, returning a message indicating the PSE's inability to solve the problem is much more desirable than a wrong answer [GHR94].

Problem-solving environments seek depth of support for specific computational functionality. As an example, the PDELab problem-solving environment [WHRC94] specializes in solving partial differential equations (PDEs). PDELab attempts to support the entire spectrum of activities in which scientists might engage while working with PDEs, from brainstorming, trial-and-error reasoning, and simulation to optimization, visualization, and interpretation. PDELab's goal is to emulate all of these processes and to automate many of them. PDELab is separated into three logical layers, each of which provides support in several areas:

Application Development Framework: specification tools, computational skeletons, knowledge sources, and visual programming tools.

Software Infrastructure: software bus, object-oriented user interfaces, programming environments, language translation systems, graphics display systems, and machine managers.

Algorithms and Systems Infrastructure: numerical libraries, parallel compilers, and expert system engines.

There is little doubt that systems like PDELab hold significant promise for certain scientists. Indeed, many isolated physical phenomena can be modeled by systems of partial differential equations [GHR94, HCYH94, WHRC94]. But increasingly, scientists are interested in coupling two or more distinct models. For example, in climate modeling, it is desirable to couple distinct models of atmospheric and oceanic circulation and chemistry, land surface and sea ice processes, and trace gas biogeochemistry [MABB94]. Each of these models could individually be addressed by a separate PSE. But supporting composite and complex processes like climate modeling is beyond both the scope and the power of existing PSEs.

Multi-Component Modeling

The example of climate modeling is indicative of a more general trend in computational science toward whole-system modeling. For example, the Accelerated Strategic Computing Initiative (ASCI) calls for "full-system" and "full-physics" applications for maintaining the nation's nuclear stockpile [DOE96]. Full-system refers to all the components (nuclear and non-nuclear) of a weapons system; full-physics refers to all the modeling techniques (e.g., physical, chemical, material, and engineering) required for simulation. Similarly, automobile manufacturers desire more robust modeling capabilities. Houstis et al. [HRJW95] detail the diverse problem areas encountered in modeling an engine:

The analysis of an engine involves... thermodynamics (gives the behavior of the gases in the piston-cylinder assemblies), mechanics (gives the kinematic and dynamic behaviors of pistons, links, cranks, etc.), structures (gives the stresses and strains on the parts) and geometries (gives the shape of the components and the structural constraints).

The total design and analysis of an engine requires that solutions to these individual problem areas interact to determine a final solution. Drashansky et al. [DJRH97] summarize the evolution of multi-component physical systems:

Computational modeling will shift from the current single physical component design to entire physical systems with many components that have different shapes, obey different physical laws, and interact with each other through geometric and physical interfaces.

Thus, for the experimental computational scientist, PSEs pose a dilemma. By definition, a PSE provides computational support for a single, well-defined, and well-understood type of problem [GHR94]. The computational scientist, however, may require support for several such problems. Worse yet, some of those problems may be poorly defined and/or poorly understood [BRRM95, CDHH96, HRJW95].

Multidisciplinary Problem-Solving Environments

Recognizing this and having had at least moderate success with prototype problem-solving environments, the PSE community is now pursuing the creation of multidisciplinary problem-solving environments (MPSEs) [HRJW95]. As suggested above, an MPSE provides a framework and software kernel through which multiple problem-specific PSEs are brought to bear on complex and composite problems. Houstis et al. [HRJW95] identify three general requirements of MPSEs:

MPSEs and PSEs share an emphasis on sound software engineering practices, but MPSEs are particularly concerned with achieving a high degree of software reuse because "without software reuse, it is impractical for anyone to create on his own a large software system for a reasonably complicated application" [HRJW95].

MPSEs also place a greater emphasis on network-based computing than do PSEs. Researchers envision agent-based systems accessible in much the same fashion as the World Wide Web is today. In this scenario, MPSE servers export user interface agents to local desktop computers through which users build and operate MPSE applications. In turn, the MPSE server requests services from a diverse metacomputing environment. In addition to computational hardware, the metacomputing environment contains computational software in the form of traditional problem-solving environments. Each PSE is contained in an agent wrapper that interacts with and controls the execution of the PSE to solve components of larger, multidisciplinary problems [DJRH97].
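
To make the agent-wrapper idea concrete, the following sketch shows one way a PSE might be presented to an MPSE kernel as an agent. The interface and class names are hypothetical illustrations, not the design of Drashansky et al. [DJRH97]; a real wrapper would translate problem specifications and monitor the wrapped PSE's execution.

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // Hypothetical agent interface: each wrapped PSE advertises which kinds of
    // subproblems it can handle and solves those delegated to it.
    class SolverAgent {
    public:
        virtual ~SolverAgent() {}
        virtual bool canSolve(const std::string& problemKind) const = 0;
        virtual std::string solve(const std::string& problemSpec) = 0;
    };

    // Wrapper around an imaginary PDE-solving PSE.
    class PdeSolverAgent : public SolverAgent {
    public:
        bool canSolve(const std::string& problemKind) const override {
            return problemKind == "pde";
        }
        std::string solve(const std::string& problemSpec) override {
            // A real wrapper would invoke and control the underlying PSE here.
            return "pde-solution(" + problemSpec + ")";
        }
    };

    // The MPSE kernel routes each component of a multidisciplinary problem
    // to whichever agent claims it.
    int main() {
        std::vector<std::unique_ptr<SolverAgent>> agents;
        agents.push_back(std::make_unique<PdeSolverAgent>());

        const std::string componentKind = "pde";
        for (const auto& agent : agents) {
            if (agent->canSolve(componentKind))
                std::cout << agent->solve("thermal stress in component A") << "\n";
        }
        return 0;
    }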

Whereas basic PSEs provide a consistent abstraction over component numerical libraries and modules, MPSEs create yet another level of abstraction in which entire PSEs are the objects of composition. While we discuss the issue of abstraction in the next chapter, we simply pose a question here: how many layers of abstraction are necessary? In addition, since PSEs are constructed as complete systems, it is not clear how easily they could be used as components in a larger system. It is widely accepted that reusable components must be designed and implemented with reuse in mind.

The result of this continuing composition of systems is larger, bulkier software. With each component PSE containing on the order of one million lines of code [JDRW96], it remains to be seen whether MPSEs will ever achieve the ubiquity predicted by their proponents. For now, their promising vision is far from realized. Even basic PSEs are still largely an emerging technology, with very few prototypes in existence. It is not surprising, then, that an early prototype MPSE [JDRW96] contains just one component PSE, which evolved out of the canonical PSE example, PDELab [WHRC94].

Even if efforts to build MPSEs are successful, MPSEs are ultimately subject to the same limitations as PSEs. With each component PSE able to address only a well-defined, well-understood computational process, MPSEs offer no way to break free of that restriction. Ultimately, to support experimental computational science, where a simulated process may not yet be clearly understood, a different means of support is required. The next section explores one such means.

Domain-Specific Environments

Supporting experimental computational science requires a fundamentally different approach to problem-solving. The problems of experimental computational science are much more diverse than those addressed by PSEs. It may be the case that scientists are trying to refine a method of simulation for a particular physical phenomenon. Or, perhaps several different techniques for modeling a biological, ecological, or environmental system are being explored. In other words, the problems of experimental computational science are often exploratory in nature. They may not be well-defined, and they are rarely well-understood. In addition, the problem-solving process may include several loosely connected steps or stages of analysis, each with its own unique requirements.

While this type of exploratory problem-solving may eventually evolve into a methodology that could be encapsulated in a PSE, until that happens, scientists still require the support of a robust computational environment. In fact, the science they conduct today is what ultimately allows their results to be applied more broadly tomorrow.

To this end, we propose and motivate three critical nonfunctional requirements for the domain-specific environments (DSEs) required by exploratory computational scientists:

Programmability facilitates rapid prototyping and provides a mechanism by which domain abstractions and user-defined functionality become an integral and essential part of the environment.

Extensibility allows the environment to be enhanced with new, unanticipated functionality and problem-solving techniques as the scientific process evolves over time.

Interoperability allows a wide range of tools to be brought to bear on the scientific process and enables the loosely connected components of the environment to work together via well-defined, open programming and communication interfaces.

Each of these requirements addresses limitations of problem-solving environments and multidisciplinary problem-solving environments. In particular, PSEs are programmable only at the highest levels, where they support visual programming and some template-based techniques. In contrast, DSEs provide frameworks of tools and analytical support that can be tailored (albeit at a lower level) to the unique requirements of each phase of the exploratory process.

PSEs provide extensibility by supporting the integration of foreign libraries and systems. Unfortunately, the new functionality is available only within the confines of the problem-solving context supported by the particular PSE. DSEs, however, permit the problem-solving context to expand and contract as necessary by maintaining a looser coupling between components, which, in turn, allows unanticipated functionality to be integrated more easily.

Finally, PSEs attempt to provide a complete set of interoperable tools necessary for supporting the problem-solving process; thus, the need to apply external tools is not a primary consideration. MPSEs create interoperability among several PSEs, but for the same reason they do not accommodate external tools and systems. DSEs, on the other hand, recognize that scientists may have pre-existing tools and systems that they want included in an initial DSE implementation. At the same time, DSEs recognize that as the scientific process evolves, those tools may be replaced or augmented with others. Being able to use commercial, off-the-shelf software as components is also critical. To this end, DSEs emphasize open and well-defined interfaces as a means of increasing this type of component-level interoperability.

These distinctions reveal a fundamental difference between PSEs and DSEs: Whereas PSEs attempt to support specific problem-solving methods, DSEs attempt to support the evolution of such methods. Thus, in many regards, PSEs and DSEs attempt to address very different needs. We contend that domain-specific environments are particularly applicable to exploratory computational science in metacomputing environments. The remainder of this chapter will describe two such environments.

A Domain-Specific Environment for Environmental Modeling

The purpose of the Geographic Environmental Modeling Systems (GEMS) is to provide general environmental modeling capabilities to "policy technicians" in regulatory agencies [BRRM95]. GEMS assists them in developing cost-effective controls for hazardous waste, air pollution, acid rain, and global climate change by providing a comprehensive computational environment which includes transparent access to parallel and distributed computing over high-speed networks; geographically distributed, object-oriented databases; and a robust graphical user interface.

At first glance, GEMS appears to be more like a problem-solving environment: it targets less technical end users, it provides general computational capabilities for a specific problem area, and it provides transparent access to a collection of underlying computational resources. Indeed, GEMS has much in common with the ideals of problem-solving environments. However, a closer examination reveals an overwhelming consistency with the characteristics and objectives of domain-specific environments. This section explores the GEMS system as a representative example of a domain-specific environment.

With respect to problem area, environmental modeling is far from the requisite well-defined, well-understood process suitable to a PSE. In fact, the science behind environmental problems is not widely agreed upon. Consequently, organizations currently use several different physical and chemical models, no one of which is recognized as being superior [BRRM95]. Furthermore, policy makers require extensive "what-if" capabilities to explore the implications of various control strategies. This type of analysis is complicated since air quality laws in the United States are largely "goal-oriented." That is, state and local policy makers can meet federal air quality regulations in a variety of ways [BRRM95]. Hence, environmental modeling (and the subsequent policy making) is largely an exploratory process in the context of uncertain scientific laws.

Design. Prior to the development of GEMS, researchers made several design decisions. Perhaps the most critical of these was the recognition that the design team itself had to be composed of both computer scientists and application scientists [BRRM95]:

Software engineers by themselves shouldn't try to develop a system with such ill-defined requirements.... [They] require the active participation of domain experts and end users, people who have traditionally been considered "outsiders" in the software development process. At the same time, environmental engineers cannot take risks with the most recent advances in computing technology without seeking to include computer science experts.

What is notable about this statement is the mutual need for collaboration. That is, each group (computer scientists and application scientists) views the other as "domain experts." In addition, though, Bruegge et al. suggest that "ill-defined requirements" are responsible for creating this need. The lack of firm requirements demanded that the developers identify a series of design goals instead.

One of the general goals was to develop an "open" modeling system. Whereas PSEs function more like "black boxes," Bruegge et al. [BRRM95] envisioned a "glass box" approach, which echoes the properties of programmability, extensibility, and interoperability:

Existing modeling systems have always been developed for a specific end-to-end purpose.... If the component parts were open to inspection, while at the same time maintaining internal consistency-hence the term "glass box"-it would be much easier to combine the parts to work as a coherent whole.

In fact, many of the design goals reflect these properties as well as the trade-offs that often arise among nonfunctional requirements. For example, the developers had to balance making the system easy to use and accessible to nonexperts against giving experienced users the power and flexibility to which they were accustomed. A similar trade-off arose as the team considered how to provide robust visualization support:

The main trade-off here is between providing sufficient power and flexibility to achieve a sophisticated analysis and making the system easy to use for more frequent rudimentary analyses.

In general, the developers wanted GEMS to act as a framework that could integrate a diverse set of known and unknown capabilities. Many of the specific design goals contributing to this vision are consistent with the ideas of domain-specific environments.

One of the most distinctive aspects of GEMS is the intense collaboration between computer scientists and application scientists to build a system that exhibited good software engineering principles and could deliver adequate computational performance, but that was also responsive to user needs and reflected domain knowledge of environmental modeling. The developers adopted a development model based on an object-oriented notation. Similar to the role played by the Zoom system of Anglano et al. [ASWB95], this notation served as a "two-way mirror" between the software engineers and environmental modeling experts [BRRM95].

Implementation. The GEMS prototype consists of about 75,000 lines of C++ code, an order of magnitude less than that required by problem-solving environments. As mentioned earlier, Joshi et al. [JDRW96] claim PSEs can have as many as one million lines of code. Why is there such a large differential in code size between these two types of environments? The answer is not completely clear, but the DSE properties described earlier offer some possibilities. First, DSEs emphasize interoperability with existing systems. A relatively small amount of carefully constructed interface code allows a wide range of functionality to be available through the environment without having to be expressed as part of the system code. Second, programmability offers similar savings; domain knowledge, tool customization, and a variety of system-wide configurations are expressed outside of the core system code. Finally, the need for extensibility suggests that not all of the anticipated functionality is yet part of the system; hence, the system is operable and usable despite being functionally incomplete. The important observation, though, is that because of the nature of their science, the scientists may not yet know what the missing functionality is. A DSE is designed and built to accommodate this evolution. As a result, a preliminary incarnation of a DSE likely contains fewer lines of code.

The implementation of the GEMS framework is based on a collection of five core components: user interface, visualization, execution, monitoring, and data management. The GEMS graphical user interface provides a point-and-click interface to system functionality. Through extensive and iterative prototyping, the GEMS interface reflects the user's view of environmental modeling with tools like Map View, Chemical Lab, and Population. Rather than a bulky and inconvenient afterthought, visualization is an integral part of the interface used for both specification and results analysis.

The execution component facilitates access to the underlying environmental models; it is not the model itself [BRRM95]. The goal is to allow the user to choose and interact with the model. To this end, the execution component uses an event-based system to handle the communication between the computational model and the other components of the framework. GEMS targets a metacomputing environment in which the computationally intensive models run on supercomputers or networks of workstations, visualization services utilize graphics displays, databases reside on geographically distributed servers, and the user interface appears on the user's workstation. An event-based system provides a loose, yet flexible, coupling among GEMS components. It also limits the amount of GEMS-specific code that must be inserted into the models themselves to make them compatible with the rest of the system [BRRM95].
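
A minimal, single-process sketch of such an event-based coupling is shown below. The names are illustrative only and do not reflect the GEMS implementation; the point is that the model announces progress through a small publish call, while the interface and visualization components couple to it only through the events they subscribe to.

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Minimal publish/subscribe event bus: components register interest in
    // named events; the model announces progress without knowing who listens.
    class EventBus {
    public:
        using Handler = std::function<void(const std::string&)>;
        void subscribe(const std::string& topic, Handler handler) {
            handlers_[topic].push_back(std::move(handler));
        }
        void publish(const std::string& topic, const std::string& payload) const {
            auto it = handlers_.find(topic);
            if (it == handlers_.end()) return;
            for (const auto& handler : it->second) handler(payload);
        }
    private:
        std::map<std::string, std::vector<Handler>> handlers_;
    };

    int main() {
        EventBus bus;
        bus.subscribe("model.step", [](const std::string& data) {
            std::cout << "visualization: render " << data << "\n";
        });
        bus.subscribe("model.step", [](const std::string& data) {
            std::cout << "user interface: update progress for " << data << "\n";
        });

        // The model needs only one publish call per step, which keeps the
        // model-side instrumentation small.
        for (int step = 1; step <= 3; ++step)
            bus.publish("model.step", "timestep " + std::to_string(step));
        return 0;
    }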

Earlier, we suggested that one of the reasons DSE code size might be significantly less than that of PSEs is that DSEs often rely on existing systems. The GEMS visualization capability provides a good example: it uses a commercial off-the-shelf visualization system (PV-Wave). The visualization component acts as an interface between the GEMS user interface and the PV-Wave rendering system. Not only does this reduce the code size, it also delivers better visualization capabilities, in less time, than the developers could have created on their own. Similar to the visualization component, the GEMS monitoring component integrates a pre-existing distributed monitoring system for identifying and debugging performance bottlenecks.
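
The sketch below suggests the kind of thin adapter layer this implies. The interface is hypothetical and the calls into PV-Wave itself are omitted (its actual API is not shown here); what matters is that the environment's core programs against a small abstract interface while the rendering work lives in the external system.

    #include <iostream>
    #include <string>
    #include <vector>

    // Abstract interface the rest of the environment programs against.
    class Visualizer {
    public:
        virtual ~Visualizer() {}
        virtual void plotField(const std::string& name,
                               const std::vector<double>& values) = 0;
    };

    // Thin adapter that would forward requests to a commercial rendering
    // system. Only the adapter structure is shown; that small interface layer
    // is what keeps the core system code small.
    class ExternalRendererAdapter : public Visualizer {
    public:
        void plotField(const std::string& name,
                       const std::vector<double>& values) override {
            // Translate the request and hand it to the external system here.
            std::cout << "forwarding field '" << name << "' ("
                      << values.size() << " samples) to external renderer\n";
        }
    };

    int main() {
        ExternalRendererAdapter renderer;
        renderer.plotField("ozone_concentration", std::vector<double>(100, 0.0));
        return 0;
    }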

With respect to data management, the developers were attracted to the rich data model supported by object-oriented database management systems (OODBMS). They adopted this paradigm because it showed "considerable promise in terms of ease of use and... flexibility" [BRRM95]. However, meeting the nonfunctional requirements of usability and flexibility came at the expense of performance:

The largest difficulty has been the performance penalty incurred with an OODBMS.... [and] we have not yet found a package that adequately meets the requirements of our system in practice.... [The] use of an OODBMS to manage the objects being displayed leads to unacceptable response times for an interactive system.

In summary, the GEMS framework provides a PSE-like user experience but addresses a much broader range of nonfunctional requirements essential to exploratory computational science. Key to accomplishing this is the design of a loosely coupled framework that can incorporate existing systems (interoperability), accommodate new functionality (extensibility), and allow the environment to be tailored to user needs (programmability). As such, GEMS is an excellent example of a domain-specific environment.

A Domain-Specific Environment For Seismic Tomography

The goal of the Tomographic Imaging Environment for Ridge Research and Analysis (TIERRA) project is to provide "a computational environment for tomographic image analysis for marine seismologists studying the formation and structure of volcanic mid-ocean ridges" [CDHH96]. The scientists are concerned with areas of the ocean floor where magma rises up from the Earth's mantle to form volcanoes. To understand the magmatic, hydrothermal, and tectonic processes, the scientists use a method called "seismic tomography" (which is similar to medical CAT scans) to construct 3D models and images of the ocean floor. The models and images are created by extensive computations on seismic wave data collected by seismometers positioned on the seafloor.

The goal of the environment is to support the general problem-solving process that the scientists use in their analysis; the computation of models and images is only one part of this process. As Cuny et al. [CDHH96] note, "the full analysis requires extensive, domain-specific input from a geoscientist."

Again, the similarity to PSEs is evident in this description. Cuny et al. [CDHH96] recognize this similarity and point to the same important difference proposed earlier: "the leading-edge science applications we are concerned about are exploratory and experimental in nature.... [and] demand domain-specific support that can adapt to changing requirements." Seismic tomography is characterized by a significant amount of interaction with, and intervention by, the scientist. Creating a sea floor model is not merely a task of specifying a few parameters, providing a dataset, and running a computation. The scientist actually plays an integral role in the validation, optimization, and convergence of the final model [CDHH96]. This is reflective of the poorly defined and poorly understood nature of this computational domain, which, in turn, makes it unsuitable as a PSE target.

Since scientists play such a central role in the problem-solving process, it is imperative that the environment fully consider their requirements. To this end, TIERRA adopted a collaborative process for design and implementation [CDHH96]:

The application scientists must establish requirements for problem investigation and evaluate the environment as its implementation proceeds. The computer scientists must fashion available technologies to address the requirements and possibly even develop new infrastructure to complete the environment.

Design. The TIERRA design process employed an informal method for identifying the requirements of the environment. After scientists described their overall problem-solving process, the group partitioned it into a series of seven steps. From each step, a list of domain-specific requirements was generated. The steps in the problem-solving process are diverse and include data processing, verification and validation, and testing and optimization. The corresponding computational requirements include tools and support for data manipulation; sophisticated, interactive visualization; improved performance; and program interaction and steering.

However, not all of these requirements were apparent ab initio; in some cases, they evolved over the course of the collaboration. The need for program (computational) steering is such a requirement. Far and away the most critical requirement for the geoscientists at the beginning of the collaboration was performance. Their previous computational support required approximately 6 hours per iteration of the main computation. The large amount of time required to generate models resulted in a decoupled and poorly integrated problem-solving process. The preliminary, trivial parallelization of the main computation, however, reduced the iteration time to about 25 minutes. This performance improvement "allowed more rapid analysis and the consideration of considerably larger data sets" [CDHH96]. An interesting side effect of the performance increase was that the computational analysis bottleneck shifted from model generation to model evaluation. Suddenly, it became possible to view the problem-solving process not as a "series of isolated computer runs, but as a series of steering operations on a single (long) computation" [CDHH96]. Thus, program interaction and computational steering became a requirement of the evolving environment.
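
Viewed this way, steering amounts to checking for scientist input between iterations of one long-running computation rather than launching isolated batch runs. The toy sketch below illustrates only that control structure; the update rule and steering decisions are fabricated for illustration and have nothing to do with the actual tomography code.

    #include <iostream>

    // Stand-in for one expensive model-refinement step.
    double refineModel(double model, double damping) {
        return model + damping * (1.0 - model);  // toy update toward a target
    }

    // Stand-in for consulting the scientist (or a steering tool) between
    // iterations. A real environment would read adjustments from a GUI or a
    // steering protocol rather than hard-coding them.
    bool pollSteering(int iteration, double* damping) {
        if (iteration == 3) {
            *damping *= 0.5;   // scientist decides the model is overshooting
            std::cout << "steering: damping reduced to " << *damping << "\n";
        }
        return iteration < 6;  // scientist ends the run when satisfied
    }

    int main() {
        double model = 0.0;
        double damping = 0.4;
        int iteration = 0;
        // One long computation punctuated by steering decisions.
        do {
            ++iteration;
            model = refineModel(model, damping);
            std::cout << "iteration " << iteration << ": model = " << model << "\n";
        } while (pollSteering(iteration, &damping));
        return 0;
    }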

A similar process led to the requirement of more sophisticated, interactive, 3D visualization. Scientists were originally content with simple, 2D plots created offline. However, model representations depicted in three dimensions were more easily understood, and since model evaluation had become more of a bottleneck, the scientists "became more convinced of their utility" [CDHH96].

Implementation. The features of programmability, extensibility, and interoperability are primarily evident in the two main tool components of the TIERRA framework. One component, DAQV (for Distributed Array Query and Visualization) [HM96], handles program interaction and computational steering. The other component, Viz [HHHM96], is a visualization programming system. Both systems embody all three of the desirable DSE features.

DAQV is a software library through which a parallel program with distributed data makes its data available to external tools. Tools, on the other hand, are presented with a logical, global view of program data and can make simple, high-level requests for data. DAQV accesses distributed data through a library of programmable data access functions. In addition, the library of access functions can be extended to perform preprocessing, reductions, or compositions on the distributed data before it is delivered to external tools. Finally, interoperability is achieved by using an open client/server protocol based on standard, socket-based, interprocess communication [CDHH96, HM96]. It is interesting to note that in its original form, DAQV did not support computational steering. However, because of its extensible design and open communication protocol, DAQV was easily extended to support this capability [CDHH96].
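
The flavor of programmable access functions can be suggested with a highly simplified, single-process sketch. The registry and names below are hypothetical and are not the DAQV API; a real deployment would gather genuinely distributed blocks and return the result to the requesting tool over the socket-based protocol.

    #include <functional>
    #include <iostream>
    #include <map>
    #include <numeric>
    #include <string>
    #include <vector>

    // Hypothetical registry of access functions: each one assembles (or
    // reduces) a logical, global view of some distributed array on request.
    using AccessFn = std::function<std::vector<double>()>;
    std::map<std::string, AccessFn> accessFunctions;

    void registerArray(const std::string& name, AccessFn fn) {
        accessFunctions[name] = std::move(fn);
    }

    // What a tool-side request would trigger on the program side; a real
    // system would ship the result to the client tool over a socket.
    std::vector<double> handleRequest(const std::string& name) {
        return accessFunctions.at(name)();
    }

    int main() {
        // The "distributed" data, stood in for by two local blocks.
        std::vector<double> block0 = {1, 2, 3};
        std::vector<double> block1 = {4, 5, 6};

        // Plain access function: gather the blocks into one global view.
        registerArray("velocity", [&]() {
            std::vector<double> global(block0);
            global.insert(global.end(), block1.begin(), block1.end());
            return global;
        });

        // Extended access function: reduce the data before delivery.
        registerArray("velocity.mean", [&]() {
            double sum = std::accumulate(block0.begin(), block0.end(), 0.0) +
                         std::accumulate(block1.begin(), block1.end(), 0.0);
            return std::vector<double>{sum / (block0.size() + block1.size())};
        });

        std::cout << "global size: " << handleRequest("velocity").size() << "\n";
        std::cout << "mean: " << handleRequest("velocity.mean")[0] << "\n";
        return 0;
    }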

For visualization support, TIERRA relies on Viz, a highly programmable and extensible visualization toolkit. Visualizations are created through an interpreted, high-level language that provides access to an object-oriented, 3D graphics library. The robust language features and graphics model accessible through Viz enable a high degree of programmability and extensibility. Interoperability and extensibility are also enhanced by the ability to incorporate and access external libraries through "foreign function interfaces" [CDHH96].
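
The sketch below suggests, in miniature, how a foreign function interface lets an interpreted layer reach native code: the interpreter keeps a table of named functions, and registering a native routine makes it callable from scripts. The mechanism shown is a generic illustration, not the Viz implementation.

    #include <cmath>
    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // A toy interpreter environment: scripts refer to functions by name, and
    // foreign (native) functions are added to the same table, so routines from
    // external graphics or numerics libraries become callable from scripts.
    using ForeignFn = std::function<double(const std::vector<double>&)>;
    std::map<std::string, ForeignFn> foreignTable;

    void defForeign(const std::string& name, ForeignFn fn) {
        foreignTable[name] = std::move(fn);
    }

    double call(const std::string& name, const std::vector<double>& args) {
        return foreignTable.at(name)(args);
    }

    int main() {
        // Expose a native routine to the interpreted layer under the name "norm".
        defForeign("norm", [](const std::vector<double>& xs) {
            double sum = 0.0;
            for (double x : xs) sum += x * x;
            return std::sqrt(sum);
        });

        // A script expression such as (norm 3 4) would bottom out in this call.
        std::cout << call("norm", {3.0, 4.0}) << "\n";  // prints 5
        return 0;
    }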

Whereas Bruegge et al. [BRRM95] present GEMS as a functionally complete system, TIERRA continues to evolve:

The requirements for our environment are driven by the scientific discovery process. As we provide increasing support for the activities of the seismologist, the way in which they use the environment-and thus the requirements for the environment-change. Given the experimental nature of our application domain, this evolution will continue.

The researchers point to several future improvements, including new visualizations, better support for data management, improved performance, more flexible and interactive program control, and portability.

In summary, Cuny et al. propose the term domain-specific environment, which we have also adopted, to describe an environment suitable for exploratory, ill-defined, and poorly understood problems and their associated computational requirements. However, the goal of their work, as well as that of Bruegge et al., is not to develop a theory of DSEs. To the contrary, these reports are "experiential"; the researchers did not seek general principles, nor did they fully discover any [CDHH96].

Conclusion

The emergence of problem-solving environments, multidisciplinary problem-solving environments, and domain-specific environments strongly suggests a need for improved computational environment support. As Cuny et al. [CDHH96] point out, performance used to be the sole concern for application scientists. But now, while performance is still important, that need is more often balanced with a demand for "more robust, integrated support from computational tools of diverse types" [CDHH96]. The tendency for PSEs, MPSEs, and DSEs to support a range of problem-solving capabilities is indicative of this trend.

Just as scientists are engaged in different types of research, these environments address different needs. PSEs provide depth of support for a particular well-defined and well-understood computational problem. If the science is expressible as that type of problem, then scientists can apply a PSE to their work. MPSEs seek to bring several individual PSEs to bear on more complex phenomena. But again, each part of the whole problem must be expressible in a form that can be addressed by a component PSE. Finally, DSEs fill the gap left by PSEs and MPSEs by addressing exploratory computational science problems that lack a precise definition, exhibit several loosely coupled stages of analysis, and evolve over time.

Domain-specific environments, like the very problems they address, are not a well-understood concept; there are no hard and fast rules for design or implementation. However, we identify some general characteristics of DSEs that at least distinguish them from problem-solving environments. In particular, DSEs are characterized by an intense, if mostly informal, design and development collaboration between computer scientists and application scientists that can, if desired, result in a loosely coupled framework capable of high degrees of extensibility, programmability, and interoperability. Collaboration by no means guarantees this result; designers and implementers must make conscious decisions to avoid creating ad hoc systems. DSEs do not take a formal approach to achieving domain-specificity like domain-specific software architectures (DSSAs); rather, domain-specificity results as a natural side-effect of the collaboration between scientist and software engineer. More generally, DSEs do not universally place a great emphasis on software engineering; rather, software engineering techniques are employed where convenient or necessary, and where they may directly result in meeting important nonfunctional requirements (e.g., extensibility, interoperability, portability, etc.). Finally, by way of the collaborative process, DSEs end up addressing the needs of specific scientists as opposed to scientists in general. It is perhaps this characteristic more than any other that holds particular promise with respect to the usefulness and usability of the resultant environments. We envision a similar approach to building domain-specific metacomputing environments for computational science.

