Since commercial component models are targeted mostly at serial computing environments, the design of performance methods and metrics for these software systems is not likely to account for requirements critical to high-performance scientific computing, such as memory hierarchy performance, data locality, or floating-point operation counts. Likewise, the distributed frameworks/component models (e.g., DCOM and CORBA) use commodity networking to connect components and lack consideration for the high-performance network communication required for HPC. In a distributed environment, metrics like round-trip time and network latency are often considered useful, while quantities like bisection bandwidth, message-passing latency, and synchronization cost, which form the basis of much of the research in scientific performance evaluation, are left unaddressed.
However, despite the differing semantics, several research efforts built on these standards offer viable strategies for measuring performance. A performance monitoring system for the Enterprise JavaBeans standard is described in [16]. For each component to be monitored, a proxy is created with the same interface as the component. The proxy intercepts all method invocations and notifies a monitor component before forwarding the invocation to the component. The monitor handles these notifications and selects the data to present, either to a user or to another component (e.g., a visualizer component). The goal of this monitoring system is to identify hot spots or components that do not scale well.
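As a minimal sketch of this interception pattern (using Java dynamic proxies; the Monitor interface and the class and method names below are hypothetical stand-ins, not the API of the system in [16]):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical stand-in for the monitor component of [16].
interface Monitor {
    void invocationStarted(String component, String method);
    void invocationFinished(String component, String method, long nanos);
}

final class MonitoringProxy implements InvocationHandler {
    private final Object target;    // the monitored component
    private final Monitor monitor;  // receives notifications

    private MonitoringProxy(Object target, Monitor monitor) {
        this.target = target;
        this.monitor = monitor;
    }

    // Wrap a component behind a proxy exposing the same interface,
    // so callers are unaware of the interception.
    @SuppressWarnings("unchecked")
    static <T> T wrap(T target, Class<T> iface, Monitor monitor) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                new MonitoringProxy(target, monitor));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        String name = target.getClass().getSimpleName();
        monitor.invocationStarted(name, method.getName());
        long start = System.nanoTime();
        try {
            return method.invoke(target, args);   // forward to the real component
        } catch (InvocationTargetException e) {
            throw e.getCause();                   // surface the component's own exception
        } finally {
            monitor.invocationFinished(name, method.getName(), System.nanoTime() - start);
        }
    }
}
```

A monitor implementation can then aggregate the collected timings to flag hot spots or components whose per-call cost grows with load.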
The Wabash tool [17,18] is designed for pre-deployment testing and monitoring of distributed CORBA systems. Because these systems are distributed, Wabash groups components into regions based on geographical location. An interceptor is created in the same address space as each server object (i.e., a component that provides services) and manages all incoming and outgoing requests to the server. A manager component is responsible for querying the interceptor for data retrieval and event management.
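A schematic of this arrangement, as a minimal sketch with hypothetical interfaces rather than Wabash's actual CORBA machinery:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of the Wabash arrangement [17,18]: an interceptor in
// each server object's address space records incoming and outgoing requests,
// and a manager queries interceptors by region. Names are illustrative,
// not Wabash's API.
final class RequestInterceptor {
    final String region;  // geographical grouping of the component
    private final Map<String, LongAdder> requestCounts = new ConcurrentHashMap<>();
    private final List<String> events = new CopyOnWriteArrayList<>();

    RequestInterceptor(String region) { this.region = region; }

    // Invoked for every request that enters or leaves the server object.
    void onRequest(String operation, boolean incoming) {
        requestCounts.computeIfAbsent(operation, op -> new LongAdder()).increment();
        events.add((incoming ? "IN  " : "OUT ") + operation);
    }

    Map<String, Long> counts() {
        Map<String, Long> out = new HashMap<>();
        requestCounts.forEach((op, n) -> out.put(op, n.sum()));
        return out;
    }
}

final class RegionManager {
    // The manager polls each interceptor for data retrieval; event
    // management (e.g., filtering, forwarding) would hang off here too.
    void collect(Iterable<RequestInterceptor> interceptors) {
        for (RequestInterceptor ic : interceptors) {
            System.out.println(ic.region + ": " + ic.counts());
        }
    }
}
```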
In the work done by the Parallel Software Group at the Imperial
College of Science in London [19,20], the research is
focused on grid-based component computing. There too, performance is
measured through the use of proxies. Their performance system is
designed to automatically select the optimal implementation of the
application based on performance models and available resources. With
$n$ components, each having $m_i$ implementations, there is a total of
$\prod_{i=1}^{n} m_i$ implementations to choose from. The
performance characteristics and a performance model for each component
are constructed by the component developer and stored in the component
repository. Their approach is to use the proxies to simulate an
application in order to determine the call-path. This simulation skips
the implementation of the components by using the proxies. Once the
call-path is determined, a recursive composite performance model is
created by examining the behavior of each method call in the
call-path. In order to ensure that the composite model is
implementation-independent, a variable is used in the model whenever
there is a reference to an implementation. To evaluate the model, a
specific implementation's performance model replaces the variables and
the composite model returns an estimated execution time or estimated
cost (based on some hardware resources model). The implementation with
the lowest execution time or lowest cost is then selected, and an
execution plan is created for the application.
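To make the selection step concrete, the following is a minimal sketch of variable substitution in a composite model, with an exhaustive search over the product of implementation choices. All names (PerfModel, Slot, Selector, and so on) are hypothetical illustrations, not the API of the system described in [19,20]:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A performance model returns an estimated cost (e.g., seconds) given a
// binding of slot variables to concrete implementation models.
interface PerfModel {
    double estimate(Map<String, PerfModel> bindings);
}

// A variable in the composite model: "whichever implementation fills this
// slot". This is what keeps the composite model implementation-independent.
record Slot(String name) implements PerfModel {
    public double estimate(Map<String, PerfModel> bindings) {
        return bindings.get(name).estimate(bindings);  // assumes the slot is bound
    }
}

// A developer-supplied leaf model, here reduced to a constant cost.
record LeafModel(double costSeconds) implements PerfModel {
    public double estimate(Map<String, PerfModel> bindings) {
        return costSeconds;
    }
}

// Sequential composition along the call-path: costs of the calls add up.
record Sequence(List<PerfModel> calls) implements PerfModel {
    public double estimate(Map<String, PerfModel> bindings) {
        return calls.stream().mapToDouble(m -> m.estimate(bindings)).sum();
    }
}

final class Selector {
    // Tries every combination of per-slot implementations (the product of
    // the per-component choices) and returns the cheapest binding.
    static Map<String, PerfModel> cheapest(PerfModel composite,
                                           Map<String, List<PerfModel>> choices) {
        List<String> slots = new ArrayList<>(choices.keySet());
        Map<String, PerfModel> best = new HashMap<>();
        double[] bestCost = { Double.POSITIVE_INFINITY };
        search(composite, choices, slots, 0, new HashMap<>(), best, bestCost);
        return best;
    }

    private static void search(PerfModel composite,
                               Map<String, List<PerfModel>> choices,
                               List<String> slots, int i,
                               Map<String, PerfModel> current,
                               Map<String, PerfModel> best, double[] bestCost) {
        if (i == slots.size()) {
            double cost = composite.estimate(current);
            if (cost < bestCost[0]) {   // keep the lowest estimated cost
                bestCost[0] = cost;
                best.clear();
                best.putAll(current);
            }
            return;
        }
        for (PerfModel impl : choices.get(slots.get(i))) {
            current.put(slots.get(i), impl);
            search(composite, choices, slots, i + 1, current, best, bestCost);
        }
        current.remove(slots.get(i));
    }
}
```

The exhaustive search mirrors the $\prod_{i=1}^{n} m_i$ choices above; for large component counts a smarter enumeration strategy would be needed.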