
Distributed Systems and MPI

A common approach to enabling instrumentation in libraries is to define a wrapper library that encapsulates the functionality of the underlying library, inserting instrumentation calls before and after calls to the native routines. The MPI Profiling Interface [6] is a good example of this approach. It allows a tool developer to intercept MPI calls without modifying the application source code, in a portable manner that does not require a vendor to supply the proprietary source code of the library implementation. A performance tool can provide an interposition library layer that intercepts calls to the native MPI library by defining routines with the same names (such as MPI_Send). These routines then call the name-shifted native routines provided by the MPI profiling interface (such as PMPI_Send), with performance instrumentation wrapped around the call. Because the routine arguments are exposed, the tool developer can track message sizes, identify message tags, or invoke other native library routines, for example, to determine the sender and the size of a received message within a wild-card receive call, as sketched below.
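The following is a minimal sketch of such an interposition layer in C. The Tau_trace_* hooks are hypothetical placeholders for a tool's measurement API, not an actual TAU interface; the MPI_/PMPI_ routines are those defined by the MPI standard.

#include <mpi.h>

/* Hypothetical instrumentation hooks supplied by the tool;
 * these names are placeholders, not an actual TAU API. */
extern void Tau_trace_send(int dest, int bytes, int tag);
extern void Tau_trace_recv(int source, int bytes, int tag);

/* Intercepts MPI_Send: the linker resolves the application's
 * call to this wrapper instead of the native routine. */
int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int size;
    PMPI_Type_size(datatype, &size);          /* bytes per element */
    Tau_trace_send(dest, count * size, tag);  /* record send event */
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}

/* Intercepts MPI_Recv: for a wild-card receive (MPI_ANY_SOURCE,
 * MPI_ANY_TAG) the actual sender, tag, and message size are
 * recovered from the status object via native library routines. */
int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm, MPI_Status *status)
{
    int rc, bytes;
    rc = PMPI_Recv(buf, count, datatype, source, tag, comm, status);
    PMPI_Get_count(status, MPI_BYTE, &bytes); /* received size in bytes */
    Tau_trace_recv(status->MPI_SOURCE, bytes, status->MPI_TAG);
    return rc;
}

A wrapper library like this is interposed at link time by listing it ahead of the native MPI library on the link line (e.g., cc app.o -ltau_wrapper -lmpi, where the wrapper library name is illustrative).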

Requiring that such profiling hooks be provided before a library implementation is considered "compliant" forms the basis of an excellent model for developing portable performance profiling tools for the library. TAU and several other tools (e.g., Upshot [1] and Vampir [8]) use the MPI profiling interface for tracing. However, TAU can also utilize a rich set of measurement modules that allow profiles to be captured with various types of performance data, including system and hardware data. In addition, TAU's performance grouping capabilities allow MPI events to be presented with respect to high-level categories, such as send and receive types.
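As a rough illustration of such grouping, the sketch below maps wrapped MPI routine names to high-level categories before an event is recorded. The category names and the table itself are assumptions made for illustration, not TAU's actual interface.

#include <stdio.h>
#include <string.h>

/* Illustrative mapping of MPI routines to high-level event
 * groups, in the spirit of TAU's grouping; the category names
 * here are assumptions, not TAU's actual interface. */
static const struct {
    const char *routine;
    const char *group;
} event_groups[] = {
    { "MPI_Send",  "message send"    },
    { "MPI_Isend", "message send"    },
    { "MPI_Recv",  "message receive" },
    { "MPI_Irecv", "message receive" },
    { "MPI_Bcast", "collective"      },
};

static const char *group_of(const char *routine)
{
    size_t i;
    for (i = 0; i < sizeof event_groups / sizeof event_groups[0]; i++)
        if (strcmp(event_groups[i].routine, routine) == 0)
            return event_groups[i].group;
    return "other MPI";
}

int main(void)
{
    /* A wrapper would tag each intercepted call with its group
     * so that profiles can be presented per category. */
    printf("MPI_Send    -> %s\n", group_of("MPI_Send"));
    printf("MPI_Barrier -> %s\n", group_of("MPI_Barrier"));
    return 0;
}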


