Collective Operations

Consider the class of collective operations supported by MPI. Let us first examine the MPI_Gather call, in which each process in a given communicator contributes a data item and the process designated as root gathers all the data into an array. The local delay of each process must be communicated to the root. To do this, each process forms a message containing its application data and the piggybacked delay value, and a single MPI_Gather call is issued. At the root, the result is one contiguous buffer in which the application data and the delay values are interleaved. We extract the piggybacked delay values from this buffer and reconstruct the application receive buffer from the remainder. Given the array of per-process delays, we compute the minimum delay over the group. Since the collective operation cannot complete without the message from the process with the minimum delay, the root must adjust its waiting time based on this value. The collective operation thus reduces to the case where a receiver gets a message from the one process with the least delay in the communicator, and we can apply the overhead compensation model described in the previous section.
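
The sketch below illustrates one way such a gather wrapper could be written in C, assuming contiguous datatypes and matching send/receive signatures. The helper names TAU_LOCAL_DELAY and Tau_compensate_collective are hypothetical placeholders for the measurement system's bookkeeping; only the MPI calls themselves are standard.

    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    /* Gather wrapper that piggybacks each task's local delay on its
     * application data.  Assumes contiguous datatypes and matching
     * send/receive sizes; TAU_LOCAL_DELAY and Tau_compensate_collective
     * are hypothetical placeholders for the measurement system. */
    int Tau_Gather_wrapper(void *sendbuf, int scount, MPI_Datatype stype,
                           void *recvbuf, int rcount, MPI_Datatype rtype,
                           int root, MPI_Comm comm)
    {
        int rank, size, tsize, rc;
        double local_delay = 0.0;                  /* := TAU_LOCAL_DELAY() */

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        MPI_Type_size(stype, &tsize);
        int bytes  = tsize * scount;               /* application payload  */
        int packed = bytes + (int)sizeof(double);  /* payload + piggyback  */
        (void)rcount; (void)rtype;   /* receive signature assumed to match */

        /* Form the outgoing message: data followed by the local delay. */
        char *sbuf = malloc(packed);
        memcpy(sbuf, sendbuf, bytes);
        memcpy(sbuf + bytes, &local_delay, sizeof(double));

        char *rbuf = (rank == root) ? malloc((size_t)packed * size) : NULL;

        /* A single gather carries both the data and the delays. */
        rc = MPI_Gather(sbuf, packed, MPI_BYTE,
                        rbuf, packed, MPI_BYTE, root, comm);

        if (rank == root) {
            double d, min_delay = 0.0;
            for (int i = 0; i < size; i++) {
                /* Rebuild the application buffer from the data portion... */
                memcpy((char *)recvbuf + (size_t)i * bytes,
                       rbuf + (size_t)i * packed, bytes);
                /* ...and extract the piggybacked delay value. */
                memcpy(&d, rbuf + (size_t)i * packed + bytes, sizeof(double));
                if (i == 0 || d < min_delay) min_delay = d;
            }
            /* Adjust the waiting time against the minimum delay, as in
             * the two-process compensation model (hypothetical helper):
             * Tau_compensate_collective(min_delay, local_delay);        */
            free(rbuf);
        }
        free(sbuf);
        return rc;
    }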

MPI_Bcast, which broadcasts a message from one task to all others, is modeled with the two-process overhead compensation model (see (17)). On the root process we create a new datatype that embeds the original message and the local delay value, and this message is sent to all other members of the group. Each receiver compares the remote delay with its local delay and adjusts its waiting time and local overhead as if it had received a single message from the remote task, using the model described earlier.
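
A minimal sketch of such a broadcast wrapper follows. Where the prototype builds a derived datatype on the root, this sketch simply packs the message and the delay into a contiguous byte buffer, which has the same effect; the helper Tau_adjust_wait stands in for the adjustment prescribed by the two-process model and is hypothetical.

    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    /* Broadcast wrapper: the root appends its local delay to the message;
     * each receiver compares the remote delay with its own as if a single
     * message had arrived from the root.  Tau_adjust_wait is hypothetical. */
    int Tau_Bcast_wrapper(void *buf, int count, MPI_Datatype type,
                          int root, MPI_Comm comm)
    {
        int rank, tsize, rc;
        double local_delay = 0.0;          /* := TAU_LOCAL_DELAY() */

        MPI_Comm_rank(comm, &rank);
        MPI_Type_size(type, &tsize);
        int bytes = tsize * count;

        /* Contiguous staging buffer: application message + delay. */
        char *tmp = malloc(bytes + sizeof(double));
        if (rank == root) {
            memcpy(tmp, buf, bytes);
            memcpy(tmp + bytes, &local_delay, sizeof(double));
        }

        rc = MPI_Bcast(tmp, bytes + (int)sizeof(double), MPI_BYTE, root, comm);

        if (rank != root) {
            double remote_delay;
            memcpy(buf, tmp, bytes);                            /* data      */
            memcpy(&remote_delay, tmp + bytes, sizeof(double)); /* piggyback */
            /* Tau_adjust_wait(remote_delay, local_delay);  -- adjust waiting
             * time and local overhead per the two-process model.            */
        }
        free(tmp);
        return rc;
    }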

To model MPI_Scatter, which distributes a distinct message to each member of the group, we create a new datatype that includes the overhead from the root process, much as for the MPI_Gather operation. After the operation completes, each receiver examines the remote overhead and treats it as if it had received a single message from the root, applying our earlier scheme for compensating for perturbation.
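
The following sketch shows how such a scatter wrapper might look, again using a packed byte buffer in place of a derived datatype and assuming the send and receive type signatures match, as a correct MPI_Scatter call requires. Helper names are hypothetical placeholders.

    #include <mpi.h>
    #include <stdlib.h>
    #include <string.h>

    /* Scatter wrapper: the root appends its local delay to each per-rank
     * chunk; every receiver then treats the root's delay as the remote
     * delay of a single received message.  Assumes the send and receive
     * type signatures match; helper names are hypothetical. */
    int Tau_Scatter_wrapper(void *sendbuf, int scount, MPI_Datatype stype,
                            void *recvbuf, int rcount, MPI_Datatype rtype,
                            int root, MPI_Comm comm)
    {
        int rank, size, tsize, rc;
        double local_delay = 0.0;                  /* := TAU_LOCAL_DELAY() */

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        MPI_Type_size(rtype, &tsize);
        int chunk  = tsize * rcount;               /* application bytes/rank */
        int packed = chunk + (int)sizeof(double);  /* plus piggybacked delay */
        (void)scount; (void)stype;     /* send signature assumed to match */

        char *sbuf = NULL;
        if (rank == root) {
            sbuf = malloc((size_t)packed * size);
            for (int i = 0; i < size; i++) {
                memcpy(sbuf + (size_t)i * packed,
                       (char *)sendbuf + (size_t)i * chunk, chunk);
                memcpy(sbuf + (size_t)i * packed + chunk,
                       &local_delay, sizeof(double));
            }
        }

        char *rbuf = malloc(packed);
        rc = MPI_Scatter(sbuf, packed, MPI_BYTE,
                         rbuf, packed, MPI_BYTE, root, comm);

        double root_delay;
        memcpy(recvbuf, rbuf, chunk);                       /* data      */
        memcpy(&root_delay, rbuf + chunk, sizeof(double));  /* piggyback */
        /* Tau_adjust_wait(root_delay, local_delay);  -- compensate as if a
         * single message had been received from the root.                 */

        free(rbuf);
        if (rank == root) free(sbuf);
        return rc;
    }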

MPI_Barrier requires every task to block until all processes have invoked the routine. We implement it as a combination of two operations, MPI_Gather and MPI_Bcast: each task sends its local delay to a root task (arbitrarily chosen as the process with the lowest rank in the communicator). The root examines the gathered delays, compares its own delay with the minimum over all tasks, adjusts its waiting time, and then sends the new delay value to all tasks with MPI_Bcast. This mechanism preserves any efficiencies the underlying MPI substrate provides in implementing collective operations; by mapping one MPI routine onto others, we exploit those efficiencies.
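
A sketch of this barrier construction is given below. The gather/broadcast pair itself provides the synchronization; Tau_adjust_wait again stands in for the model's waiting-time adjustment and is hypothetical.

    #include <mpi.h>
    #include <stdlib.h>

    /* Barrier wrapper built from MPI_Gather and MPI_Bcast: every task
     * sends its local delay to the lowest-ranked task, which finds the
     * minimum, adjusts its waiting time, and broadcasts the new delay.
     * The gather/broadcast pair itself synchronizes the tasks.
     * Tau_adjust_wait is a hypothetical placeholder. */
    int Tau_Barrier_wrapper(MPI_Comm comm)
    {
        int rank, size;
        const int root = 0;                 /* lowest rank acts as root */
        double local_delay = 0.0;           /* := TAU_LOCAL_DELAY() */
        double new_delay = local_delay;

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        double *delays = (rank == root) ? malloc(size * sizeof(double)) : NULL;

        /* Collect every task's local delay at the root. */
        MPI_Gather(&local_delay, 1, MPI_DOUBLE,
                   delays, 1, MPI_DOUBLE, root, comm);

        if (rank == root) {
            double min_delay = delays[0];
            for (int i = 1; i < size; i++)
                if (delays[i] < min_delay) min_delay = delays[i];
            /* Root adjusts its waiting time against the minimum delay and
             * publishes the resulting delay value (hypothetical helper):
             * new_delay = Tau_adjust_wait(min_delay, local_delay);        */
            new_delay = min_delay;    /* placeholder for the adjusted value */
            free(delays);
        }

        /* Broadcast the new delay so every task can update its own state. */
        MPI_Bcast(&new_delay, 1, MPI_DOUBLE, root, comm);
        /* if (rank != root) Tau_adjust_wait(new_delay, local_delay); */

        return MPI_SUCCESS;
    }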

