
Scenario: Two Receive.

When more than two processes are communicating, it is not hard to find a scenario that raises unpleasant issues for our ability to correct overhead intrusion under a different set of receive assumptions. These issues arise from the effect of intrusion on message sequencing. The Two Receive scenario exposes the problem: one process, P2, receives messages from two other processes. There are four cases to consider, depending on the relative sizes of the overheads and waiting times. Figures 2 and 3 show two of the cases. For simplicity, we return to looking only at the first message sent and received on each process, and consider the initial overheads (not the delay values) in the analysis.
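One way to see where the four cases come from is to check, at each of P2's two receives, whether any waiting time remains after accounting for overhead. The sketch below is illustrative only; the parameter names and the uniform per-receive overhead are assumptions, not the paper's notation.

```python
def classify_case(send1, send2, ready1, o):
    """Classify which of the four Two Receive cases applies (illustrative).

    send1, send2 : send times of the first and second messages arriving at P2
    ready1       : time P2 is ready to post its first receive
    o            : overhead charged on P2 at each receive
    Returns a (bool, bool) pair: waiting present at receive 1 / receive 2.
    """
    w1 = max(0.0, send1 - ready1)      # waiting at the first receive
    done1 = max(send1, ready1) + o     # P2 resumes after the first receive
    w2 = max(0.0, send2 - done1)       # waiting at the second receive
    return (w1 > 0, w2 > 0)            # four combinations = four cases
```

The four (waiting, no-waiting) combinations across the two receives correspond to the four cases distinguished by the relative sizes of overheads and waiting times.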

In Figure 2, a two-part approximated execution is shown, with part one (top) giving the state after the first message is processed and part two (bottom) showing the result after the second message is processed. The analysis follows the approach we used before: new waiting values ($w'$ and $y'$) are calculated and P2's delay value ($x_2$) is updated. In this case, no waiting time would have occurred, so no adjustment to waiting time is necessary. Otherwise, nothing particularly unusual stands out in the approximated result.
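The per-message update can be sketched as follows. This is an illustrative model of overhead compensation at a receive, not the paper's exact equations; the parameter names and the delay-update rule are assumptions.

```python
def compensate_receive(ready_m, send_m, x_recv, x_send, o):
    """Approximate one receive under overhead compensation (illustrative).

    ready_m : measured time the receiver is ready to receive
    send_m  : measured time the sender issued the send
    x_recv  : accumulated delay (intrusion) on the receiver so far
    x_send  : accumulated delay on the sender so far
    o       : measurement overhead charged at this receive
    Returns (approx_recv, waiting, x_recv_new).
    """
    s = send_m - x_send       # approximated send time
    r = ready_m - x_recv      # approximated receiver-ready time
    t = max(s, r)             # approximated receive completion: no early arrival
    w = max(0.0, s - r)       # approximated (new) waiting time, e.g. w'
    # Updated receiver delay: measured completion minus approximated completion.
    measured_completion = max(ready_m, send_m) + o
    x_new = measured_completion - t
    return t, w, x_new
```

Processing P2's two receives in sequence with this kind of update, carrying the returned delay into the next receive, mirrors the message-by-message analysis shown in the figure.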

Figure 2: Three-Process, Two Receive - Models and Analysis (Case 1)

Figure 3: Three-Process, Two Receive - Models and Analysis (Case 2)

What would be a surprising result? If the overhead analysis reordered send events in time between the measured execution and the approximated execution, there would be concerns about performance perturbation. In Figure 3, we see the send events change order in time in the approximated execution, with P3's send taking place before P1's send. As in the other cases, our analysis reflects a message-by-message processing algorithm. In the rational reconstruction, we assume the message communication is explicit and pairs a particular sender with a particular receiver. Under this assumption, the order of messages received by P2 must be maintained in the approximated execution. Is the time reordering of send events in Figure 3 then a problem? In fact, no. It is entirely possible that a process (P2) first receives a message from one process (P1) that was sent after another process (P3) sent its own message to the receiver; this simply reflects the strict order of P2's receives. However, if we consider receive operations that can match any send, the send reordering exposes a problem with overhead compensation, since the message from P3 should have been received first in the ``real'' execution.
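Detecting such a reordering is straightforward: compare the time order of send events in the measured execution with their order in the approximated execution. A minimal sketch, with hypothetical timestamp maps as input:

```python
def sends_reordered(measured, approximated):
    """Report whether compensation reordered send events in time.

    measured, approximated : dicts mapping sender name -> send timestamp
    Returns True if the senders sort into different time orders.
    """
    order_m = sorted(measured, key=measured.get)
    order_a = sorted(approximated, key=approximated.get)
    return order_m != order_a
```

In the Figure 3 situation, P1's send precedes P3's in the measured execution but follows it in the approximated one, so the check reports a reordering.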

Applying our overhead compensation models to programs whose receive operations can match any send yields a profile analysis constrained to the message orderings observed in the measured execution. These orderings are affected by intrusion and thus may not be the orderings that would occur in the absence of measurement. While it is possible to detect reordering occurrences (i.e., measured versus approximated orderings), it is not possible to correct for reordering during online overhead analysis and compensation, for two reasons. First, our analysis cannot determine whether it is correct to associate a receive event with a different send event; the performance analysis does not know what type of receive is being performed, one that names a specific sender or one that can accept any sender. Second, even if we knew the type of receive operation, it is impossible to know whether changing the receive order would affect future receive events. Therefore, the models must, in general, enforce the measured message receive ordering.
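The second reason can be made concrete: if wildcard receives match sends in time order, then re-associating the first receive with a different send cascades into every later match. The sketch below is an assumed earliest-send-wins matching rule for illustration, not the semantics of any particular message-passing library.

```python
def match_receives(send_times, recv_count):
    """Match wildcard receives to sends in time order (illustrative).

    send_times : dict mapping sender name -> send timestamp
    recv_count : number of wildcard receives to match
    Returns the sender names in matched order (earliest unmatched send wins).
    """
    remaining = dict(send_times)
    matched = []
    for _ in range(recv_count):
        sender = min(remaining, key=remaining.get)  # earliest send matches next
        matched.append(sender)
        del remaining[sender]
    return matched
```

Running this on measured send times can give one match order, and on compensated (approximated) send times another; swapping the first match necessarily changes which send the second receive pairs with, so a single online correction is never safe in isolation.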

Sameer Shende 2005-05-30