In complex systems where deterministic re-execution is impossible or prohibitively expensive, the ability to collect information about the execution history of a program is vital to effective debugging. Reverse execution systems address this problem by checkpointing the program during execution, allowing the programmer to ``reverse'' execution to a previously saved checkpoint.
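The checkpoint-and-restore idea can be sketched concretely. The following is a minimal illustration, not any particular system's implementation: program state is reduced to a dictionary of variables, snapshots are taken every few steps, and ``reversing'' means restoring the most recent snapshot at or before the target step (a real system would snapshot memory and I/O state, then re-execute forward to reach the target exactly).

```python
import copy

class CheckpointingInterpreter:
    """Illustrative sketch of checkpoint-based reverse execution.
    `state` stands in for full program state; names are hypothetical."""

    def __init__(self, interval=2):
        self.state = {}
        self.step_count = 0
        self.interval = interval     # take a checkpoint every `interval` steps
        self.checkpoints = []        # list of (step number, state snapshot)

    def step(self, mutate):
        """Execute one step, given as a function that mutates the state."""
        if self.step_count % self.interval == 0:
            self.checkpoints.append((self.step_count,
                                     copy.deepcopy(self.state)))
        mutate(self.state)
        self.step_count += 1

    def reverse_to(self, target_step):
        """'Reverse' execution by restoring the latest checkpoint taken
        at or before target_step; returns the step actually restored."""
        step, snapshot = max(
            (cp for cp in self.checkpoints if cp[0] <= target_step),
            key=lambda cp: cp[0],
        )
        self.state = copy.deepcopy(snapshot)
        self.step_count = step
        return step
```

For example, after five steps that each assign a counter to `x`, `reverse_to(3)` restores the checkpoint taken at step 2, where `x` still holds the value written by step 1.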
Since the effects of a program error are usually not detected until some time after the execution of the erroneous code section, debugging is inherently a ``backward'' process. Flowback analysis [Cho89] is a strategy combining reverse execution with dataflow analysis to provide the debugger with causality information, enabling the programmer to ask questions such as ``How was the current value of variable x calculated?'' Although more expensive than simple reverse execution, this approach is generally more useful.
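The causality information behind such a query is essentially a dynamic dependence graph. As a hedged sketch of the idea (not Choi's actual data structures), each assignment can record which variables it read; answering ``how was x calculated?'' is then a backward walk over those records:

```python
class FlowbackTracer:
    """Illustrative dynamic-dependence recorder for flowback-style queries.
    All names here are hypothetical, chosen for the example."""

    def __init__(self):
        self.values = {}    # current value of each variable
        self.deps = {}      # var -> variables read by its last assignment

    def assign(self, var, compute, reads):
        """Execute `var = compute(*reads)` and record the data dependence."""
        self.values[var] = compute(*(self.values[r] for r in reads))
        self.deps[var] = list(reads)

    def why(self, var):
        """Return the set of variables (including var itself) that the
        current value of `var` transitively depends on."""
        seen, work = set(), [var]
        while work:
            v = work.pop()
            if v in seen or v not in self.deps:
                continue
            seen.add(v)
            work.extend(self.deps[v])
        return seen
```

Recording `a = 2`, `b = 3`, and `c = a + b` through the tracer, `why("c")` reports that `c` was derived from `a` and `b`; a full flowback system would also follow branch (control) dependences and span multiple assignments to the same variable.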
In Choi's work, flowback analysis is applied to parallel programs. The process is divided into three stages: compilation, execution, and debugging. Partial information is gathered at each stage, thereby minimizing the overhead imposed on any one of them. A minimal static analysis of the program is done at compile time, to determine possible data and branch dependencies and to build a program database of identifier definitions and uses. Partial checkpoints are generated during execution, enabling the system to restart the program by combining some of the checkpoints. The debugging phase begins when control of the program is taken by the debugger, either through a breakpoint or an error; the controller then presents information about the recent control flow of the program, and if insufficient data is available, it uses an emulation package to generate further trace data.
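The division of labor between the execution and debugging stages can be illustrated with a small sketch. Under the assumption (made for this example only) that the program is a sequence of segments of primitive operations, execution logs only each segment's input state, and the debugger later re-emulates a segment from that saved input when it needs the fine-grained trace; here plain re-execution stands in for the emulation package:

```python
def execute(segments, state):
    """Execution stage: run each segment, saving only its input state
    (a partial checkpoint) rather than a full per-operation trace."""
    checkpoints = []
    for ops in segments:
        checkpoints.append(dict(state))   # cheap partial checkpoint
        for op in ops:
            op(state)
    return checkpoints

def emulate_segment(segments, checkpoints, i):
    """Debugging stage: regenerate the detailed trace of segment i by
    re-executing its operations from the saved input state."""
    state = dict(checkpoints[i])
    trace = [dict(state)]
    for op in segments[i]:
        op(state)
        trace.append(dict(state))
    return trace
```

This captures the trade-off described above: the running program pays only for coarse checkpoints, while detailed trace data is produced on demand, and only for the segments the debugger actually inspects.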