## ParaDucks Research Group Annotated Bibliography

Conferences and Workshops
Journals
Theses and Dissertations
Technical Reports
Talks and Presentations
Other Publications

### Conferences and Workshops

[acmonr91]
Allen D. Malony and Daniel A. Reed, "Models for Performance Perturbation Analysis," Proceedings of the 1991 ACM/ONR Workshop on Parallel and Distributed Debugging, pp. 15-25, 1991.
Keywords: performance perturbation, performance measurement

When performance measurements are made of program operation, actual execution behavior can be perturbed. In general, the degree of perturbation depends on the intrusiveness and frequency of the instrumentation. If the perturbation effects of the instrumentation cannot be quantified by a perturbation model (and subsequently removed during perturbation analysis), detailed performance measurements could be inaccurate. Developing models of time and event perturbations that can recover actual execution performance from perturbed performance measurements is the topic of this paper. Time-based models can accurately capture execution time perturbations for sequential computations and concurrent computations with simple fork-join behavior. However, the performance of parallel computations generally depends on the relative ordering of dependent events and the assignment of computational resources. Event-based models must be used to quantify instrumentation perturbation in parallel performance measurements. The measurement and subsequent analysis of synchronization operations (e.g., barrier, semaphore, and advance/await synchronization) can produce accurate approximations to actual performance behavior. Unfortunately, event-based models are limited in their ability to fully capture perturbation effects in nondeterministic executions.
[chasm_lacsi01]
C. Rasmussen, K. Lindlan, B. Mohr, J. Striegnitz, "CHASM: Static Analysis and Automatic Code Generation for Improved Fortran90 and C++ Interoperability," Proceedings of LACSI Symposium, 2001.
Keywords: CHASM, PDT, SILOON, F90, C++ interoperability

The relative simplicity and design of the Fortran 77 language allowed for reasonable interoperability with C and C++. Fortran 90, on the other hand, introduces several new and complex features to the language that severely complicate mixed Fortran and C++ development. Major new items added to Fortran are user-defined types, pointers, and several new array features. Each of these items introduces difficulties because the Fortran 90 procedure calling convention was not designed with interoperability as an important design goal. For example, Fortran 90 arrays are passed by array descriptor, which is not specified by the language and therefore depends on a particular compiler implementation. This paper describes a set of software tools that parses Fortran 90 source code and produces mediating interface functions which allow access to Fortran 90 libraries from C++.
[cluster03]
H. Brunst, W. E. Nagel, and A. D. Malony, "A Distributed Performance Analysis Architecture for Clusters," In Proc. IEEE International Conference on Cluster Computing (Cluster 2003), IEEE Computer Society, pp. 73-83, Dec. 2003.
Keywords: Parallel Computing, Performance Analysis, Profiling, Tracing, Clusters, VNG, Vampir

The use of a cluster for distributed performance analysis of parallel trace data is discussed. We propose an analysis architecture that uses multiple cluster nodes as a server to execute analysis operations in parallel and communicate to remote clients where performance visualization and user interactions occur. The client-server system developed, VNG, is highly configurable and is shown to perform well for traces of large size, when compared to leading trace visualization systems.
[cluster06]
A. Nataraj, A. Malony, S. Shende, A. Morris, "Kernel-Level Measurement for Integrated Parallel Performance Views: the KTAU Project," In Proc. Cluster 2006, IEEE Computer Society, 2006.
Keywords: kernel measurement, KTAU, TAU

The effect of the operating system on application performance is an increasingly important consideration in high performance computing. OS kernel measurement is key to understanding the performance influences and the interrelationship of system and user-level performance factors. The KTAU (Kernel TAU) methodology and Linux-based framework provides parallel kernel performance measurement from both a kernel-wide and process-centric perspective. The first characterizes overall aggregate kernel performance for the entire system. The second characterizes kernel performance when it runs in the context of a particular process. KTAU extends the TAU performance system with kernel-level monitoring, while leveraging TAU's measurement and analysis capabilities. We explain the rationale and motivations behind our approach, describe the KTAU design and implementation, and show working examples on multiple platforms demonstrating the versatility of KTAU in integrated system/application monitoring.
[compframe05]
N. Trebon, A. Morris, J. Ray, S. Shende, and A. Malony, "Performance Modeling of Component Assemblies with TAU," Proc. Workshop on Component Models and Frameworks in High Performance Computing (CompFrame 2005).
Keywords: CCA, TAU, CFRFS, Proxy components, performance modeling

The Common Component Architecture (CCA) is a component-based methodology for developing scientific simulation codes. This architecture consists of a framework which enables components (embodiments of numerical algorithms and physical models) to work together. Components publish their interfaces and use interfaces published by others. Components publishing the same interface and with the same functionality (but perhaps implemented via a different algorithm or data structure) may be transparently substituted for each other in a code or a component assembly. Components are compiled into shared libraries and are loaded in, instantiated, and composed into a useful code at runtime. Details regarding CCA can be found in [1], [2]. An analysis of the process of decomposing a legacy simulation code and re-synthesizing it as components can be found in [3], [4]. Actual scientific results obtained from this toolkit can be found in [5], [6].
[conpar94]
B. Mohr, D. Brown, A. Malony, "TAU: A Portable Parallel Program Analysis Environment for pC++," Proceedings of CONPAR 94 - VAPP VI, University of Linz, Austria, LNCS 854, September 1994, pp. 29-40.
Keywords: TAU, tuning and analysis utilities, integrated tools, portable program analysis, parallel object-oriented language, pC++, Sage++

The realization of parallel language systems that offer high-level programming paradigms to reduce the complexity of application development, scalable runtime mechanisms to support variable size problem sets, and portable compiler platforms to provide access to multiple parallel architectures, places additional demands on the tools for program development and analysis. The need for integration of these tools into a comprehensive programming environment is even more pronounced and will require more sophisticated use of the language system technology (i.e., compiler and runtime system). Furthermore, the environment requirements of high-level support for the programmer, large-scale applications, and portable access to diverse machines also apply to the program analysis tools.

In this paper, we discuss TAU (Tuning and Analysis Utilities), a first prototype for an integrated and portable program analysis environment for pC++, a parallel object-oriented language system. TAU is integrated with the pC++ system in that it relies heavily on compiler and transformation tools (specifically, the Sage++ toolkit) for its implementation. This paper describes the design and functionality of TAU and shows its application in practice.

[cug06]
S. Shende, A. D. Malony, A. Morris, and P. Beckman, "Performance and Memory Evaluation Using TAU," In Proc. Cray User Group Conference (CUG 2006), 2006.
Keywords: TAU, PDT, Memory Headroom Analysis, MFIX

The TAU performance system is an integrated performance instrumentation, measurement, and analysis toolkit offering support for profiling and tracing modes of measurement. This paper introduces memory introspection capabilities of TAU featured on the Cray XT3 Catamount compute node kernel. TAU supports examining the memory headroom, or the amount of heap memory available, at routine entry, and correlates it to the program's callstack as an atomic event.
[dapsys2k]
A. Malony and S. Shende, "Performance Technology for Complex Parallel and Distributed Systems," Proc. Third Austrian-Hungarian Workshop on Distributed and Parallel Systems, DAPSYS 2000, "Distributed and Parallel Systems: From Concepts to Applications," (eds. G. Kotsis and P. Kacsuk), Kluwer, Norwell, MA, pp. 37-46, 2000.
Keywords: performance tools, complex systems, instrumentation, measurement, analysis, TAU

The ability of performance technology to keep pace with the growing complexity of parallel and distributed systems will depend on robust performance frameworks that can at once provide system-specific performance capabilities and support high-level performance problem solving. The TAU system is offered as an example framework that meets these requirements. With a flexible, modular instrumentation and measurement system, and an open performance data and analysis environment, TAU can target a range of complex performance scenarios. Examples are given showing the diversity of TAU application.
[dodugc04]
Daniel M. Pressel, David Cronk, and Sameer Shende, "PENVELOPE: A New Approach to Rapidly Predicting the Performance of Computationally Intensive Scientific Applications on Parallel Computer Architectures," Proc. 2004 DOD Users Group Conference, Williamsburg, Virginia, IEEE Computer Society, pp. 314-318, 2004.
Keywords: PENVELOPE, Performance Prediction, TAU

A common complaint when dealing with the performance of computationally intensive scientific applications on parallel computers is that programs exist to predict the performance of radar systems, missiles and artillery shells, drugs, etc., but no one knows how to predict the performance of these applications on a parallel computer. Actually, that is not quite true. A more accurate statement is that no one knows how to predict the performance of these applications on a parallel computer in a reasonable amount of time. PENVELOPE is an attempt to remedy this situation. It is an extension to Amdahl's Law and Gustafson's work on scaled speedup that takes into account the cost of interprocessor communication and operating system overhead, yet is simple enough that it was implemented as an Excel spreadsheet.
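The general shape of such a model, an Amdahl-style analytic speedup formula extended with communication and OS-overhead terms, can be sketched as below. The specific cost terms and how they scale with processor count are illustrative assumptions, not the actual PENVELOPE model:

```python
# Minimal speedup model in the spirit of Amdahl's Law, extended with
# per-processor communication and OS-overhead terms. The cost terms and
# their scaling below are illustrative assumptions, not PENVELOPE's model.

def predicted_speedup(serial_frac, n_procs, comm_cost=0.0, os_overhead=0.0):
    """Predicted speedup on n_procs processors.

    serial_frac -- fraction of work that cannot be parallelized (Amdahl)
    comm_cost   -- assumed interprocessor-communication cost per processor
    os_overhead -- assumed fixed operating-system overhead per run
    """
    parallel_frac = 1.0 - serial_frac
    # Normalized parallel execution time: serial part, plus the parallel
    # part divided across processors, plus the overhead terms.
    t_parallel = (serial_frac + parallel_frac / n_procs
                  + comm_cost * n_procs + os_overhead)
    return 1.0 / t_parallel

# Classic Amdahl limit: with 5% serial work and no overheads, speedup
# approaches 1/0.05 = 20 as the processor count grows; overheads pull
# the prediction below that bound.
print(predicted_speedup(0.05, 64))
print(predicted_speedup(0.05, 64, comm_cost=0.0005, os_overhead=0.01))
```

Simple closed forms like this are what make spreadsheet implementation feasible: the model evaluates instantly, in contrast to cycle-accurate simulation.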
[epvmmpi05.a]
S. Shende, A. D. Malony, A. Morris, and F. Wolf, "Performance Profiling Overhead Compensation for MPI Programs," in Proc. EuroPVM/MPI 2005 Conference, (eds. B. Di Martino et al.), LNCS 3666, Springer, pp. 359-367, 2005.
Keywords: TAU, Performance measurement and analysis, parallel computing, profiling, message passing, overhead compensation

Performance profiling of MPI programs generates overhead during execution that introduces error in profile measurements. It is possible to track and remove overhead online, but it is necessary to communicate execution delay between processes to correctly adjust their interdependent timing. We demonstrate the first implementation of an online measurement overhead compensation system for profiling MPI programs. This is implemented in the TAU performance system. It requires novel techniques for delay communication in the use of MPI. The ability to reduce measurement error is demonstrated for problematic test cases and real applications.
[epvmmpi05.b]
S. Moore, F. Wolf, J. Dongarra, S. Shende, A. Malony, and B. Mohr, "A Scalable Approach to MPI Application Performance Analysis," in Proc. of EuroPVM/MPI 2005, (eds. B. Di Martino) LNCS 3666, Springer, pp. 309-316, 2005.
Keywords: TAU, scalability, Performance measurement and analysis, parallel computing, profiling, tracing, KOJAK

A scalable approach to performance analysis of MPI applications is presented that includes automated source code instrumentation, low overhead generation of profile and trace data, and database management of performance data. In addition, tools are described that analyze large-scale parallel profile and trace data. Analysis of trace data is done using an automated pattern-matching approach. Examples of using the tools on large-scale MPI applications are presented.
[etpsc92]
B. Mohr, "Standardization of Event Traces Considered Harmful or Is an Implementation of Object-Independent Event Trace Monitoring and Analysis Systems Possible?," Proceedings of the CNRS-NSF Workshop on Environments and Tools For Parallel Scientific Computing, St. Hilaire du Touvet, France, Elsevier, Advances in Parallel Computing, Vol. 6, September 1992, pp. 103-124.
Keywords: event trace, analysis tools, monitoring, objects, object-independent monitoring, standardization of event trace formats, access interfaces

Programming non-sequential computer systems is hard! Many tools and environments have been designed and implemented to ease the use and programming of such systems. The majority of analysis tools are event-based and use event traces for representing the dynamic behavior of the system under investigation, the object system. Most tools can only be used for one special object system, or a specific class of systems such as distributed shared memory machines. This limitation is not obvious because all tools provide the same basic functionality.

This article discusses approaches to implementing object-independent event trace monitoring and analysis systems. The term object-independent means that the system can be used for the analysis of arbitrary (non-sequential) computer systems, operating systems, programming languages and applications. Three main topics are addressed: object-independent monitoring, standardization of event trace formats and access interfaces and the application-independent but problem-oriented implementation of analysis and visualization tools. Based on these approaches, the distributed hardware monitor system ZM4 and the SIMPLE event trace analysis environment were implemented, and have been used in many 'real-world' applications throughout the last three years. An overview of the projects in which the ZM4/SIMPLE tools were used is given in the last section.

[etpsc94]
Darryl I. Brown, Steven T. Hackstadt, Allen D. Malony, and Bernd Mohr, "Program Analysis Environments for Parallel Language Systems: The TAU Environment," Proc. of the Workshop on Environments and Tools For Parallel Scientific Computing, Townsend, TN, May 1994, pp. 162-171.
Keywords: parallel tool, pC++, integrated tool framework

In this paper, we discuss TAU (Tuning and Analysis Utilities), the first prototype of an integrated and portable program analysis environment for pC++, a parallel object-oriented language system. TAU is unique in that it was developed specifically for pC++ and relies heavily on pC++'s compiler and transformation tools (specifically, the Sage++ toolkit) for its implementation. This tight integration allows TAU to achieve a combination of portability, functionality, and usability not commonly found in high-level language environments. The paper describes the design and functionality of TAU, using a new tool for breakpoint-based program analysis as an example of TAU's capabilities.
[etpsc96]
Janice Cuny, Robert Dunn, Steven T. Hackstadt, Christopher Harrop, Harold H. Hersey, Allen D. Malony, and Douglas Toomey, "Building Domain-Specific Environments for Computational Science: A Case Study in Seismic Tomography," International Journal of Supercomputing Applications and High Performance Computing, Vol. 11, No. 3, Fall 1997. Also appearing in the Proceedings of the Workshop on Environments and Tools For Parallel Scientific Computing, Lyon, France, August 1996.
Keywords: computational science, domain-specific environments, seismic tomography, visualization, distributed data access

We report on our experiences in building a computational environment for tomographic image analysis for marine seismologists studying the structure and evolution of mid-ocean ridge volcanism. The computational environment is determined by an evolving set of requirements for this problem domain and includes needs for high-performance parallel computing, large data analysis, model visualization, and computation interaction and control. Although these needs are not unique in scientific computing, the integration of techniques for seismic tomography with tools for parallel computing and data analysis into a computational environment was (and continues to be) an interesting, important learning experience for researchers in both disciplines. For the geologists, the use of the environment led to fundamental geologic discoveries on the East Pacific Rise, the improvement of parallel ray tracing algorithms, and a better regard for the use of computational steering in aiding model convergence. The computer scientists received valuable feedback on the use of programming, analysis, and visualization tools in the environment. In particular, the tools for parallel program data query (DAQV) and visualization programming (Viz) were demonstrated to be highly adaptable to the problem domain. We discuss the requirements and the components of the environment in detail. Both accomplishments and limitations of our work are presented.
[europar03]
R. Bell, A. D. Malony, and S. Shende, "A Portable, Extensible, and Scalable Tool for Parallel Performance Profile Analysis", Proc. EUROPAR 2003 conference, LNCS 2790, Springer, Berlin, pp. 17-26, 2003.
Keywords: ParaProf, TAU, jracy, Profile Browser, scalable, visualization

This paper presents the design, implementation, and application of ParaProf, a portable, extensible, and scalable tool for parallel performance profile analysis. ParaProf attempts to offer "best of breed" capabilities to performance analysts -- those inherited from a rich history of single processor profilers and those being pioneered in parallel tools research. We present ParaProf as a parallel profile analysis framework that can be retargeted and extended as required. ParaProf's design and operation is discussed, and its novel support for large-scale parallel analysis demonstrated with a 512-processor application profile generated using the TAU performance system.
[europar04]
A. D. Malony, and S. S. Shende, "Overhead Compensation in Performance Profiling," Proc. Europar 2004 Conference, LNCS 3149, Springer, pp. 119-132, 2004.
Keywords: Performance measurement and analysis, parallel computing, profiling, intrusion, overhead compensation

Measurement-based profiling introduces intrusion in program execution. Intrusion effects can be mitigated by compensating for measurement overhead. Techniques for compensation analysis in performance profiling are presented and their implementation in the TAU performance system described. Experimental results on the NAS parallel benchmarks demonstrate that overhead compensation can be effective in improving the accuracy of performance profiling.
[europar05]
Allen D. Malony and Sameer Shende, "Models for On-the-Fly Compensation of Measurement Overhead in Parallel Performance Profiling," in Proc. EuroPar 2005 Conference, (eds. J. C. Cunha and P. D. Medeiros), LNCS 3648, Springer, pp. 72-82, 2005.
Keywords: Performance measurement and analysis, parallel computing, profiling, intrusion, overhead compensation, TAU

Performance profiling generates measurement overhead during parallel program execution. Measurement overhead, in turn, introduces intrusion in a program's runtime performance behavior. Intrusion can be mitigated by controlling instrumentation degree, allowing a tradeoff of accuracy for detail. Alternatively, the accuracy in profile results can be improved by reducing the intrusion error due to measurement overhead. Models for compensation of measurement overhead in parallel performance profiling are described. An approach based on rational reconstruction is used to understand properties of compensation solutions for different parallel scenarios. From this analysis, a general algorithm for on-the-fly overhead assessment and compensation is derived.
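The essential idea of overhead compensation can be sketched with a toy single-process model: a fixed per-call measurement cost is attributed to each routine and subtracted from its measured inclusive time. All names and numbers here are illustrative assumptions; the papers' actual models additionally handle call nesting details and, for parallel runs, inter-process delay propagation:

```python
# Toy sketch of profile overhead compensation: each instrumented call is
# assumed to cost a fixed measurement overhead, and the compensated
# profile subtracts the overhead incurred by all measured calls at or
# below each routine from that routine's inclusive time.
# The overhead value and profile data are illustrative, not real.

PER_CALL_OVERHEAD = 0.5  # assumed measurement cost per call, microseconds

def compensate(profile):
    """profile maps routine -> {'inclusive_us': measured inclusive time,
    'calls_at_or_below': number of measured calls in that subtree}."""
    adjusted = {}
    for routine, data in profile.items():
        overhead = PER_CALL_OVERHEAD * data["calls_at_or_below"]
        adjusted[routine] = data["inclusive_us"] - overhead
    return adjusted

measured = {
    "main":    {"inclusive_us": 1000.0, "calls_at_or_below": 101},
    "compute": {"inclusive_us":  800.0, "calls_at_or_below": 100},
}
print(compensate(measured))
# main: 1000.0 - 0.5*101 = 949.5; compute: 800.0 - 0.5*100 = 750.0
```

The on-the-fly variant described in the paper performs this accounting during execution rather than post-mortem, which is what forces the overhead estimate to be maintained incrementally as calls occur.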
[europar07]
A. Nataraj, M. Sottile, A. Morris, A. D. Malony, and S. Shende, "TAUoverSupermon: Low-overhead Online Parallel Performance Monitoring," Presented at EuroPar 2007.
Keywords: Online performance measurement, cluster monitoring, TAU, supermon

Online application performance monitoring allows tracking performance characteristics during execution as opposed to doing so post-mortem. This opens up several possibilities otherwise unavailable such as real-time visualization and application performance steering that can be useful in the context of long-running applications. As HPC systems grow in size and complexity, the key challenge is to keep the online performance monitor scalable and low overhead while still providing a useful performance reporting capability. Two fundamental components that constitute such a performance monitor are the measurement and transport systems. We adapt and combine two existing, mature systems - TAU and Supermon - to address this problem. TAU performs the measurement while Supermon is used to collect the distributed measurement state. Our experiments show that this novel approach leads to very low-overhead application monitoring as well as other benefits unavailable from using a transport such as NFS.
[europar08]
A. Morris, W. Spear, A. D. Malony and S. Shende. "Observing Performance Dynamics using Parallel Profile Snapshots," European Conference on Parallel Processing (EuroPar 2008). August 2008
Keywords: TAU, Performance Snapshots, Performance Data, Performance Analysis

Performance analysis tools are only as useful as the data they collect. Not just accuracy of performance data, but accessibility, is necessary for performance analysis tools to be used to their full effect. The diversity of performance analysis and tuning problems calls for more flexible means of storing and representing performance data. The development and maintenance cycles of high performance programs, in particular, stand to benefit from exploration of and expansion of the means used to record and describe program execution behavior. We describe a means of representing program performance data via a time or event delineated series of performance profiles, or profile snapshots, implemented in the TAU performance analysis system. This includes an explanation of the profile snapshot format and means of snapshot analysis.
[europar08b]
K. Huck, W. Spear, A. Malony, S. Shende, and A. Morris, Parametric Studies in Eclipse with TAU and PerfExplorer, Workshop on Productivity and Performance (PROPER 2008), EuroPar 2008, Las Palmas de Gran Canaria, Spain, August, 2008.
Keywords: TAU, PerfExplorer, Eclipse, Parametric

With support for C/C++, Fortran, MPI, OpenMP, and performance tools, the Eclipse integrated development environment (IDE) is a serious contender as a programming environment for parallel applications. There is interest in adding capabilities in Eclipse for conducting workflows where an application is executed under different scenarios and its outputs are processed. For instance, parametric studies are a requirement in many benchmarking and performance tuning efforts, yet there was no experiment management support available for the Eclipse IDE. In this paper, we describe an extension of the Parallel Tools Platform (PTP) plugin for the Eclipse IDE. The extension provides a graphical user interface for selecting experiment parameters, launches build and run jobs, manages the performance data, and launches an analysis application to process the data. We describe our implementation, and discuss three experiment examples which demonstrate the experiment management support.
[europar96]
Steven T. Hackstadt and Allen D. Malony, Distributed Array Query and Visualization for High Performance Fortran, Proc. of Euro-Par '96, Lyon, France, August 1996, pp. 55-63. Also available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-96-02, February 1996.
Keywords: visualization, distributed data access, hpf, parallel tool

This paper describes the design and implementation of the Distributed Array Query and Visualization (DAQV) system for High Performance Fortran, a project sponsored by the Parallel Tools Consortium. DAQV's implementation leverages the HPF language, compiler, and runtime system to address the general problem of providing high-level access to distributed data structures. DAQV supports a framework in which visualization and analysis clients connect to a distributed array server (i.e., the HPF application with DAQV control) for program-level access to array values. Implementing key components of DAQV in HPF itself has led to a robust and portable solution in which clients do not need to know how the data is distributed.
[europar99]
Matthew Sottile and Allen Malony, INTERLACE: An Interoperation and Linking Architecture for Computational Engines, Proceedings of EuroPar 99 Conference, LNCS 1685, Springer, Berlin, pp.135-138, 1999.
Keywords: heterogeneous computing, reusability, computational servers, client/server, distributed objects, MatLab

To aid in building high-performance computational environments, INTERLACE offers a framework for linking reusable computational engines in a heterogeneous distributed system. The INTERLACE model provides clients with access to computational servers which interface with "wrapped" computational engines. The wrappers implement mechanisms to translate client requests to engine actions and to move data across the server interface. These mechanisms are programmable, allowing engines of different type to be integrated. The framework takes advantage of the HPC++ runtime system to access servers through distributed object operations. The INTERLACE framework has been demonstrated by building a distributed computational environment with MatLab engines.
[europara06]
A. Nataraj, A. Malony, A. Morris, S. Shende, "Early Experiences with KTAU on the IBM BG/L," Proc. EUROPAR 2006 Conference, Springer, LNCS 4128, pp. 99-110, 2006.
Keywords: TAU, KTAU, IBM BG/L, kernel profiling, zeptoOS, performance analysis

The influences of OS and system-specific effects on application performance are increasingly important in high performance computing. In this regard, OS kernel measurement is necessary to understand the interrelationship of system and application behavior. This can be viewed from two perspectives: kernel-wide and process-centric. An integrated methodology and framework to observe both views in HPC systems using OS kernel measurement has remained elusive. We demonstrate a new tool called KTAU (Kernel TAU) that aims to provide parallel kernel performance measurement from both perspectives. KTAU extends the TAU performance system with kernel-level monitoring, while leveraging TAU's measurement and analysis capabilities. As part of the ZeptoOS scalable operating systems project, we report early experiences using KTAU in ZeptoOS on the IBM BG/L system.
[europara06c]
L. Li and A. Malony, "Model-Based Performance Diagnosis of Master-Worker Parallel Computations," Euro-Par 2006 Parallel Processing Conference, LNCS 4128, Springer, pp. 35-46, September 2006.
Keywords: Performance diagnosis, parallel models, master-worker, measurement, analysis.

Parallel performance tuning naturally involves a diagnosis process to locate and explain sources of program inefficiency. Proposed is an approach that exploits parallel computation patterns (models) for diagnosis discovery. Knowledge of performance problems and inference rules for hypothesis search are engineered from model semantics and analysis expertise. In this manner, the performance diagnosis process can be automated as well as adapted for parallel model variations. We demonstrate the implementation of model-based performance diagnosis on the classic Master-Worker pattern. Our results suggest that pattern-based performance knowledge can provide effective guidance for locating and explaining performance bugs at a high level of program abstraction.
[europvm06]
K. Huck, A. Malony, S. Shende and A. Morris. "TAUg: Runtime Global Performance Data Access using MPI." EuroPVM/MPI Conference, LNCS 4192, pp. 313-321, Springer, September 2006.
Keywords: TAU, TAUg, global data access, performance monitoring, online performance adaptation

To enable a scalable parallel application to view its global performance state, we designed and developed TAUg, a portable runtime framework layered on the TAU parallel performance system. TAUg leverages the MPI library to communicate between application processes, creating an abstraction of a global performance space from which profile views can be retrieved. We describe the TAUg design and implementation and show its use on two test benchmarks up to 512 processors. Overhead evaluation for the use of TAUg is included in our analysis. Future directions for improvement are discussed.
[ewomp01]
B. Mohr, A. D. Malony, S. Shende, and F. Wolf, "Towards a Performance Tool Interface for OpenMP: An Approach Based on Directive Rewriting," Proceedings of EWOMP'01 Third European Workshop on OpenMP, Sept. 2001.
Keywords: OpenMP, directive rewriting, instrumentation interface, TAU, EXPERT

In this article we propose a "standard" performance tool interface for OpenMP, similar in spirit to the MPI profiling interface in its intent to define a clear and portable API that makes OpenMP execution events visible to performance libraries. When used together with the MPI profiling interface, it also allows tools to be built for hybrid applications that mix shared and distributed memory programming. We describe an instrumentation approach based on OpenMP directive rewriting that generates calls to the interface and passes context information (e.g., source code locations) in a portable and efficient way. Our proposed OpenMP performance API further allows user functions and arbitrary code regions to be marked and performance measurement to be controlled using new proposed OpenMP directives. The directive transformations we define are implemented in a source-to-source translation tool called OPARI. We have used it to integrate the TAU performance analysis framework and the automatic event trace analyzer EXPERT with the proposed OpenMP performance interface. Together, these tools show that a portable and robust solution to performance analysis of OpenMP and hybrid applications is possible.
[ewomp02]
Bernd Mohr, Allen D. Malony, Hans-Christian Hoppe, Frank Schlimbach, Grant Haab, Jay Hoeflinger, and Sanjiv Shah, "A Performance Monitoring Interface for OpenMP," Presented at EWOMP 2002.
Keywords: OpenMP, Performance Monitoring, OMPI, POMP
[fmpc88]
Allen D. Malony, "Regular Processor Arrays," Proc., 2nd Symposium on the Frontiers of Massively Parallel Computation, IEEE, pp. 499-502, 1988.
Keywords: regularity, processor arrays, emulation, interconnection networks.

"Regular" is an often-used term to suggest simple and uniform structure of a parallel processor's organization or a parallel algorithm's operation. However, a strict definition is long overdue. In this paper, we define regularity for processor array structures in two dimensions and enumerate the eleven distinct regular topologies. Space and time emulation schemes among the regular processor arrays are constructed to compare their geometric and performance characteristics. The hexagonal array is shown to have the most efficient emulation capabilities.
[front95]
J. Kundu and J. E. Cuny, "A Scalable, Visual Interface for Debugging with Event-Based Behavioral Abstraction," Frontiers of Massively Parallel Computing, 1995, pp. 472-479.
[hcca89]
A. D. Malony, D. A. Reed, J. W. Arendt, R. A. Aydt, D. Grabas, and B. K. Totty, "An Integrated Performance Data Collection, Analysis, and Visualization System," Proc. Fourth Conference on Hypercube Concurrent Computers and Applications, Mar. 1989. Also appears as Technical Report UIUCDCS-R-89-1504, Center for Supercomputing Research and Development, U. of Ill., March 1989.
Keywords: Intel iPSC/2, Performance Analysis

The lack of tools to observe the operation and performance of message-based parallel architectures limits the user's ability to effectively optimize application and system performance. Performance data collection, analysis, and visualization tools are needed to manage the complexity and quantity of performance data. Furthermore, these tools must be integrated with the machine hardware, the system software, and the applications support software if they are to find pervasive use in program development and experimentation. In this paper, we describe an integrated performance environment being developed for the Intel iPSC/2 hypercube. The data collection components of the environment include software event tracing at the operating system and program levels plus a hardware-based performance monitoring system used to unobtrusively capture software events. A visualization system, based on the X window system, permits the performance analyst to browse and explore interesting data components by dynamically interconnecting new performance displays and data analysis tools.
[hipc95]
D. Brown, A. Malony, B. Mohr, Language-based Parallel Program Interaction: the Breezy Approach, Proceedings of the International Conference on High Performance Computing (HiPC'95), India, December 1995.
Keywords: runtime interaction, data-parallel program, language integration, language-based tools

This paper presents a general architecture for runtime interaction with a data-parallel program. We have applied this architecture in the development of the Breezy tool for the pC++ language. Breezy grants application programs convenient and efficient access to higher-level external services (e.g., databases, visualization systems, and distributed resources) and allows external access to the application's state (e.g., for program state display or computational steering). Although such support can be developed on an ad-hoc basis for each application, a general approach to the problem of parallel program interaction is preferred. A general approach makes tools more portable and retargetable to different language systems.

There are two main conclusions from this work. First, interaction support should be integrated with a language system facilitating an implementation of a model that is consistent with the language design. This aids application developers or the tool builders that require this interaction. Second, as the implementation of Breezy shows, the development of interaction support can leverage off the language itself as well as its compiler and runtime systems.

[hpcc05]
F. Wolf, A. D. Malony, S. Shende, and A. Morris, "Trace-Based Parallel Performance Overhead Compensation," in Proc. of HPCC 2005 Conference, (eds. L. T. Yang, et al.), LNCS 3726, Springer, pp. 617-628, 2005.
Keywords: KOJAK, Performance measurement, analysis, parallel computing, tracing, message passing, overhead compensation

Tracing parallel programs to observe their performance introduces intrusion as the result of trace measurement overhead. If post-mortem trace analysis does not compensate for the overhead, the intrusion will lead to errors in the performance results. We show that measurement overhead can be accounted for during trace analysis and intrusion modeled and removed. Algorithms developed in our earlier work are reimplemented in a more robust and modern tool, KOJAK, allowing them to be applied in large-scale parallel programs. The ability to reduce trace measurement error is demonstrated for a Monte-Carlo simulation based on a master/worker scheme. As an additional result, we visualize how local perturbation propagates across process boundaries and alters the behavioral characteristics of non-local processes.
[hpcc06]
W. Spear, A. Malony, A. Morris, S. Shende, "Integrating TAU with Eclipse: A Performance Analysis System in an Integrated Development Environment," High Performance Computing and Communications (HPCC) Conference, September 2006, LNCS 4208, pp. 230-239.
Keywords: Eclipse, Integrated Development Environment, IDE, TAU

The Eclipse platform offers Integrated Development Environment support for a diverse and growing array of programming applications and languages. There is an increasing call for programming tools to support various development tasks from within Eclipse. This includes tools for testing and analyzing program performance. We describe the high-level synthesis of the Eclipse platform with the TAU parallel performance analysis system. By leveraging Eclipse's modularity and extensibility with TAU's robust automated performance analysis mechanisms we produce an integrated, GUI controlled performance analysis system for Java, C/C++ and High Performance Computing development within Eclipse.
[hpcc06b]
L. Li, A. D. Malony, and K. Huck, "Model-Based Relative Performance Diagnosis of Wavefront Parallel Computations", in International Conference on High Performance Computing and Communications (HPCC2006), (Munich, Germany), 2006.
Keywords: performance diagnosis, parallel models, wavefront, relative analysis

Parallel performance diagnosis can be improved with the use of performance knowledge about parallel computation models. The Hercule diagnosis system applies model-based methods to automate performance diagnosis processes and explain performance problems from high-level computation semantics. However, Hercule is limited by a single experiment view. Here we introduce the concept of relative performance diagnosis and show how it can be integrated in a model-based diagnosis framework. The paper demonstrates the effectiveness of Hercule's approach to relative diagnosis of the well-known Sweep3D application based on a Wavefront model. Relative diagnoses of Sweep3D performance anomalies in strong and weak scaling cases are given.
[hpccbse04]
B. Norris, J. Ray, R. Armstrong, L. C. McInnes, D. E. Bernholdt, W. R. Elwasif, A. D. Malony, and S. Shende, "Computational Quality of Service for Scientific Components," Proceedings of the International Symposium on Component-Based Software Engineering (CBSE7), Edinburgh, Scotland, LNCS 3054, Springer, pp. 264-271, May 2004. Also available as Argonne National Laboratory preprint ANL/MCS-P1131-0204.
Keywords: QoS, Components, CCA, TAU

Scientific computing on massively parallel computers presents unique challenges to component-based software engineering (CBSE). While CBSE is at least as enabling for scientific computing as it is for other arenas, the requirements are different. We briefly discuss how these requirements shape the Common Component Architecture, and we describe some recent research on quality-of-service issues to address the computational performance and accuracy of scientific simulations.
[hpcn99]
A. Malony, J. Skidmore, and M. Sottile, Computational Experiments using Distributed Tools in a Web-based Electronic Notebook Environment, Proceedings of HPCN Europe '99, LNCS 1593, Springer, Berlin, pp. 381-390, April 1999.
Keywords:

Computational environments used by scientists should provide high-level support for scientific processes that involve the integrated and systematic use of familiar abstractions from a laboratory setting, including notebooks, instruments, experiments, and analysis tools. However, doing so while hiding the complexities of the underlying computational platform is a challenge. ViNE is a web-based electronic notebook that implements a high-level interface for applying computational tools in scientific experiments in a location- and platform-independent manner. Using ViNE, a scientist can specify data and tools, and construct experiments that apply them in well-defined procedures. ViNE's implementation of the experiment abstraction offers the scientist an easy-to-understand framework for building scientific processes. This paper discusses how ViNE implements computational experiments in distributed, heterogeneous computing environments.
[hpdc98]
Steven T. Hackstadt, Christopher W. Harrop, and Allen D. Malony, A Framework for Interacting with Distributed Programs and Data, Proceedings of the Seventh IEEE International Symposium on High Performance Distributed Computing (HPDC7), Chicago, IL, July 28-31, 1998, pp. 206-214. Also available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-98-02, June 1998.
Keywords: parallel tools, distributed arrays, visualization, computational steering, model coupling, runtime interaction, data access, Fortran 90

The Distributed Array Query and Visualization (DAQV) project aims to develop systems and tools that facilitate interacting with distributed programs and data structures. Arrays distributed across the processes of a parallel or distributed application are made available to external clients via well-defined interfaces and protocols. Our design considers the broad issues of language targets, models of interaction, and abstractions for data access, while our implementation attempts to provide a general framework that can be adapted to a range of application scenarios. The paper describes the second generation of DAQV work and places it in the context of the more general distributed array access problem. Current applications and future work are also described.
[iccs03]
J. Dongarra, A. D. Malony, S. Moore, P. Mucci, and S. Shende, "Performance Instrumentation and Measurement for Terascale Systems," Proc. International Conference on Computational Science (ICCS 2003), LNCS 2660, Springer, Berlin, pp. 53-62, 2003.
Keywords: TAU, PAPI, Perfometer, instrumentation, measurement, performance analysis, terascale

As computer systems grow in size and complexity, tool support is needed to facilitate the efficient mapping of large-scale applications onto these systems. To help achieve this mapping, performance analysis tools must provide robust performance observation capabilities at all levels of the system, as well as map low-level behavior to high-level program constructs. Instrumentation and measurement strategies, developed over the last several years, must evolve together with performance analysis infrastructure to address the challenges of new scalable parallel systems.
[iccs03b]
Michael O. McCracken, Allan Snavely, Allen Malony, "Performance Modeling for Dynamic Algorithm Selection," Proc. International Conference on Computational Science (ICCS'03), LNCS 2660, Springer, Berlin, pp. 749-758, 2003.

Adaptive algorithms are an important technique to achieve portable high performance. They choose among solution methods and optimizations according to expected performance on a particular machine. Grid environments make the adaptation problem harder, because the optimal decision may change across runs and even during runtime. Therefore, the performance model used by an adaptive algorithm must be able to change decisions without high overhead. In this paper, we present work that is modifying previous research into rapid performance modeling to support adaptive grid applications through sampling and high granularity modeling. We also outline preliminary results that show the ability to predict differences in performance among algorithms in the same program.
[iccs05]
Adnan Salman, Sergei Turovets, Allen Malony, Jeff Eriksen, and Don Tucker, "Computational Modeling of Human Head Conductivity," Presented at the International Conference on Computational Science (ICCS 2005).
Keywords: Computational Modeling, alternating direction implicit, algorithm, ADI, OpenMP

The computational environment for estimation of unknown regional electrical conductivities of the human head, based on realistic geometry from segmented MRI up to 256 resolution, is described. A finite difference alternating direction implicit (ADI) algorithm, parallelized using OpenMP, is used to solve the forward problem describing the electrical field distribution throughout the head given known electrical sources. A simplex search in the multi-dimensional parameter space of tissue conductivities is conducted in parallel using a distributed system of heterogeneous computational resources. The theoretical and computational formulation of the problem is presented. Results from test studies are provided, comparing retrieved conductivities to known solutions from simulation. Performance statistics are also given showing both the scaling of the forward problem and the performance dynamics of the distributed search.
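The ADI scheme summarized in this abstract advances the field by sweeping one coordinate direction at a time, and each implicit sweep reduces to many independent tridiagonal solves (one per grid line), which is what makes line-level OpenMP parallelization natural. As an illustrative sketch only (the paper's solver is a 3D finite-difference code; the function below is a hypothetical stand-in), the standard Thomas algorithm for one such tridiagonal system can be written as:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system Ax = d by the Thomas algorithm.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    """
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# For the system with diagonal [2,2,2], off-diagonals of 1, and
# right-hand side [4,8,8], the solution is approximately [1, 2, 3].
sol = thomas_solve([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0],
                   [4.0, 8.0, 8.0])
```

In an ADI head-model solver, one such solve would run per grid line and per sweep direction, and since the lines within a sweep are independent they can be distributed across OpenMP threads.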
[iccs08]
W. Spear, A. Malony, A. Morris, S. Shende, "Performance Tool Workflows," International Conference on Computational Science 2008 (ICCS 2008), Krakow, Poland, June 2008.
Keywords: TAU, Eclipse, Performance Workflow, Performance Analysis

Using the Eclipse platform we have provided a centralized resource and unified user interface for the encapsulation of existing command-line based performance analysis tools. In this paper we describe the user-definable tool workflow system provided by this performance framework. We discuss the framework's implementation and the rationale for its design. A use case featuring the TAU performance analysis system demonstrates the utility of the workflow system with respect to conventional performance analysis procedures.
[iccs09]
H. Jagode, J. Dongarra, S. Alam, J. Vetter, W. Spear, A. Malony. A Holistic Approach for Performance Measurement and Analysis for Petascale Applications. International Conference on Computational Science (ICCS 2009), Baton Rouge, LA, 2009
Keywords: TAU, Performance Analysis, Performance Tools, Profiling, Tracing, Trace files, Petascale Applications, Petascale Systems

Contemporary high-end Terascale and Petascale systems are composed of hundreds of thousands of commodity multi-core processors interconnected with high-speed custom networks. Performance characteristics of applications executing on these systems are a function of system hardware and software as well as workload parameters. Therefore, it has become increasingly challenging to measure, analyze and project performance using a single tool on these systems. In order to address these issues, we propose a methodology for performance measurement and analysis that is aware of applications and the underlying system hierarchies. On the application level, we measure cost distribution and runtime dependent values for different components of the underlying programming model. On the system front, we measure and analyze information gathered for unique system features, particularly shared components in the multi-core processors. We demonstrate our approach using a Petascale combustion application called S3D on two high-end Teraflops systems, Cray XT4 and IBM Blue Gene/P, using a combination of hardware performance monitoring, profiling and tracing tools.
[iccs09b]
M. Geimer, S. Shende, A. D. Malony, F. Wolf, "A Generic and Configurable Source-Code Instrumentation Component". Proceedings of the International Conference on Computational Science 2009. pp. 696-705, LNCS 5545.
Keywords: TAU, instrumentation, source preprocessing

A common prerequisite for a number of debugging and performance-analysis techniques is the injection of auxiliary program code into the application under investigation, a process called instrumentation. To accomplish this task, source-code preprocessors are often used. Unfortunately, existing preprocessing tools either focus only on a very specific aspect or use hard-coded commands for instrumentation. In this paper, we examine which basic constructs are required to specify a user-defined routine entry/exit instrumentation. This analysis serves as a basis for a generic instrumentation component working on the source-code level where the instructions to be inserted can be flexibly configured. We evaluate the identified constructs with our prototypical implementation and show that these are sufficient to fulfill the needs of a number of today's performance-analysis tools.
[iccs11]
J. Enkovaara, N. A. Romero, S. Shende, J. J. Mortensen, "GPAW - massively parallel electronic structure calculations with Python-based software." International Conference on Computational Science 2011.
Keywords: Python, Numpy, MPI, Density-functional theory

Electronic structure calculations are a widely used tool in materials science and a large consumer of supercomputing resources. Traditionally, the software packages for these kinds of simulations have been implemented in compiled languages, where Fortran in its different versions has been the most popular choice. While dynamic, interpreted languages, such as Python, can increase programmer efficiency, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity enhancing features together with a good numerical performance. We have used this approach in implementing the electronic structure simulation software GPAW using the combination of Python and C programming languages. While the chosen approach works well in standard workstations and Unix environments, massively parallel supercomputing systems can present some challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges it is possible to obtain good numerical performance and good parallel scalability with Python-based software.
[icpp05]
K. A. Huck, A. D. Malony, R. Bell, and A. Morris, "Design and Implementation of a Parallel Performance Data Management Framework," Proc. International Conference on Parallel Processing (ICPP 2005), IEEE Computer Society, 2005.
Keywords: TAU, PerfDMF, ParaProf, Performance Data Management Framework

Empirical performance evaluation of parallel systems and applications can generate significant amounts of performance data and analysis results from multiple experiments as performance is investigated and problems diagnosed. Hence, the management of performance information is a core component of performance analysis tools. To better support tool integration, portability, and reuse, there is a strong motivation to develop performance data management technology that can provide a common foundation for performance data storage, access, merging, and analysis. This paper presents the design and implementation of the Performance Data Management Framework (PerfDMF). PerfDMF addresses objectives of performance tool integration, interoperation, and reuse by providing common data storage, access, and analysis infrastructure for parallel performance profiles. PerfDMF includes an extensible parallel profile data schema and relational database schema, a profile query and analysis programming interface, and an extendible toolkit for profile import/export and standard analysis. We describe the PerfDMF objectives and architecture, give detailed explanation of the major components, and show examples of PerfDMF application.
[icpp09]
S. Biersdorff, C. W. Lee, A. Malony, L. V. Kale, "Integrated Performance Views in Charm++: Projections Meets TAU." International Conference on Parallel Processing, September 2009.
Keywords: Charm++, TAU, Projections, NAMD

The Charm++ parallel programming system provides a modular performance interface that can be used to extend its performance measurement and analysis capabilities. The interface exposes execution events of interest representing Charm++ scheduling operations, application methods/routines, and communication events for observation by alternative performance modules configured to implement different measurement features. The paper describes the Charm++ performance interface and how the Charm++ Projections tool and the TAU Performance System can provide integrated trace-based and profile-based performance views. These two tools are complementary, providing the user with different performance perspectives on Charm++ applications based on performance data detail and temporal and spatial analysis. How the tools work in practice is demonstrated in a parallel performance analysis of NAMD, a scalable molecular dynamics code that applies many of Charm++'s unique features.
[icpp10]
A. Morris, A. Malony, S. Shende, and K. Huck, "Design and Implementation of a Hybrid Parallel Performance Measurement System," International Conference on Parallel Processing, September 2010, pp. 492-501.
Keywords: parallel, performance, measurement, analysis, sampling, tracing, profiling

Modern parallel performance measurement systems collect performance information either through probes inserted in the application code or via statistical sampling. Probe-based techniques measure performance metrics directly using calls to a measurement library that execute as part of the application. In contrast, sampling-based systems interrupt program execution to sample metrics for statistical analysis of performance. Although both measurement approaches are represented by robust tool frameworks in the performance community, each has its strengths and weaknesses. In this paper, we investigate the creation of a hybrid measurement system, the goal being to exploit the strengths of both systems and mitigate their weaknesses. We show how such a system can be used to provide the application programmer with a more complete analysis of their application. Simple example and application codes are used to demonstrate its capabilities. We also show how the hybrid techniques can be combined to provide real cross-language performance evaluation of an uninstrumented run for mixed compiled/interpreted execution environments (e.g., Python and C/C++/Fortran).
[icpp11]
A. D. Malony, S. Biersdorff, S. Shende, H. Jagode, S. Tomov, G. Juckeland, R. Dietrich, D. Poole and C. Lamb, "Parallel Performance Measurement of Heterogeneous Parallel Systems with GPUs," Presented at the International Conference on Parallel Processing, September 2011.
Keywords: TAU, PAPI, Vampir, GPU, CUDA, Heterogeneous systems

The power of GPUs is giving rise to heterogeneous parallel computing, with new demands on programming environments, runtime systems, and tools to deliver high-performing applications. This paper studies the problems associated with performance measurement of heterogeneous machines with GPUs. A heterogeneous computation model and alternative host-GPU measurement approaches are discussed to set the stage for reporting new capabilities for heterogeneous parallel performance measurement in three leading HPC tools: PAPI, Vampir, and the TAU Performance System. Our work leverages the new CUPTI tool support in NVIDIA’s CUDA device library. Heterogeneous benchmarks from the SHOC suite are used to demonstrate the measurement methods and tool support.
[icpp86]
Walid Abu-Sufah, Allen D. Malony, "Vector Processing on the Alliant FX/8 Multiprocessor," Proc. of ICPP 1986, pp. 559-566, 1986.
Keywords: Alliant, Vector processing, Cedar, performance measurement.

The Alliant FX/8 multiprocessor implements several high-speed computation ideas in software and hardware. Each of the 8 computational elements (CEs) has vector capabilities and multiprocessor support. Generally, the FX/8 delivers its highest processing rates when executing vector loops concurrently. In this paper, we present extensive empirical performance results for vector processing on the FX/8. The vector kernels of the LANL BMK8a1 benchmark are used in the experiments.
[icpp87]
Allen D. Malony, Daniel A. Reed, Patrick J. McGuire, "MPF: A Portable Message Passing Facility for Shared Memory Multiprocessors," Proc. ICPP 1987, pp. 739-741, 1987.
Keywords: message passing, shared memory

A message passing facility (MPF) for shared memory multiprocessors is presented. MPF is based on a message passing model conceptually similar to conversations. The message passing primitives for this model are implemented as a portable library of C function calls. The performance of interprocess communication benchmark programs and two parallel applications are given.
[icpp95]
J. Kundu and J. E. Cuny, The Integration of Event- and State-Based Debugging in Ariadne, Proceedings of the International Conference on Parallel Processing (ICPP '95), August 1995, pp. II 130-134.
Keywords: event-based debugging, state-based debugging, Ariadne
[ics10]
A. Malony, S. Biersdorff, W. Spear, S. Mayanglambam, "An Experimental Approach to Performance Measurement of Heterogeneous Parallel Applications using CUDA," Presented at the International Conference on Supercomputing, Tsukuba, Japan, 2010.
Keywords: Performance tools, GPGPU, profiling, tracing

Heterogeneous parallel systems using GPU devices for application acceleration have garnered significant attention in the supercomputing community. However, to realize the full potential of GPU computing, application developers will require tools to measure and analyze accelerator performance with respect to the parallel execution as a whole. A performance measurement technology for the NVIDIA CUDA platform has been developed and integrated with the TAU parallel performance system. The design of the TAUcuda package is based on an experimental NVIDIA CUDA driver and associated runtime and device libraries. In any environment where the CUDA experimental driver is installed, TAUcuda can provide detailed performance information regarding the execution of GPU kernels and the interactions with the parallel program without any modification to the program source or executable code. The paper describes the TAUcuda technology and how it is integrated with the TAU measurement framework to provide integrated performance views. Various examples of TAUcuda use are presented, including CUDA SDK examples, a GPU version of the Linpack benchmark, and a scalable molecular dynamics application, NAMD.
[ics89]
Kyle Gallivan, William Jalby, Allen Malony, Harry Wijshoff, "Performance Prediction of Loop Constructs on Multiprocessor Hierarchical-Memory Systems," Proc. 3rd International Conference on Supercomputing (ICS'89), pp. 433-442, 1989.
Keywords: Supercomputers, Concurrent programming structures

In this paper we discuss the performance prediction of Fortran constructs commonly found in numerical scientific computing. Although the approach is applicable to multiprocessors in general, within the scope of the paper we will concentrate on the Alliant FX/8 multiprocessor. The techniques proposed involve a combination of empirical observations, architectural models and analytical techniques, and exploit earlier work on data locality analysis and empirical characterization of the behavior of memory systems. The Lawrence Livermore Loops are used as a test-case to verify the approach.
[ics90]
Allen D. Malony, Daniel A. Reed, "A Hardware-based Performance Monitor for the Intel iPSC/2 Hypercube," Proc. 4th International Conference on Supercomputing (ICS'90), pp. 213-226, 1990.
Keywords: performance monitoring, Hypermon

The complexity of parallel computer systems makes a priori performance prediction difficult and experimental performance analysis crucial. A complete characterization of software and hardware dynamics, needed to understand the performance of high-performance parallel systems, requires execution time performance instrumentation. Although software recording of performance data suffices for low frequency events, capture of detailed, high-frequency performance data ultimately requires hardware support if the performance instrumentation is to remain efficient and unobtrusive. This paper describes the design of HYPERMON, a hardware system to capture and record software performance traces generated on the Intel iPSC/2 hypercube. HYPERMON represents a compromise between fully-passive hardware monitoring and software event tracing; software generated events are extracted from each node, timestamped, and externally recorded by HYPERMON. Using an instrumented version of the iPSC/2 operating system and several application programs, we present a performance analysis of an operational HYPERMON prototype and assess the limitations of the current design. Based on these results, we suggest design modifications that should permit capture of event traces from the coming generation of high-performance distributed memory parallel systems.
[ics99]
S. Vajracharya, S. Karmesin, P. Beckman, J. Crotinger, A. Malony, S. Shende, R. Oldehoeft, and S. Smith, "SMARTS: Exploiting Temporal Locality and Parallelism through Vertical Execution," Proceedings of ACM International Conference on Supercomputing (ICS '99), pp. 302-310, 1999.
Keywords: SMARTS, asynchronous, threads, profiling, tracing, vertical execution, data-parallelism, dependence-driven execution, runtime system, barrier synchronization, TAU

In the solution of large-scale numerical problems, parallel computing is becoming simultaneously more important and more difficult. The complex organization of today's multiprocessors with several memory hierarchies has forced the scientific programmer to make a choice between simple but unscalable code and scalable but extremely complex code that does not port to other architectures.

This paper describes how the SMARTS runtime system and the POOMA C++ class library for high-performance scientific computing work together to exploit data parallelism in scientific applications while hiding the details of managing parallelism and data locality from the user. We present innovative algorithms, based on the macro-dataflow model for detecting data parallelism and efficiently executing data-parallel statements on shared-memory multiprocessors. We also describe how these algorithms can be implemented on clusters of SMPs.

[ipdps03]
S. Shende, A. D. Malony, C. Rasmussen, M. Sottile, "A Performance Interface for Component-Based Applications," Proc. International Workshop on Performance Modeling, Evaluation, and Optimization of Parallel and Distributed Systems, IPDPS'03, IEEE Computer Society, 278, 2003.
Keywords: TAU, Component Interface, SIDL, CCAFFEINE, PDT, Babel

This work targets the emerging use of software component technology for high-performance scientific parallel and distributed computing. While component software engineering will benefit the construction of complex science applications, its use presents several challenges to performance optimization. A component application is composed of a set of components, thus, application performance depends on the interaction (possibly non-linear) of the component set. Furthermore, a component is a "binary unit of composition" and the only information users have is the interface the component provides to the outside world. An interface for component performance measurement and query is presented to address optimization issues. We describe the performance component design and an example demonstrating its use for runtime performance tuning.
[ipdps04]
J. Ray, N. Trebon, R. C. Armstrong, S. Shende, and A. Malony, "Performance Measurement and Modeling of Component Applications in a High Performance Computing Environment: A Case Study," Proc. 18th International Parallel and Distributed Processing Symposium (IPDPS'04), IEEE Computer Society, 2004.
Keywords: CCA, Performance modeling, CFRFS Combustion, TAU

We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. Component-based HPC applications allow the possibility of creating component-level performance models and synthesizing them into application performance models. However, they impose the restriction that performance measurement/monitoring needs to be done in a non-intrusive manner and at a fairly coarse-grained level. We propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and construct their performance models. Both computational and message-passing performance are addressed.
[ipps94]
A. Malony, B. Mohr, P. Beckman, D. Gannon, S. Yang, F. Bodin, "Performance Analysis of pC++: A Portable Data-Parallel Programming System for Scalable Parallel Computers," Proceedings of the 8th International Parallel Processing Symposium (IPPS), Cancún, Mexico, April 1994, pp. 75-85.
Keywords: parallel C++, portability, scalability, SPMD, runtime system, concurrency and communication primitives, performance

pC++ is a language extension to C++ designed to allow programmers to compose distributed data structures with parallel execution semantics. These data structures are organized as "concurrent aggregate" collection classes which can be aligned and distributed over the memory hierarchy of a parallel machine in a manner consistent with the High Performance Fortran Forum (HPF) directives for Fortran 90. pC++ allows the user to write portable and efficient code which will run on a wide range of scalable parallel computers.

In this paper, we discuss the performance analysis of the pC++ programming system. We describe the performance tools developed and include scalability measurements for four benchmark programs: a "nearest neighbor" grid computation, a fast Poisson solver, and the "Embar" and "Sparse" codes from the NAS suite. In addition to speedup numbers, we present a detailed analysis highlighting performance issues at the language, runtime system, and target system levels.

[ipps95]
B. Helm, A. D. Malony, S. P. Fickas, "Capturing and Automating Performance Diagnosis: the Poirot Approach," Proc. 9th International Parallel Processing Symposium (IPPS'95), 1995.
Keywords: software performance evaluation, program debugging, diagnostic expert systems, performance diagnosis, Poirot, parallel programming, diagnosis methods, performance tools, performance debugging, knowledge-based diagnosis, software engineering

Performance diagnosis, the process of finding and explaining performance problems, is an important part of parallel programming. Effective performance diagnosis requires that the programmer plan an appropriate method and manage the experiments required by that method. This paper presents Poirot, an architecture to support performance diagnosis. It explains how the architecture helps plan and manage the diagnosis process automatically and adaptably. The paper evaluates the generality and practicality of Poirot by reconstructing diagnosis methods found in several published performance tools.
[iscope99]
T. Sheehan, A. Malony, S. Shende, "A Runtime Monitoring Framework for the TAU Profiling System", Proceedings of the Third International Symposium on Computing in Object-Oriented Parallel Environments (ISCOPE'99), LNCS 1732, Springer, Berlin, pp. 170-181, December 1999.
Keywords: monitor, runtime data access, performance monitoring, parallel execution, performance tools, runtime interaction, Java, TAU, multi-threaded

Applications executing on complex computational systems provide a challenge for the development of runtime performance monitoring software. We discuss a computational model, application monitoring, data access models, and profiler functionality. We define data consistency within and across threads as well as across contexts and nodes. We describe the TAU runtime monitoring framework which enables on-demand, low-interference data access to TAU profile data and provides the flexibility to enforce data consistency at the thread, context or node level. We present an example of a Java-based runtime performance monitor utilizing the framework.
[ishpc02]
J. D. de St. Germain, A. Morris, S. G. Parker, A. D. Malony, and S. Shende, "Integrating Performance Analysis in the Uintah Software Development Cycle," Proceedings of the ISHPC'02 conference, LNCS 2327, Springer, Berlin, pp. 190-206, 2002.
Keywords: Uintah, TAU, MPI, SCIRun, XPARE

Technology for empirical performance evaluation of parallel programs is driven by the increasing complexity of high performance computing environments and programming methodologies. This paper describes the integration of the TAU and XPARE tools in the Uintah computational framework. Performance mapping techniques in TAU relate low-level performance data to higher levels of abstraction. XPARE is used for specifying regression testing benchmarks that are evaluated with each periodically scheduled testing trial. This provides a historical panorama of the evolution of application performance. The paper concludes with a scalability study that shows the benefits of integrating performance technology in the development of large-scale parallel applications.
[ishpc03]
Holger Brunst, Allen D. Malony, Sameer S. Shende, and Robert Bell, "Online Remote Trace Analysis of Parallel Applications on High-Performance Clusters", Proceedings of ISHPC'03 Conference, LNCS 2858, Springer, Berlin, pp. 440-449, 2003.
Keywords: Parallel Computing, Performance Analysis, Performance Steering, Tracing, VNG, TAU

The paper presents the design and development of an online remote trace measurement and analysis system. The work combines the strengths of the TAU performance system with that of the VNG distributed parallel trace analyzer. Issues associated with online tracing are discussed and the problems encountered in system implementation are analyzed in detail. Our approach should port well to parallel platforms. Future work includes testing the performance of the system on large-scale machines.
[istspie95]
Steven T. Hackstadt and Allen D. Malony, "Case Study: Applying Scientific Visualization to Parallel Performance Visualization," Proc. of the IS&T/SPIE Symposium on Electronic Imaging: Science and Technology, Conference on Visual Data Exploration and Analysis, San Jose, CA, February 1995, pp. 238-247.
Keywords: parallel performance visualization, case study, data explorer, scientific visualization

The complexity of parallel programs makes them more difficult to analyze for correctness and efficiency, in part because of the interactions between multiple processors and the volume of data that can be generated. Visualization often helps the programmer in these tasks. This paper focuses on the development of a new technique for constructing, evaluating, and modifying sophisticated, application-specific visualizations for parallel programs and performance data. While most existing tools offer predetermined sets of simple, two-dimensional graphical displays, this environment gives users a high degree of control over visualization development and use, including access to three-dimensional graphics, which remain relatively unexplored in this context.

We have developed an environment that uses the IBM Visualization Data Explorer system to allow new visualizations to be prototyped rapidly, often taking only a few hours to construct totally new views of parallel performance trace data. Yet, access to a robust library of sophisticated graphical techniques is preserved. The burdensome task of explicitly programming the visualizations is completely avoided, and the iterative design, evaluation, and modification of new displays is greatly facilitated.

[iwomp05]
Adnan Salman, Sergei Turovets, Allen Malony, and Vasily Volkov, "Multi-Cluster, Mixed-Mode Computational Modeling of Human Head Conductivity." Presented at IWOMP 2005.
Keywords: MPI, OpenMP, Multi-Cluster, Computational Modeling

A multi-cluster computational environment with mixed-mode (MPI + OpenMP) parallelism for estimation of unknown regional electrical conductivities of the human head, based on realistic geometry from segmented MRI up to 256 voxels resolution, is described. A finite difference multi-component alternating direction implicit (ADI) algorithm, parallelized using OpenMP, is used to solve the forward problem calculation describing the electrical field distribution throughout the head given known electrical sources. A simplex search in the multi-dimensional parameter space of tissue conductivities is conducted in parallel across a distributed system of heterogeneous computational resources. The theoretical and computational formulation of the problem is presented. Results from test studies based on the synthetic data are provided, comparing retrieved conductivities to known solutions from simulation. Performance statistics are also given showing both the scaling of the forward problem and the performance dynamics of the distributed search.
[iwomp06]
A. Morris, A. D. Malony, S. Shende, "Supporting Nested OpenMP Parallelism in the TAU Performance System," (to appear) Proceedings of the IWOMP 2006 Conference, Springer, LNCS, 2007.
Keywords: TAU, OpenMP, Nested Parallelism

Nested OpenMP parallelism allows an application to spawn teams of nested threads. This hierarchical nature of thread creation and usage poses problems for performance measurement tools that must determine thread context to properly maintain per-thread performance data. In this paper we describe the problem and a novel solution for identifying threads uniquely. Our approach has been implemented in the TAU performance system and has been successfully used in profiling and tracing OpenMP applications with nested parallelism. We also describe how extensions to the OpenMP standard can help tool developers uniquely identify threads.
[javagrande01]
S. Shende and A. D. Malony, "Integration and Application of the TAU Performance System in Parallel Java Environments," Proceedings of the Joint ACM Java Grande - ISCOPE 2001 Conference, pp. 87-96, June 2001.
Keywords: TAU, Profiling, Tracing, Java, MPI, JVMPI, Instrumentation, Measurement

Parallel Java environments present challenging problems for performance tools because of Java's rich language system and its multi-level execution platform combined with the integration of native-code application libraries and parallel runtime software. In addition to the desire to provide robust performance measurement and analysis capabilities for the Java language itself, the coupling of different software execution contexts under a uniform performance model needs careful consideration of how events of interest are observed and how cross-context parallel execution information is linked. This paper relates our experience in extending the TAU performance system to a parallel Java environment based on mpiJava. We describe the complexities of the instrumentation model used, how performance measurements are made, and the overhead incurred. A parallel Java application simulating the game of Life is used to show the performance system's capabilities.
[javaics2k]
S. Shende, and A. D. Malony, "Performance Tools for Parallel Java Environments," Proc. Second Workshop on Java for High Performance Computing, ICS 2000, May 2000.
Keywords: parallel, mpiJava, TAU, performance profiling, tracing, MPI

Parallel Java environments present challenging problems for performance tools because of Java's rich language system and its multi-level execution platform combined with the integration of native-code application libraries and parallel runtime software. In addition to the desire to provide robust performance measurement and analysis capabilities for the Java language itself, the coupling of different software execution contexts under a uniform performance model needs careful consideration of how events of interest are observed and how cross-context parallel execution information is linked. This paper relates our experience in extending the TAU performance system to a parallel Java environment based on mpiJava. We describe the instrumentation model used, how performance measurements are made, and the overhead incurred. A parallel Java application simulating the game of Life is used to show the performance system's capabilities.
[lacsi01]
B. Mohr, A. D. Malony, S. Shende, and F. Wolf, "Design and Prototype of a Performance Tool Interface for OpenMP," Proceedings of the LACSI Symposium, 2001.
Keywords: OpenMP, API, POMP, TAU, EXPERT, Performance Tool Interface

This paper proposes a performance tools interface for OpenMP, similar in spirit to the MPI profiling interface in its intent to define a clear and portable API that makes OpenMP execution events visible to runtime performance tools. We present our design using a source-level instrumentation approach based on OpenMP directive rewriting. Rules to instrument each directive and their combination are applied to generate calls to the interface consistent with directive semantics and to pass context information (e.g., source code locations) in a portable and efficient way. Our proposed OpenMP performance API further allows user functions and arbitrary code regions to be marked and performance measurement to be controlled using new OpenMP directives. To prototype the proposed OpenMP performance interface, we have developed compatible performance libraries for the EXPERT automatic event trace analyzer and the TAU performance analysis framework. The directive instrumentation transformations we define are implemented in a source-to-source translation tool called OPARI. Application examples are presented for both EXPERT and TAU to show the OpenMP performance interface and OPARI instrumentation tool in operation. When used together with the MPI profiling interface (as the examples also demonstrate), our proposed approach provides a portable and robust solution to performance analysis of OpenMP and mixed-mode (OpenMP + MPI) applications.
[linux99]
S. Shende, Profiling and Tracing in Linux, Proceedings of the Extreme Linux Workshop #2, USENIX, Monterey CA, June 1999.
Keywords: profiling, performance, tracing, linux, clusters, TAU

Profiling and tracing tools can help make application parallelization more effective and identify performance bottlenecks. Profiling presents summary statistics of performance metrics while tracing highlights the temporal aspect of performance variations, showing when and where in the code performance is achieved. A complex challenge is the mapping of performance data gathered during execution to high-level parallel language constructs in the application source code. Presenting performance data in a meaningful way to the user is equally important. This paper presents a brief overview of profiling and tracing tools in the context of Linux - the operating system most commonly used to build clusters of workstations for high performance computing.
[mmb95]
K. Shanmugam, A. Malony, B. Mohr, "Speedy: An Integrated Performance Extrapolation Tool for pC++ Programs," Proceedings of the Joint Conference PERFORMANCE TOOLS '95 and MMB '95, September 1995, Heidelberg, Germany.
Keywords: performance prediction, extrapolation, object-parallel programming, trace-driven simulation, performance debugging tools, modeling

Performance extrapolation is the process of evaluating the performance of a parallel program in a target execution environment using performance information obtained for the same program in a different environment. Performance extrapolation techniques are suited for rapid performance tuning of parallel programs, particularly when the target environment is unavailable. This paper describes one such technique that was developed for data-parallel C++ programs written in the pC++ language. In pC++, the programmer can distribute a collection of objects to various processors and can have methods invoked on those objects execute in parallel. Using performance extrapolation in the development of pC++ applications allows tuning decisions to be made in advance of detailed execution measurements. The pC++ language system includes TAU, an integrated environment for analyzing and tuning the performance of pC++ programs. This paper presents speedy, a new addition to TAU, that predicts the performance of pC++ programs on parallel machines using extrapolation techniques. Speedy applies the existing instrumentation support of TAU to capture high-level event traces of an n-thread pC++ program run on a uniprocessor machine, together with trace-driven simulation, to predict the performance of the program run on a target n-processor machine. We describe how speedy works and how it is integrated into TAU. We also show how speedy can be used to evaluate a pC++ program for a given target environment.
[mmb95a]
Alois Ferscha and Allen D. Malony, "Performance-Oriented Development of Irregular, Unstructured and Unbalanced Parallel Applications in the N-MAP Environment, " Proc. 8th GI/ITG Conference on Measuring, Modeling and Evaluating Computing and Communication Systems, MMB '95, LNCS 977, Springer, Berlin, pp. 340-356, 1995.
Keywords: Performance Prediction, Parallel Programming, Task Level Parallelism, Irregular Problems, Parallel Simulation, Time Warp, CM-5, Cluster Computing, N-MAP

Performance prediction methods and tools based on analytical models often fail in forecasting the performance of real systems due to inappropriateness of model assumptions, irregularities in the problem structure that cannot be described within the modeling formalism, unstructured execution behavior that leads to unforeseen system states, etc. Prediction accuracy and tractability is acceptable for systems with deterministic operational characteristics, for static, regularly structured problems, and non-changing environments.
[mttcpe94]
Allen D. Malony, Vassilis Mertsiotakis, Andreas Quick, "Automatic Scalability Analysis of Parallel Programs Based on Modeling Techniques," In G. Haring and G. Kotsis, editors, Proc. 7th International Conference on Modeling Techniques and Tools for Computer Performance Evaluation, LNCS, Springer, 1994.
Keywords: scalability analysis, performance modeling, PDL, PEPP

When implementing parallel programs for parallel computer systems the performance scalability of these programs should be tested and analyzed on different computer configurations and problem sizes. Since a complete scalability analysis is too time consuming and is limited to only existing systems, extensions of modeling approaches can be considered for analyzing the behavior of parallel programs under different problem and system scenarios. In this paper, a method for automatic scalability analysis using modeling is presented. Initially, we identify the important problems that arise when attempting to apply modeling techniques to scalability analysis. Based on this study, we define the Parallelization Description Language (PDL) that is used to describe parallel execution attributes of a generic program workload. Based on a parallelization description, stochastic models like graph models or Petri net models can be automatically generated from a generic model to analyze performance for scaled parallel systems as well as scaled input data. The complexity of the graph models produced depends significantly on the type of parallel computation described. We present several computation classes where tractable graph models can be generated and then compare the results of these automatically scaled models with their exact solutions using the PEPP modeling tool.
[para06a]
S. Shende, A. Malony, A. Morris, "Optimization of Instrumentation in Parallel Performance Evaluation Tools," in Proc. PARA 2006 Conference, Springer, LNCS, 2006.
Keywords: Instrument optimization, selective instrumentation, measurement, Performance measurement and analysis, parallel computing

Tools to observe the performance of parallel programs typically employ profiling and tracing as the two main forms of event-based measurement models. In both of these approaches, the volume of performance data generated and the corresponding perturbation encountered in the program depend upon the amount of instrumentation in the program. To produce accurate performance data, tools need to control the granularity of instrumentation. In this paper, we describe our experiences in the TAU performance system for improving the accuracy of performance data by limiting the amount of instrumentation. A range of options are provided to optimize instrumentation based on the structure of the program, event generation rates, and historical performance data gathered from prior executions.
[para06b]
S. Shende, A. Malony, A. Morris, "Workload Characterization using the TAU Performance System," in Proc. of PARA 2006 Conference, Springer, LNCS, 2006.
Keywords: Performance mapping, measurement, instrumentation, performance evaluation, workload characterization

Workload characterization is an important technique that helps us understand the performance of parallel applications and the demands they place on the system. Each application run is profiled using instrumentation at the MPI library level. Characterizing the performance of the MPI library based on the sizes of messages helps us understand how the performance of an application is affected based on messages of different sizes. Partitioning of the time spent in MPI routines based on the type of MPI operation and the message size involved requires a two level mapping of performance data. This paper describes how performance mapping is implemented in the TAU performance system to support workload characterization.
[para98]
Sameer Shende, Steven T. Hackstadt, and Allen D. Malony, "Dynamic Performance Callstack Sampling: Merging TAU and DAQV-II," Proceedings of the Fourth International Workshop on Applied Parallel Computing (PARA98), June 14-17, 1998, LNCS 1541, Springer, Berlin, pp. 515-520, 1998.
Keywords: monitoring, performance, callstack, sampling, profiling, TAU, DAQV, parallel execution, performance tools, runtime interaction, C++

Observing the performance of an application at runtime requires economy in what performance data is measured and accessed, and flexibility in changing the focus of performance interest. This paper describes the performance callstack as an efficient performance view of a running program which can be retrieved and controlled by external analysis tools. The performance measurement support is provided by the TAU profiling library whereas tool-program interaction support is available through the DAQV framework. How these systems are merged to provide dynamic performance callstack sampling is discussed.
[parco03]
A. D. Malony, S. Shende, and R. Bell, "Online Performance Observation of Large-Scale Parallel Applications," Proc. Parco 2003 Symposium, in "Parallel Computing: Software Technology, Algorithms, Architectures and Applications," (Eds. G. R. Joubert, W. E. Nagel, F. J. Peters, and W. V. Walter), Advances in Parallel Computing, Vol. 13, Elsevier B.V., pp. 761-768, 2004.
Keywords: Paraprof, Parvis, TAU, performance analysis, large-scale, parallel computing

Parallel performance tools offer insights into the execution behavior of an application and are a valuable component in the cycle of application development, deployment, and optimization. However, most tools do not work well with large-scale parallel applications where the performance data generated comes from upwards of thousands of processes. As parallel computer systems increase in size, the scaling of performance observation infrastructure becomes an important concern. In this paper, we discuss the problem of scaling and performance observation, and the ramifications of adding online support. A general online performance system architecture is presented. Recent work on the TAU performance system to enable large-scale performance observation and analysis is discussed. The paper concludes with plans for future work.
[parco03b]
H. Brunst, W. Nagel, "Scalable Performance Analysis of Parallel Systems: Concepts and Experiences," Proc. PARCO 2003 Conference, in (J. Joubert, W. Nagel, F. Peters, W. Walter, eds.), "Parallel Computing: Software Technology, Algorithms, Architectures and Applications," Advances in Parallel Computing, Vol. 13, Elsevier, pp. 737-744, 2004.
Keywords: Parallel Computing, Performance Analysis, Tracing, Profiling, Clusters, VampirServer, VNG, OTF

We have developed a distributed service architecture and an integrated parallel analysis engine for scalable trace-based performance analysis. Our combined approach permits handling of very large performance data volumes in real time. Unlike traditional analysis tools that do their job sequentially on an external desktop platform, our approach leaves the data at its origin and seamlessly integrates the time-consuming analysis as a parallel job into the high performance production environment.
[parco05]
A. D. Malony, S. Shende, and A. Morris, "Phase-Based Parallel Performance Profiling," (to appear) Proc. of PARCO 2005 conference.
Keywords: TAU, Performance measurement and analysis, parallel computing, profiling, phases

Parallel scientific applications are designed based on structural, logical, and numerical models of computation and correctness. When studying the performance of these applications, especially on large-scale parallel systems, there is a strong preference among developers to view performance information with respect to their mental model of the application, formed from the model semantics used in the program. If the developer can relate performance data measured during execution to what they know about the application, more effective program optimization may be achieved. This paper considers the concept of phases and its support in parallel performance measurement and analysis as a means to bridge the gap between high-level application semantics and low-level performance data. In particular, this problem is studied in the context of parallel performance profiling. The implementation of phase-based parallel profiling in the TAU parallel performance system is described and demonstrated for the NAS parallel benchmarks and MFIX application.
[parco07b]
K. A. Huck, A. D. Malony, S. Shende, and A. Morris, "Scalable, Automated Performance Analysis with TAU and PerfExplorer", in Parallel Computing (ParCo2007), (Aachen, Germany), 2007.
Keywords: performance evaluation, TAU, PerfExplorer, performance scripting, metadata

Scalable performance analysis is a challenge for parallel development tools. The potential size of data sets and the need to compare results from multiple experiments presents a challenge to manage and process the information, and to characterize the performance of parallel applications running on potentially hundreds of thousands of processor cores. In addition, many exploratory analysis processes represent potentially repeatable processes which can and should be automated. In this paper, we will discuss the current version of PerfExplorer, a performance analysis framework which provides dimension reduction, clustering and correlation analysis of individual trials of large dimensions, and can perform relative performance analysis between multiple application executions. PerfExplorer analysis processes can be captured in the form of Python scripts, automating what would otherwise be time-consuming tasks. We will give examples of large-scale analysis results, and discuss the future development of the framework, including the encoding and processing of expert performance rules, and the increasing use of performance metadata.
[parco11]
S. Shende, A. D. Malony, W. Spear, and K. Schuchardt, "Characterizing I/O Performance Using the TAU Performance System." Presented at the ParCo 2011 Conference, Exascale Mini-symposium.
Keywords: POSIX I/O, MPI-IO, TAU, Instrumentation, GCRM

TAU is an integrated toolkit for performance instrumentation, measurement, and analysis. It provides a flexible, portable, and scalable set of technologies for performance evaluation on extreme-scale HPC systems. This paper describes alternatives for I/O instrumentation provided by TAU and the design and implementation of a new tool, tau_gen_wrapper, to wrap external libraries. It describes three instrumentation techniques - preprocessor based substitution, linker based instrumentation, and library preloading based replacement of routines. It demonstrates this wrapping technology in the context of intercepting the POSIX I/O library and its application to profiling I/O calls for the Global Cloud Resolution Model (GCRM) application on the Cray XE6 system. This scheme allows TAU to track I/O using linker level instrumentation for statically linked executables and attribute the I/O to specific code regions. It also addresses issues encountered in collecting the performance data from large core counts and representing this data to correctly identify sources of poor I/O performance.
[parle94]
Steven T. Hackstadt and Allen D. Malony, Next-Generation Parallel Performance Visualization: A Prototyping Environment for Visualization Development, Proc. of the Parallel Architectures and Languages Europe (PARLE) Conference, Athens, Greece, July 1994, pp. 192-201. Also available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-93-23, October 1993.
Keywords: parallel performance visualization, scientific visualization, visualization prototyping

A new design process for the development of parallel performance visualizations that uses existing scientific data visualization software is presented. Scientific visualization tools are designed to handle large quantities of multi-dimensional data and create complex, three-dimensional, customizable displays which incorporate advanced rendering techniques, animation, and display interaction. Using a design process that leverages these tools to prototype new performance visualizations can lead to drastic reductions in the graphics and data manipulation programming overhead currently experienced by performance visualization developers. The process evolves from a formal methodology that relates performance abstractions to visual representations. Under this formalism, it is possible to describe performance visualizations as mappings from performance objects to view objects, independent of any graphical programming. Implementing this formalism in an existing data visualization system leads to a visualization prototype design process consisting of two components corresponding to the two high-level abstractions of the formalism: a trace transformation (i.e., performance abstraction) and a graphical transformation (i.e., visual abstraction). The trace transformation changes raw trace data to a format readable by the visualization software, and the graphical transformation specifies the graphical characteristics of the visualization. This prototyping environment also facilitates iterative design and evaluation of new and existing displays. Our work examines how an existing data visualization tool, IBM's Data Explorer in particular, can provide a robust prototyping environment for next-generation parallel performance visualization.
[pcfd05]
S. Shende, A. Malony, A. Morris, S. Parker, J. Davison de St. Germain, "Performance Evaluation of Adaptive Scientific Applications using TAU," Parallel Computational Fluid Dynamics: Theory and Applications, Proceedings of the 2005 International Conference on Parallel Computational Fluid Dynamics, May 24-27.
Keywords: TAU, Performance Evaluation, CFD, Uintah Computational Framework, Phases

Fueled by increasing processor speeds and high speed interconnection networks, advances in high performance computer architectures have allowed the development of increasingly complex large scale parallel systems. For computational scientists, programming these systems efficiently is a challenging task. Understanding the performance of their parallel applications is equally daunting. To observe and comprehend the performance of parallel applications that run on these systems, we need performance evaluation tools that can map the performance abstractions to the user's mental models of application execution. For instance, most parallel scientific applications are iterative in nature. In the case of CFD applications, they may also dynamically adapt to changes in the simulation model. A performance measurement and analysis system that can differentiate the phases of each iteration and characterize performance changes as the application adapts will enable developers to better relate performance to their application behavior. In this paper, we present new performance measurement techniques to meet these needs. In section 2, we describe our parallel performance system, TAU. Section 3 discusses how new TAU profiling techniques can be applied to CFD applications with iterative and adaptive characteristics. In section 4, we present a case study featuring the Uintah computational framework and explain how adaptive computational fluid dynamics simulations are observed using TAU. Finally, we conclude with a discussion of how the TAU performance system can be
[pdpta01]
S. Shende, A. D. Malony, R. Ansell-Bell, "Instrumentation and Measurement Strategies for Flexible and Portable Empirical Performance Evaluation," Proceedings Tools and Techniques for Performance Evaluation Workshop, PDPTA'01, CSREA, Vol. 3, pp. 1150-1156, June 2001.
Keywords: TAU, instrumentation, measurement, DyninstAPI, MPI

Flexibility and portability are important concerns for productive empirical performance evaluation. We claim that these features are best supported by robust instrumentation and measurement strategies, and their integration. Using the TAU performance system as an exemplar performance toolkit, a case study in performance evaluation is considered. Our goal is both to highlight flexibility and portability requirements and to consider how instrumentation and measurement techniques can address them. The main contribution of the paper is methodological, in its advocacy of a guiding principle for tool development and enhancement. Recent advancements in the TAU system are described from this perspective.
[pmvpmvps93]
Allen D. Malony, Gregory V. Wilson, "Future directions in parallel performance environments," Proceedings of the Workshop on Performance Measurement and Visualization of Parallel Systems, Elsevier Science Publishers B.V., Amsterdam, pp. 331-351, 1993.
Keywords: processor architectures, parallel programming, performance measurement

The increasing complexity of parallel computing systems has brought about a crisis in parallel performance evaluation and tuning. Although there have been important advances in performance tools in recent years, we believe that future parallel performance environments will move beyond these tools by integrating performance instrumentation with compilers for architecture-independent languages, by formalizing the relationship between performance views and the data they represent, and by automating some aspects of performance interpretation. This paper describes these directions from the perspective of research projects that have been recently undertaken.
[ppopp91]
Allen D. Malony, "Event-based Performance Perturbation: a Case Study," Proc. third ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPOPP'91), pp. 201-212, 1991.
Keywords: performance perturbation, performance measurement, event-based analysis, uncertainty principle

Determining the performance behavior of parallel computations requires some form of intrusive tracing measurement. The greater the need for detailed performance data, the more intrusion the measurement will cause. Recovering actual execution performance from perturbed performance measurements using event-based perturbation analysis is the topic of this paper. We show that the measurement and subsequent analysis of synchronization operations (particularly, advance and await) can produce, in practice, accurate approximations to actual performance behavior. We use as testcases three Lawrence Livermore loops that execute as parallel DOACROSS loops on an Alliant FX/80. The results of our experiments suggest that a systematic application of performance perturbation analysis techniques will allow more detailed, accurate instrumentation than traditionally believed possible.
[ppopp93]
Sekhar R. Sarukkai, Allen D. Malony, "Perturbation Analysis of High Level Instrumentation for SPMD Programs," Proc. fourth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP'93), pp. 44-53, 1993.
Keywords: perturbation analysis, performance measurement

The process of instrumenting a program to study its behavior can lead to perturbations in the program's execution. These perturbations can become severe for large parallel systems or problem sizes, even when one captures only high level events. In this paper, we address the important issue of eliminating execution perturbations caused by high-level instrumentation of SPMD programs. We will describe perturbation analysis techniques for common computation and communication measurements, and show examples which demonstrate the effectiveness of these techniques in practice.
[proper10]
C. W. Lee, A. D. Malony, and A. Morris, "TAUmon: Scalable Online Performance Data Analysis in TAU", in 3rd Workshop on Productivity and Performance (PROPER 2010), 2010.
Keywords: TAU, scalability, performance analysis tools, online monitoring

In this paper, we present an update on the scalable online support for performance data analysis and monitoring in TAU. Extending on our prior work with TAUoverSupermon and TAUoverMRNet, we show how online analysis operations can also be supported directly and scalably using the parallel infrastructure provided by an MPI application instrumented with TAU. We also report on efforts to streamline and update TAUoverMRNet. Together, these approaches form the basis for the investigation of online analysis capabilities in a TAU monitoring framework, TAUmon. We discuss various analysis operations and capabilities enabled by online monitoring and how operations like event unification enable merged profiles to be produced with greatly reduced data volume just prior to the end of application execution. Scaling results with PFLOTRAN on the Cray XT5 and BG/P are presented along with a look at some initial performance information generated from FLASH and PFLOTRAN through our TAUmon prototype frameworks.
[proper11]
W. Spear, A. D. Malony, C. W. Lee, S. Biersdorff, S. Shende. "An Approach to Creating Performance Visualizations in a Parallel Profile Analysis Tool." Presented at the Workshop on Productivity and Performance (PROPER 2011), August 2011.
Keywords: TAU, ParaProf, Topology, Performance visualization

With increases in the scale of parallelism, the dimensionality and complexity of parallel performance measurements have placed greater challenges on analysis tools. Performance visualization can assist in understanding performance properties and relationships. However, the creation of new visualizations in practice is not supported by existing parallel profiling tools. Users must work with the presentation types provided by a tool and have limited means to change their design. Here we present an approach for creating new performance visualizations within an existing parallel profile analysis tool. The approach separates visual layout design from the underlying performance data model, making custom visualizations such as performance over system topologies straightforward to implement and adjust for various use cases.
[psc94]
A. Malony, B. Mohr, P. Beckman, D. Gannon, Program Analysis and Tuning Tools for a Parallel Object Oriented Language: An Experiment with the TAU System, Proceedings of the Workshop on Parallel Scientific Computing, Cape Cod, MA, October 1994.
Keywords:
[sc00]
K. A. Lindlan, J. Cuny, A. D. Malony, S. Shende, B. Mohr, R. Rivenburgh, C. Rasmussen. "A Tool Framework for Static and Dynamic Analysis of Object-Oriented Software with Templates." Proceedings of SC2000: High Performance Networking and Computing Conference, Dallas, November 2000.
Keywords: Program Database Toolkit, PDT, static analysis, dynamic analysis, object-oriented, templates, IL Analyzer, DUCTAPE, TAU, SILOON

The developers of high-performance scientific applications often work in complex computing environments that place heavy demands on program analysis tools. The developers need tools that interoperate, are portable across machine architectures, and provide source-level feedback. In this paper, we describe a tool framework, the Program Database Toolkit (PDT), that supports the development of program analysis tools meeting these requirements. PDT uses compile-time information to create a complete database of high-level program information that is structured for well-defined and uniform access by tools and applications. PDT's current applications make heavy use of advanced features of C++, in particular, templates. We describe the toolkit, focusing on its most important contribution -- its handling of templates -- as well as its use in existing applications.
[sc05]
K. A. Huck and A. D. Malony, "PerfExplorer: A Performance Data Mining Framework for Large-Scale Parallel Computing," in Proc. of SC 2005 Conference, ACM, 2005.
Keywords: performance data mining, PerfExplorer, TAU, PerfDMF, R, Weka

Parallel applications running on high-end computer systems manifest a complexity of performance phenomena. Tools to observe parallel performance attempt to capture these phenomena in measurement datasets rich with information relating multiple performance metrics to execution dynamics and parameters specific to the application-system experiment. However, the potential size of datasets and the need to assimilate results from multiple experiments makes it a daunting challenge to not only process the information, but discover and understand performance insights. In this paper, we present PerfExplorer, a framework for parallel performance data mining and knowledge discovery. The framework architecture enables the development and integration of data mining operations that will be applied to large-scale parallel performance profiles. PerfExplorer operates as a client-server system and is built on a robust parallel performance database (PerfDMF) to access the parallel profiles and save its analysis results. Examples are given demonstrating these techniques for performance analysis of ASCI applications.
[sc05b]
Li Li, Allen D. Malony, "Knowledge Engineering for Model-based Parallel Performance Diagnosis," Poster, Computer and Information Science Department, University of Oregon, Eugene, OR.
Keywords: performance knowledge, performance diagnosis
[sc05c]
K. Karavanic, J. May, K. Mohror, B. Miller, K. Huck, R. Knapp, and B. Pugh, "Integrating Database Technology with Comparison-Based Parallel Performance Diagnosis: The Perftrack Performance Experiment Management Tool", in International Conference for High Performance Computing, Networking, Storage and Analysis (SC'05), (Washington, DC, USA), IEEE Computer Society, 2005.
Keywords: TAU, Perftrack, experiment management, performance diagnosis

PerfTrack is a data store and interface for managing performance data from large-scale parallel applications. Data collected in different locations and formats can be compared and viewed in a single performance analysis session. The underlying data store used in PerfTrack is implemented with a database management system (DBMS). PerfTrack includes interfaces to the data store and scripts for automatically collecting data describing each experiment, such as build and platform details. We have implemented a prototype of PerfTrack that can use Oracle or PostgreSQL for the data store. We demonstrate the prototype's functionality with three case studies: the first is a comparative study of an ASC Purple benchmark on high-end Linux and AIX platforms; the second is a parameter study conducted at Lawrence Livermore National Laboratory (LLNL) on two high-end platforms, a 128-node cluster of IBM Power 4 processors and BlueGene/L; the third demonstrates incorporating performance data from the Paradyn Parallel Performance Tool into an existing PerfTrack data store.
[sc07]
A. Nataraj, A. Morris, A. Malony, M. Sottile, P. Beckman, "The Ghost in the Machine: Observing the Effects of Kernel Operation on Parallel Application Performance." Supercomputing Conference 2007.
Keywords: kernel, operating system noise, interference, integrated measurement, KTAU, TAU, compensation, tracing

The performance of a parallel application on a scalable HPC system is determined by user-level execution of the application code and system-level (OS kernel) operations. To understand the influences of system-level factors on application performance, the measurement of OS kernel activities is key. We describe a technology to observe kernel actions and make this information available to application-level performance measurement tools. The benefits of merged application and OS performance information and its use in parallel performance analysis are demonstrated, both for profiling and tracing methodologies. In particular, we focus on the problem of kernel noise assessment as a stress test of the approach. We show new results for characterizing noise and introduce new techniques for evaluating noise interference and its effects on application execution. Our kernel measurement and noise analysis technologies are being developed as part of Linux OS environments for scalable parallel systems.
[sc08]
K. A. Huck, O. Hernandez, V. Bui, S. Chandrasekaran, B. Chapman, A. D. Malony, L. C. McInnes, and B. Norris, "Capturing Performance Knowledge for Automated Analysis", in International Conference for High Performance Computing, Networking, Storage and Analysis (SC'08), 2008.
Keywords: TAU, performance knowledge, automated analysis, compiler-based instrumentation, compiler optimizations

Automating the process of parallel performance experimentation, analysis, and problem diagnosis can enhance environments for performance-directed application development, compilation, and execution. This is especially true when parametric studies, modeling, and optimization strategies require large amounts of data to be collected and processed for knowledge synthesis and reuse. This paper describes the integration of the PerfExplorer performance data mining framework with the OpenUH compiler infrastructure. OpenUH provides autoinstrumentation of source code for performance experimentation and PerfExplorer provides automated and reusable analysis of the performance data through a scripting interface. More importantly, PerfExplorer inference rules have been developed to recognize and diagnose performance characteristics important for optimization strategies and modeling. Three case studies are presented which show our success with automation in OpenMP and MPI code tuning, parametric characterization, and power modeling. The paper discusses how the integration supports performance knowledge engineering across applications and feedback-based compiler optimization in general.
[sc2001]
H. Truong, T. Fahringer, G. Madsen, A. Malony, H. Moritsch, and S. Shende, "On Using SCALEA for Performance Analysis of Distributed and Parallel Programs," Proceedings of SC'2001 conference, Nov. 2001.
Keywords: Performance analysis, performance overhead classification, distributed and parallel systems, SCALEA, TAU, OpenMP

In this paper we give an overview of SCALEA, which is a new performance analysis tool for OpenMP, MPI, HPF, and mixed parallel/distributed programs. SCALEA instruments, executes and measures programs and computes a variety of performance overheads based on a novel overhead classification. Source code and HW-profiling are combined in a single system which significantly extends the scope of possible overheads that can be measured and examined, ranging from HW-counters, such as the number of cache misses or floating point operations, to more complex performance metrics, such as control or loss of parallelism. Moreover, SCALEA uses a new representation of code regions, called the dynamic code region call graph, which enables detailed overhead analysis for arbitrary code regions. An instrumentation description file is used to relate performance information to code regions of the input program and to reduce instrumentation overhead. Several experiments with realistic codes that cover MPI, OpenMP, HPF and mixed OpenMP/MPI codes demonstrate the usefulness of SCALEA.
[sc2003.poster]
Sophia Lefantzi, Jaideep Ray, and Sameer Shende, "Strong Scalability Analysis and Performance Evaluation of a SAMR CCA-based Reacting Flow Code," Poster, SC2003 Conference, Nov. 2003.
Keywords: CCA, Performance modeling, CFRFS Combustion, TAU

Simulations on structured adaptively refined meshes (SAMR) pose unique problems in the context of performance evaluation and modeling. Adaptively refined meshes aim to concentrate grid points in regions of interest while leaving the bulk of the domain sparsely tessellated. Structured adaptively refined meshes achieve this by having overlaid grids of different refinement. Numerical algorithms employing explicit multi-rate time-stepping methods apply a computational "kernel" to the finer meshes at a higher frequency than at the coarser meshes. Each application of the kernel at a given level of refinement is followed up by a communication step where data is exchanged with neighboring subdomains. The SAMR approach is adaptive, i.e., its characteristics change as the simulation evolves in time. Thus, scalability depends on the number of processors and the time-integrated effect of the physics of the problem. The time-integrated effect renders the estimation of a general metric of scalability difficult and often impossible. Generally, as reported in the literature, for realistic problems and configurations, SAMR simulations do not scale well. For this work we analyzed two different hydrodynamic problems and present how communication costs scale with various aspects of the domain decomposition. Approach: The codes that we analyzed solve PDEs to simulate reactive flows and flows with shock waves. The codes were run until the incremental decrease in run times (with increasing processors) approached zero. It was found that the nature of the problem changed vastly during the run - even runs which showed poor scaling had periods of evolution where the domain decomposition showed "good" scaling characteristics, i.e., compute loads were higher than communication loads. The computational load was found to be evenly balanced across the processors - the lack of scalability was due to the dominance of communication and synchronization costs over computational costs.
We identified and analyzed phases in the evolution of the problem where the simulation exhibited good and bad scaling. Communication costs were analyzed with respect to the levels of refinement of the grid as well as the data-exchange radius for each of the runs. This is a thorough performance analysis of SAMR hydrodynamics codes, performed for the first time in CCA-compliant codes, tackling the time-dependent nature of the communication overheads. Both the codes that we analyzed employ the Common Component Architecture (CCA) paradigm and were run within the CCAFFEINE framework. The adaptive mesh package used (that performs the bulk of the communications) was GrACE (Rutgers, The State University of New Jersey). The measurements were performed using the CCA version of TAU (Tuning and Analysis Utilities). The tests were performed on "platinum" at NCSA (University of Illinois, Urbana Champaign), a Linux cluster of dual-node Pentium III 1 GHz processors, connected via a Myrinet interconnect. Visual: As a part of the visual presentation, we will present a color poster with our performance analysis results and hold a demonstration of the composition and execution of CCA codes. Animations of the adaptively refined grid will also be shown.
[sc90]
Sanjay Sharma, Allen D. Malony, Michael W. Berry, Priyamvada Sinvhal-Sharma, "Run-time monitoring of concurrent programs on the Cedar multiprocessor," Proc. 1990 conference on Supercomputing, pp. 784-793, 1990
Keywords: Cedar, Tracing, processor architectures

The ability to understand the behavior of concurrent programs depends greatly on the facilities available to monitor execution and present the results to the user. Beyond the basic profiling tools that collect data for post-mortem viewing, explorative use of multiprocessor computer systems demands a dynamic monitoring environment capable of providing run-time access to program performance. A prototype of such an environment has been built for the Cedar multiprocessor. This paper describes the design of the infrastructure enabling run-time monitoring of parallel Cedar applications and the communication of execution data among physically distributed machines. An application for matrix visualization is used to highlight important aspects of the system.
[sc90b]
Allen D. Malony, John L. Larson, Daniel A. Reed, "Tracing Application Program Execution on the Cray X-MP and Cray 2," Proc. of the 1990 conference on Supercomputing, pp. 60-73, 1990.
Keywords: Tracing

Important insights into program operation can be gained by observing dynamic execution behavior. Unfortunately, many high-performance machines provide execution profile summaries as the only tool for performance investigation. We have developed a tracing library for the Cray X-MP and Cray 2 supercomputers that supports the low-overhead capture of execution events for sequential and multitasked programs. This library has been extended to use the automatic instrumentation facilities on these machines, allowing trace data from routine entry and exit, and other program segments, to be captured. To assess the utility of the trace-based tools, three of the Perfect Benchmark codes have been tested in scalar and vector modes with the tracing instrumentation. In addition to computing summary execution statistics from the traces, interesting execution dynamics appear when studying the trace histories. It is also possible to compare codes across the two architectures by correlating the event traces. Our conclusion is that adding tracing support in Cray supercomputers can have significant returns in improved performance characterization and evaluation.
[sc92]
Allen D. Malony, "Supercomputing Around the World," Proc. 1992 ACM/IEEE Conference on Supercomputing (mini symposium), pp. 126-129, 1992.
Keywords: Supercomputing

Supercomputing is rapidly becoming a global phenomenon. In keeping with the Voyages of Discovery theme of the Supercomputing 92 conference, representatives of supercomputing endeavors from around the world meet in this mini-symposium to speak on national and international supercomputing activities.
[sc93]
F. Bodin, P. Beckman, D. Gannon, S. Yang, S. Kesavan, A. Malony, B. Mohr, Implementing a Parallel C++ Runtime System for Scalable Parallel Systems, Proceedings of the 1993 Supercomputing Conference, Portland, Oregon, November 1993, pp. 588-597.
Keywords: parallel C++, portability, scalability, SPMD, runtime system, concurrency and communication primitives, performance

pC++ is a language extension to C++ designed to allow programmers to compose "concurrent aggregate" collection classes which can be aligned and distributed over the memory hierarchy of a parallel machine in a manner modeled on the High Performance Fortran Forum (HPFF) directives for Fortran 90. pC++ allows the user to write portable and efficient code which will run on a wide range of scalable parallel computer systems. The first version of the compiler is a preprocessor which generates Single Program Multiple Data (SPMD) C++ code. Currently, it runs on the Thinking Machines CM-5, the Intel Paragon, the BBN TC2000, the Kendall Square Research KSR-1, and the Sequent Symmetry. In this paper we describe the implementation of the runtime system, which provides the concurrency and communication primitives between objects in a distributed collection. To illustrate the behavior of the runtime system we include a description and performance results on four benchmark programs.
[sc98a]
Christopher W. Harrop, Steven T. Hackstadt, Janice E. Cuny, Allen D. Malony, and Laura S. Magde, Supporting Runtime Tool Interaction for Parallel Simulations, Proceedings of Supercomputing '98 (SC98), Orlando, FL, November 7-13, 1998 (Best Student Paper Finalist).
Keywords: runtime interaction, computational steering, matlab

Scientists from many disciplines now routinely use modeling and simulation techniques to study physical and biological phenomena. Advances in high-performance architectures and networking have made it possible to build complex simulations with parallel and distributed interacting components. Unfortunately, the software needed to support such complex simulations has lagged behind hardware developments. We focus here on one aspect of such support: runtime program interaction. We have developed a runtime interaction framework and we have implemented a specific instance of it for an application in seismic tomography. That instance, called TierraLab, extends the geoscientists' existing (legacy) tomography code with runtime interaction capabilities which they access through a MATLAB interface. The scientist can stop a program, retrieve data, analyze and visualize that data with existing MATLAB routines, modify the data, and resume execution. They can do this all within a familiar MATLAB-like environment without having to be concerned with any of the low-level details of parallel or distributed data distribution. Data distribution is handled transparently by the Distributed Array Query and Visualization (DAQV) system. Our framework allows scientists to construct and maintain their own customized runtime interaction system.

[sc98b]
Jenifer L. Skidmore, Matthew J. Sottile, Janice E. Cuny, and Allen D. Malony, A Prototype Notebook-Based Environment for Computational Tools, Proceedings of Supercomputing '98, Orlando, FL, November 1998.
Keywords: electronic notebook, distributed computing, computational science, heterogeneous, tools, World Wide Web, collaboration

The Virtual Notebook Environment (ViNE) is a platform-independent, web-based interface designed to support a range of scientific activities across distributed, heterogeneous computing platforms. ViNE provides scientists with a web-based version of the common paper-based lab notebook, but in addition, it provides support for collaboration and management of computational experiments. Collaboration is supported by the web-based approach, which makes notebook material generally accessible, and by a hierarchy of security mechanisms that screen that access. ViNE provides uniform, system-transparent access to data, tools, and programs throughout the scientist's computing infrastructure. Computational experiments can be launched from ViNE using a visual specification language. The scientist is freed from concerns about inter-tool connectivity, data distribution, or data management details. ViNE also provides support for dynamically linking analysis results back into the notebook content.

In this paper we present the ViNE system architecture and a case study of its use in neuropsychology research at the University of Oregon. Our case study with the Brain Electrophysiology Laboratory (BEL) addresses their need for data security and management, collaborative support, and distributed analysis processes. The current version of ViNE is a prototype system being tested with this and other scientific applications.

[scidac05]
P. Worley, J. Candy, L. Carrington, K. Huck, T. Kaiser, G. Mahinthakumar, A. Malony, S. Moore, D. Reed, P. Roth, H. Shan, S. Shende, A. Snavely, S. Sreepathi, F. Wolf, and Y. Zhang, "Performance Analysis of GYRO: A Tool Evaluation," Poster, Scientific Discovery through Advanced Computing Conference, (SciDAC 2005), 2005.
Keywords: Performance evaluation, PERC, TAU, SvPablo, Kojak, HPM, PMaC

The performance of the Eulerian gyrokinetic-Maxwell solver code GYRO is analyzed on five high performance computing systems. First, a manual approach is taken, using custom scripts to analyze the output of embedded wallclock timers, floating point operation counts collected using hardware performance counters, and traces of user and communication events collected using the profiling interface to Message Passing Interface (MPI) libraries. Parts of the analysis are then repeated or extended using a number of sophisticated performance analysis tools: IPM, KOJAK, SvPablo, TAU, and the PMaC modeling tool suite. The paper briefly discusses what has been discovered via this manual analysis process, what performance analyses are inconvenient or infeasible to attempt manually, and to what extent the tools show promise in accelerating or significantly extending the manual performance analyses.
[shpcc94]
Steven T. Hackstadt, Allen D. Malony, and Bernd Mohr, Scalable Performance Visualization for Data-Parallel Programs, Proc. of the Scalable High Performance Computing Conference (SHPCC), Knoxville, TN, May 1994, pp. 342-349. Also available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-94-09, March 1994.
Keywords: scalable performance visualization, scientific visualization, pC++, data-parallel programming

Developing robust techniques for visualizing the performance behavior of parallel programs that can scale in problem size and/or number of processors remains a challenge. In this paper, we present several performance visualization techniques based on the context of data-parallel programming and execution that demonstrate good visual scalability properties. These techniques are a result of utilizing the structural and distribution semantics of data-parallel programs as well as sophisticated three-dimensional graphics. A categorization and examples of scalable performance visualizations are given for programs written in Dataparallel C and pC++.
[sigmetrics87]
Daniel A. Reed, Allen D. Malony, Bradley D. McCredie, "Parallel Discrete Event Simulation: a Shared Memory Approach," Proc. 1987 ACM SIGMETRICS conference on Measurement and Modeling of Computer Systems, 15(1), pp. 36-38, 1987.
Keywords: discrete event simulation

The inherently sequential nature of event list manipulation limits the potential parallelism of standard simulation models. Although techniques for performing event list manipulation and event simulation in parallel have been suggested, large scale performance increases seem unlikely. Only by eliminating the event list, in its traditional form, can additional parallelism be obtained; this is the goal of distributed simulation. Several distributed simulation techniques have been proposed. In the remainder of this abstract, we present the Chandy-Misra distributed simulation algorithm and the results of an extensive study of its performance on a shared memory parallel processor when simulating queueing network models.
[sigmetrics95]
Allen D. Malony, "Data Interpretation and Experiment Planning in Performance Tools," Joint International Conference on Measurement and Modeling of Computer Systems, Proceedings of the 1995 ACM SIGMETRICS joint international conference on Measurement and Modeling of Computer Systems, pp. 62-63, 1995.
Keywords: performance measurement

The parallel scientific computing community is placing increasing emphasis on portability and scalability of programs, languages, and architectures. This creates new challenges for developers of parallel performance analysis tools, who will have to deal with increasing volumes of performance data drawn from diverse platforms. One way to meet this challenge is to incorporate sophisticated facilities for data interpretation and experiment planning within the tools themselves, giving them increased flexibility and autonomy in gathering and selecting performance data. This panel discussion brings together four research groups that have made advances in this direction.
[spdt96]
S. Shende, J. Cuny, L. Hansen, J. Kundu, S. McLaughry and O. Wolf, Event and State-Based Debugging in TAU: A Prototype, Proceedings of ACM SIGMETRICS Symposium on Parallel and Distributed Tools (SPDT '96), May, 1996, pp. 21-30.
Keywords: event-based debugging, state-based debugging, pC++, TAU, Ariadne

Parallel programs are complex and often require a multilevel debugging strategy that combines both event- and state-based debugging. We report here on preliminary work that combines these approaches within the TAU program analysis environment for pC++. This work extends the use of event-based modeling to object-parallel languages, provides an alternative mechanism for establishing meaningful global breakpoints in object-oriented languages, introduces the TAU program interaction and control infrastructure, and provides an environment for the assessment of mixed event- and state-based strategies.
[spdt98a]
S. Shende, A. D. Malony, J. Cuny, K. Lindlan, P. Beckman and S. Karmesin, Portable Profiling and Tracing for Parallel Scientific Applications using C++, Proceedings of ACM SIGMETRICS Symposium on Parallel and Distributed Tools (SPDT '98), August, 1998, pp. 134-145.
Keywords: performance, profiling, tracing, C++, parallel, TAU

Performance measurement of parallel, object-oriented (OO) programs requires the development of instrumentation and analysis techniques beyond those used for more traditional languages. Performance events must be redefined for the conceptual OO programming model, and those events must be instrumented and tracked in the context of OO language abstractions, compilation methods, and run-time execution dynamics. In this paper, we focus on the profiling and tracing of C++ applications that have been written using a rich parallel programming framework for high-performance, scientific computing. We address issues of class-based profiling, instrumentation of templates, runtime function identification, and polymorphic (type-based) profiling. Our solutions are implemented in the TAU portable profiling package which also provides support for profiling groups and user-level timers. We demonstrate TAU's C++ profiling capabilities for real parallel applications, built from components of the ACTS toolkit. Future directions include work on runtime performance data access, dynamic instrumentation, and higher-level performance data analysis and visualization that relates object semantics with performance execution behavior.
[spdt98b]
K. Lindlan, A. Malony, J. Cuny, S. Shende, and P. Beckman, An IL Converter and Program Database for Analysis Tools, Proceedings of ACM SIGMETRICS Symposium on Parallel and Distributed Tools (SPDT '98), August 1998, pp. 153.
Keywords: IL Analyzer, PDT, static analysis

Developers of static and dynamic analysis tools for C++ programs need access to information on functions, classes, templates, and macros in parsed C++ code. Existing tools, such as the EDG display tool, provide that access, but in an unsuitable format. We built a converter that prunes and reorganizes the information into the appropriate format. The converter provides the information needed for our TAU (Tuning and Analysis Utilities) tools and, in more general terms, provides C++ developers considerable opportunities for automating software development.
[tools]
A. D. Malony, S. Shende, A. Morris, S. Biersdorff, W. Spear, K. A. Huck and A. Nataraj, "Evolution of a Parallel Performance System," Second International Workshop on Tools for High Performance Computing, July 2008.
Keywords: TAU

The TAU Performance System is an integrated suite of tools for instrumentation, measurement, and analysis of parallel programs targeting large-scale, high-performance computing (HPC) platforms. Representing over fifteen calendar years and fifty person-years of research and development effort, TAU's driving concerns have been portability, flexibility, interoperability, and scalability. The result is a performance system which has evolved into a leading framework for parallel performance evaluation and problem solving. This paper presents the current state of TAU, overviews the design and function of TAU's main features, discusses best practices of TAU use, and outlines future development.
[ugc09]
W. Spear, S. Shende, A. Malony, R. Portillo, P. Teller, D. Cronk, S. Moore, D. Terpstra. Making Performance Analysis Tuning Part of the Software Development Cycle. UGC 2009, San Diego, CA, June 15-18, 2009.
Keywords: TAU, eclipse, CDT, PTP, SCALASCA, VampirTrace

Although there are a number of performance tools available to DoD users, the process of performance analysis and tuning has yet to become an integral part of the DoD software development cycle. Instead, performance analysis and tuning is the domain of a small number of experts who cannot possibly address all the codes that need attention. We believe the main reasons for this are a lack of knowledge about these tools, the real or perceived steep learning curve required to use them, and the absence of a centralized method that incorporates their use in the software development cycle. This paper presents ongoing efforts to enable a larger number of DoD HPCMP users to benefit from available performance analysis tools by integrating them into the Eclipse Parallel Tools Platform (Eclipse/PTP), an integrated development environment for parallel programs.
[vis92]
J. E. Cuny, A. Hough, and J. Kundu. Logical Time in Visualizations Produced by Parallel Programs, Proceedings of Visualization '92, 1992, pp. 186-193.
Keywords: parallel, visualization, logical time

Visualization tools that display data as it is manipulated by a parallel, MIMD computation must contend with the effects of asynchronous execution. We have developed techniques that manipulate logical time in order to produce coherent animations of parallel program behavior despite the presence of asynchrony. Our techniques "interpret" program behavior in light of user-defined abstractions and generate animations based on a logical rather than a physical view of time. If this interpretation succeeds, the resulting animation is easily understood; if it fails, the programmer can be assured that the failure was not an artifact of the visualization. Here we demonstrate that these techniques can be generally applied to enhance visualizations of a variety of types of data as it is produced by parallel, MIMD computations.

### Journals

[CaC11]
J. R. Hammond, S. Krishnamoorthy, S. Shende, N. A. Romero, A. D. Malony, "Performance Characterization of Global Address Space Applications: A Case Study with NWChem," to appear in Concurrency and Computation: Practice and Experience, 2010.
Keywords: NWChem, TAU, Global Address Space, PGAS

The use of global address space languages and one-sided communication for complex applications is gaining attention in the parallel computing community. However, the lack of good evaluative methods to observe multiple levels of performance makes it difficult to isolate the cause of performance deficiencies and to understand the fundamental limitations of system and application design for future improvement. NWChem is a popular computational chemistry package which depends on the Global Arrays / ARMCI suite for partitioned global address space functionality to deliver high-end molecular modeling capabilities. A workload characterization methodology was developed to support NWChem performance engineering on large-scale parallel platforms. The research involved both the integration of performance instrumentation and measurement in the NWChem software, as well as the analysis of one-sided communication performance in the context of NWChem workloads. Scaling studies were conducted for NWChem on Blue Gene/P and on two large-scale clusters using different generations of InfiniBand interconnects and x86 processors. The performance analysis and results show how subtle changes in the runtime parameters related to the communication subsystem could have significant impact on performance behavior. The tool has successfully identified several algorithmic bottlenecks which are already being tackled by computational chemists to improve NWChem performance.
[STHEC08]
A. Nataraj, A. Malony, A. Morris, D. Arnold, B. Miller, "A Framework for Scalable, Parallel Performance Monitoring" published in Concurrency and Computation: Practice and Experience, Special Issue from STHEC'08 Workshop.
Keywords: performance, monitoring, tree-based, overlay, TAU, MRNet

Performance monitoring of HPC applications offers opportunities for adaptive optimization based on dynamic performance behavior, unavailable in purely post-mortem performance views. However, a parallel performance monitoring system must have low overhead and high efficiency to make these opportunities tangible. We describe a scalable parallel performance monitor called TAUoverMRNet (ToM), created from the integration of the TAU performance system and the Multicast Reduction Network (MRNet). The integration is achieved through a plug-in architecture in TAU that allows selection of different transport substrates to offload online performance data. A method to establish the transport overlay structure of the monitor from within TAU, one that requires no added support from the job manager or application, is presented. We demonstrate the distribution of performance analysis from the sink to the overlay nodes and the reduction in large-scale profile data that could otherwise overwhelm any single sink. Results show low perturbation and significant savings accrued from reduction at large processor counts.
[cbhpc08]
V. Bui, B. Norris, K. Huck, L. C. McInnes, L. Li, O. Hernandez, and B. Chapman, "A Component Infrastructure for Performance and Power Modeling of Parallel Scientific Applications", in Component-Based High Performance Computing (CBHPC 2008), 2008.
Keywords: power modeling, performance modeling, Common Component Architecture, CCA

Characterizing the performance of scientific applications is essential for effective code optimization, both by compilers and by high-level adaptive numerical algorithms. While maximizing power efficiency is becoming increasingly important in current high-performance architectures, little or no hardware or software support exists for detailed power measurements. Hardware counter-based power models are a promising method for guiding software-based techniques for reducing power. We present a component-based infrastructure for performance and power modeling of parallel scientific applications. The power model leverages on-chip performance hardware counters and is designed to model power consumption for modern multiprocessor and multicore systems. Our tool infrastructure includes application components as well as performance and power measurement and analysis components. We collect performance data using the TAU performance component and apply the power model in the performance and power analysis of a PETSc-based parallel fluid dynamics application by using the PerfExplorer component.
[cca_cpe04]
A. Malony, S. Shende, N. Trebon, J. Ray, R. Armstrong, C. Rasmussen, and M. Sottile, "Performance Technology for Parallel and Distributed Component Software," Concurrency and Computation: Practice and Experience, Vol. 17, Issue 2-4, pp. 117-141, John Wiley & Sons, Ltd., Feb - Apr, 2005.
Keywords: component software, performance, parallel, distributed, optimization, CCA, TAU

This work targets the emerging use of software component technology for high-performance scientific parallel and distributed computing. While component software engineering will benefit the construction of complex science applications, its use presents several challenges to performance measurement, analysis, and optimization. The performance of a component application depends on the interaction (possibly non-linear) of the composed component set. Furthermore, a component is a "binary unit of composition" and the only information users have is the interface the component provides to the outside world. A performance engineering methodology and development approach is presented to address evaluation and optimization issues in high-performance component environments. We describe a prototype implementation of a performance measurement infrastructure for the Common Component Architecture (CCA) system. A case study demonstrating the use of this technology for integrated measurement, monitoring, and optimization in CCA component-based applications is given.
[ccpe]
L. Li and A. D. Malony, "Knowledge Engineering for Automatic Parallel Performance Diagnosis," Department of Computer and Information Science, University of Oregon, U.S.A.
Keywords: parallel programs, performance diagnosis, patterns, knowledge engineering, divide-and-conquer

Scientific parallel programs often undergo significant performance tuning before meeting their performance expectation. Performance tuning naturally involves a diagnosis process locating performance bugs that make a program inefficient and explaining them in terms of high-level program design. We present a systematic approach to generating performance knowledge for automatically diagnosing parallel programs. Our approach exploits program semantics and parallelism found in parallel programming patterns to search for and explain bugs. The approach addresses how to extract the expert knowledge required for performance diagnosis from parallel patterns and represents the knowledge in a manner such that the diagnosis process can be automated. We demonstrate the effectiveness of our knowledge engineering approach through a case study. Our experience diagnosing Divide-and-Conquer programs shows that pattern-based performance knowledge can provide effective guidance for locating and explaining performance bugs at a high level of program abstraction.
[ccpe_cframe06]
N. Trebon, A. Morris, J. Ray, S. Shende, and A. D. Malony, "Performance Modeling of Component Assemblies," Concurrency and Computation: Practice and Experience, CPE 1076, Special issue Compframe 2005, John Wiley, 2006.
Keywords: component, performance, TAU

A parallel component environment places constraints on performance measurement and modeling. For instance, it must be possible to instrument the application without access to the source code. In addition, a component may admit multiple implementations, based on the choice of algorithm, data structure, parallelization strategy, etc., confronting the user with the problem of choosing the correct implementation for each component to achieve an optimal (fastest) component assembly. Under the assumption that an empirical performance model exists for each implementation of each component, simply choosing the optimal implementation of each component does not guarantee an optimal component assembly, since components interact with each other. An optimal solution may be obtained by evaluating the performance of all possible realizations of a component assembly given the components and all of their implementations, but the exponential complexity renders this approach infeasible as the number of components and their implementations grows. This paper describes a non-intrusive, coarse-grained performance monitoring system that allows the user to gather performance data through the use of proxies. In addition, a simple optimization library that identifies a nearly optimal configuration is proposed. Finally, some experimental results are presented that illustrate the measurement and optimization strategies.
[cfd06]
S. Shende, A. D. Malony, A. Morris, S. Parker, J. de St. Germain, "Performance Evaluation of Adaptive Scientific Applications using TAU," chapter, Parallel Computational Fluid Dynamics - Theory and Applications (eds. A. Deane et al.), pp. 421-428, Elsevier B.V., 2006.
Keywords: CFD, Uintah, Phases, TAU, performance evaluation

Fueled by increasing processor speeds and high speed interconnection networks, advances in high performance computer architectures have allowed the development of increasingly complex large scale parallel systems. For computational scientists, programming these systems efficiently is a challenging task. Understanding the performance of their parallel applications is equally daunting. To observe and comprehend the performance of parallel applications that run on these systems, we need performance evaluation tools that can map the performance abstractions to the user's mental models of application execution. For instance, most parallel scientific applications are iterative in nature. In the case of CFD applications, they may also dynamically adapt to changes in the simulation model. A performance measurement and analysis system that can differentiate the phases of each iteration and characterize performance changes as the application adapts will enable developers to better relate performance to their application behavior. In this paper, we present new performance measurement techniques to meet these needs. In section 2, we describe our parallel performance system, TAU. Section 3 discusses how new TAU profiling techniques can be applied to CFD applications with iterative and adaptive characteristics. In section 4, we present a case study featuring the Uintah computational framework and explain how adaptive computational fluid dynamics simulations are observed using TAU. Finally, we conclude with a discussion of how the TAU performance system can be broadly applied to other CFD frameworks and present a few examples of its usage in this field.
[chasm_cpe03]
C. E. Rasmussen, M. J. Sottile, S. S. Shende, and A. D. Malony, "Bridging the language gap in scientific computing: the Chasm approach," Concurrency and Computation: Practice and Experience, Volume 18, Issue 2 (February 2006), pp. 151-162, John Wiley & Sons, 2006.
Keywords: Fortran 95, C, C++, language interoperability, XML, compilers, PDT

Chasm is a toolkit providing seamless language interoperability between Fortran 95 and C++. Language interoperability is important to scientific programmers because scientific applications are predominantly written in Fortran, while software tools are mostly written in C++. Two design features differentiate Chasm from other related tools. First, we avoid the problem of "least common denominator" type systems and programming models, something found in most IDL-based interoperability systems. Chasm uses the intermediate representation generated by a compiler front-end for each supported language as its source of interface information instead of an IDL. Second, bridging code is generated for each pairwise language binding, removing the need for a common intermediate data representation and multiple levels of indirection between the caller and callee. These features make Chasm a simple system that performs well, requires minimal user intervention, and in most instances allows bridging code generation to be performed automatically. Reliance on standards such as XML and industrial strength compiler technology reduces the complexity and scope of the Chasm toolset, making it easily extensible and highly portable.
[cluster07]
A. Nataraj, A. Malony, S. Shende, A. Morris, "Integrated Parallel Performance Views," Cluster Computing, Springer Netherlands.
Keywords: Parallel performance, Kernel, Linux, Instrumentation, Measurement, Integrated

The influences of the operating system and system-specific effects on application performance are increasingly important considerations in high performance computing. OS kernel measurement is key to understanding the performance influences and the interrelationship of system and user-level performance factors. The KTAU (Kernel TAU) methodology and Linux-based framework provides parallel kernel performance measurement from both a kernel-wide and a process-centric perspective. The first characterizes overall aggregate kernel performance for the entire system. The second characterizes kernel performance when it runs in the context of a particular process. KTAU extends the TAU performance system with kernel-level monitoring, while leveraging TAU's measurement and analysis capabilities. We explain the rationale and motivations behind our approach, describe the KTAU design and implementation, and show working examples on multiple platforms demonstrating the versatility of KTAU in integrated system/application monitoring.
[cluster08b]
P. Beckman, K. Iskra, K. Yoshii, S. Coghlan and A. Nataraj, "Benchmarking the effects of operating system interference on extreme-scale parallel machines," Cluster Computing, Volume 11, Number 1, pp. 3-16, Springer Netherlands, March 2008.
Keywords: Microbenchmark, Noise, Petascale, Synchronicity

We investigate operating system noise, which we identify as one of the main reasons for a lack of synchronicity in parallel applications. Using a micro-benchmark, we measure the noise on several contemporary platforms and find that, even with a general-purpose operating system, noise can be limited if certain precautions are taken. We then inject artificially generated noise into a massively parallel system and measure its influence on the performance of collective operations. Our experiments indicate that on extreme-scale platforms, the performance is correlated with the largest interruption to the application, even if the probability of such an interruption on a single process is extremely small. We demonstrate that synchronizing the noise can significantly reduce its negative influence.
[computer95]
Michael T. Heath, Allen D. Malony, Diane T. Rover, "The Visual Display of Parallel Performance Data," IEEE Computer, 28(11), Nov. 1995, pp. 21-28, 1995.
Keywords: data visualization, tracing

Data visualization can help users decipher scientific and engineering data and better comprehend large, complex data sets. The authors present a high-level abstract model for performance visualization that relates behavior abstractions to visual representations in a structured way. This model is based on two principles: Displays of performance information are linked directly to parallel performance models, and performance visualizations are designed and applied in an integrated environment. The authors explain some advantages of adhering to these principles. They begin by establishing a context for users to clearly understand performance information, defining terms such as perspective, semantic context, and subview mapping. Next, they describe the techniques used to scale graphical views as data sets become very large. Finally, they discuss concepts such as user perception and interaction, comparisons and cross-correlations between related views or representations, and information extraction. On the basis of this conceptual foundation, the authors present examples of practical applications for the model. These case studies address topics such as concurrency and communication in data-parallel computation, access patterns for data distributions, and critical paths in parallel computation. The authors conclude by discussing the relationship between performance visualization and general scientific visualization.
[cpe02]
S. Shende, and A. D. Malony, "Integration and Application of TAU in Parallel Java Environments," Concurrency and Computation: Practice and Experience, Volume 15 (3-5), Mar-Apr 2003, Wiley, pp. 501-519, 2003.
Keywords: TAU, Parallel, Java, Performance Tools

Parallel Java environments present challenging problems for performance tools because of Java's rich language system and its multi-level execution platform combined with the integration of native-code application libraries and parallel runtime software. In addition to the desire to provide robust performance measurement and analysis capabilities for the Java language itself, the coupling of different software execution contexts under a uniform performance model needs careful consideration of how events of interest are observed and how cross-context parallel execution information is linked. This paper relates our experience in extending the TAU performance system to a parallel Java environment based on mpiJava. We describe the complexities of the instrumentation model used, how performance measurements are made, and the overhead incurred. A parallel Java application simulating the game of Life is used to show the performance system's capabilities.
[cpe05]
L. Li, and A. D. Malony, "Knowledge Engineering for Automatic Parallel Performance Diagnosis," (submitted to) Concurrency and Computation: Practice and Experience, John Wiley & Sons, 2005.
Keywords: Performance, Diagnosis, Knowledge Engineering, Parallel, TAU

Scientific parallel programs often undergo significant performance tuning before meeting their performance expectation. Performance tuning naturally involves a diagnosis process locating performance bugs that make a program inefficient and explaining them in terms of high-level program design. We present a systematic approach to generating performance knowledge for automatically diagnosing parallel programs. Our approach exploits program semantics and parallelism found in computational models to search for and explain bugs. We first identify categories of expert knowledge required for performance diagnosis and describe how to extract the knowledge from computational models. Second, we represent the knowledge in such a way that diagnosis can be carried out in an automatic manner. Finally, we demonstrate the effectiveness of our knowledge engineering approach through a case study. Our experience diagnosing Master-Worker programs shows that model-based performance knowledge can provide effective guidance for locating and explaining performance bugs at a high level of program abstraction.
[fgcs01]
Allen D. Malony, B. Robert Helm, "A theory and architecture for automating performance diagnosis," Future Generation Computer Systems, Vol. 18, Issue 1, Elsevier Science Publishers, Amsterdam, pp. 189-200, Sept. 2001.
Keywords: Performance, Diagnosis, Parallel

This prospectus describes research to simplify programming of parallel computers. It focuses specifically on performance diagnosis, the process of finding and explaining sources of inefficiency in parallel programs. Considerable research has already been done to simplify performance diagnosis, but with mixed success. Two elements are missing from existing research: 1. There is no general theory of how expert programmers do performance diagnosis. As a result, it is difficult for researchers to compare existing work or fit their work to programmers, and it is difficult for programmers to locate products of existing research that meet their needs. 2. There is no automated, adaptable software to help programmers do performance diagnosis. Existing software is either automated but limited to very specific circumstances, or, in general, not automated for most tasks. The research described here addresses both of these issues. The research will develop and validate a theory of performance diagnosis, based on general models of diagnostic problem-solving. It will design and evaluate a computer program (called Poirot) that employs the theory to automatically and adaptably support performance diagnosis.
[handbook00]
A. D. Malony, "Tools for Parallel Computing: A Performance Evaluation Perspective," in J. Blazewicz et al. (Editors), Handbook on Parallel and Distributed Processing, Springer Verlag, pp. 342-363, 2000.
Keywords: parallel performance environments, performance evaluation, performance diagnosis, perturbation, observability, measurement, prediction, parallel tools

To make effective use of parallel computing environments, users have come to expect a broad set of tools that augment parallel programming and execution infrastructure with capabilities such as performance evaluation, debugging, runtime program control, and program interaction. The rich history of parallel tool research and development reflects both fundamental issues in concurrent processing and a progressive evolution of tool implementations, targeting current and future parallel computing goals. The state of tools for parallel computing is discussed from a perspective of performance evaluation. Many of the challenges that arise in parallel performance tools are common to other tool areas. I look at four such challenges: modeling, observability, diagnosis, and perturbation. The need for tools will always be present in parallel and distributed systems, but the emphasis on tool support may change. The discussion given is intentionally high-level, so as not to exclude the many important ideas that have come from parallel tool projects. Rather, I attempt to present viewpoints on the challenges that I think would be of concern in any performance tool design.
[ieeecga95]
Steven T. Hackstadt and Allen D. Malony, Visualizing Parallel Programs and Performance, IEEE Computer Graphics and Applications, Vol. 15, No. 4, July 1995, pp. 12-14.
Keywords: parallel performance visualization, scientific visualization, data explorer

Performance visualization uses graphical display techniques to analyze performance data and improve understanding of complex performance phenomena. Current parallel performance visualizations are predominantly two-dimensional. A primary goal of our work is to develop new methods for rapidly prototyping multidimensional performance visualizations. By applying the tools of scientific visualization, we can prototype these next-generation displays for performance visualization -- if not implement them as end-user tools -- using existing software products and graphical techniques that physicists, oceanographers, and meteorologists have used for several years.
[ieeecomp95]
Michael T. Heath, Allen D. Malony, and Diane T. Rover, The Visual Display of Parallel Performance Data, IEEE Computer, Vol. 28, No. 11, November 1995, pp. 21-28.
Keywords: parallel performance visualization, models, concepts, principles, scientific visualization, case studies
[ieeepdt95]
Michael T. Heath, Allen D. Malony, and Diane T. Rover, Parallel Performance Visualization: From Practice To Theory, IEEE Parallel and Distributed Technology, Vol. 3, No. 4, Winter 1995, pp. 44-60.
Keywords: parallel performance visualization, models, concepts, principles, scientific visualization, case studies
[ifpcs89]
Allen D. Malony, "Multiprocessor Instrumentation: Approaches for Cedar," Chapter, Instrumentation for Future Parallel Computing Systems (Eds: M. Simmons, R. Koskela, I. Bucher), ACM Press, NY, pp 1-33, 1989.
Keywords: Cedar, measurement techniques, modeling techniques, parallel processors, Complexity hierarchies

Parallel systems pose a unique challenge to performance measurement and instrumentation. The complexity of these systems manifests itself as an increase in performance complexity as well as programming complexity. The complex interaction of the many architectural, hardware, and software features of these systems results in a significantly larger space of possible performance behavior and potential performance bottlenecks. Programming parallel systems requires that users understand the performance characteristics of the machines and be able to modify their programs and algorithms accordingly. The instrumentation problem, therefore, is to develop tools to aid the user in investigating performance problems and in determining the most effective way of exploiting the high performance capabilities of parallel systems. This paper gives observations on the parallel system instrumentation problem in the context of the Cedar multiprocessor. The Cedar system integrates several architectural, hardware, and software concepts for parallel operation. The combination makes Cedar a particularly interesting machine for investigating instrumentation issues and developing prototype tools. The different needs for performance evaluation on the Cedar machine define the instrumentation requirements. The implementation of instrumentation tools, however, involves tradeoffs in design, resolution, and accuracy, and must be weighed against the payoff in better performance evaluation. This discussion of instrumentation tools targeted for Cedar considers these tradeoffs.
[ijhpca05.cca]
David E. Bernholdt, Benjamin A. Allan, Robert Armstrong, Felipe Bertrand, Kenneth Chiu, Tamara L. Dahlgren, Kostadin Damevski, Wael R. Elwasif, Thomas G. W. Epperly, Madhusudhan Govindaraju, Daniel S. Katz, James A. Kohl, Manoj Krishnan, Gary Kumfert, J. Walter Larson, Sophia Lefantzi, Michael J. Lewis, Allen D. Malony, Lois C. McInnes, Jarek Nieplocha, Boyana Norris, Steven G. Parker, Jaideep Ray, Sameer Shende, Theresa L. Windus, and Shujia Zhou, "A Component Architecture for High-Performance Scientific Computing," International Journal of High Performance Computing Applications, ACTS Collection Special Issue, SAGE Publications, 20(2):163-202, Summer 2006.
Keywords: CCA

The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a "plug-and-play" environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.
[ijhpca05.tau]
S. Shende and A. D. Malony, "The TAU Parallel Performance System," International Journal of High Performance Computing Applications, SAGE Publications, 20(2):287-331, Summer 2006
Keywords: TAU, Profiling, Tracing, PerfDMF, performance evaluation, instrumentation, measurement, analysis, Paraprof

The ability of performance technology to keep pace with the growing complexity of parallel and distributed systems depends on robust performance frameworks that can at once provide system-specific performance capabilities and support high-level performance problem solving. Flexibility and portability in empirical methods and processes are influenced primarily by the strategies available for instrumentation and measurement, and how effectively they are integrated and composed. This paper presents the TAU (Tuning and Analysis Utilities) parallel performance system and describes how it addresses diverse requirements for performance observation and analysis.
[ijhpca07]
A. D. Malony, S. Shende, A. Morris, and F. Wolf, "Compensation of Measurement Overhead in Parallel Performance Profiling," in International Journal of High Performance Computing Applications (IJHPCA), Vol 21, No. 2, pp. 174--194, Summer 2007.
Keywords: Performance measurement and analysis, parallel computing, profiling, intrusion, overhead compensation, TAU

Performance profiling generates measurement overhead during parallel program execution. Measurement overhead, in turn, introduces intrusion in a program's runtime performance behavior. Intrusion can be mitigated by controlling instrumentation degree, allowing a tradeoff of accuracy for detail. Alternatively, the accuracy in profile results can be improved by reducing the intrusion error due to measurement overhead. Models for compensation of measurement overhead in parallel performance profiling are described. An approach based on rational reconstruction is used to understand properties of compensation solutions for different parallel scenarios. From this analysis, a general algorithm for on-the-fly overhead assessment and compensation is derived.
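The compensation idea behind this paper can be illustrated with a toy sketch (this is an illustration of the general principle, not the paper's actual algorithm; the function names and the fixed per-event overhead are assumptions): each instrumented routine accrues a known measurement cost per enter/exit pair, and an approximation to the true inclusive time is obtained by subtracting the overhead accumulated by the routine's own measurement events and those of all instrumented calls beneath it.

```python
# Toy sketch of measurement-overhead compensation in a profiler.
# Assumption: every instrumented enter/exit pair costs a fixed amount.

OVERHEAD_PER_CALL = 2.0  # hypothetical cost (usec) of one enter/exit pair

def compensate(profile):
    """profile maps routine name -> {"incl": measured inclusive time,
    "calls": number of calls to the routine itself,
    "nested_calls": instrumented calls executed beneath it}.
    Returns approximated true inclusive times with overhead removed."""
    return {
        name: p["incl"] - OVERHEAD_PER_CALL * (p["calls"] + p["nested_calls"])
        for name, p in profile.items()
    }

measured = {
    "main":  {"incl": 1000.0, "calls": 1,   "nested_calls": 100},
    "solve": {"incl": 800.0,  "calls": 100, "nested_calls": 0},
}
true_times = compensate(measured)
# "main" sheds the overhead of its own call plus its 100 nested calls.
```

The paper's on-the-fly models are more involved (they account for parallel event ordering and inter-process effects), but the inclusive-time subtraction above captures the basic accounting.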
[ijpdsn99]
Allen D. Malony and Steven T. Hackstadt, Performance of a System for Interacting with Parallel Applications, International Journal of Parallel and Distributed Systems and Networks, special issue on Measurement of Program and System Performance, M. H. Mickle, ed., Vol. 2, No. 3, 1999, Acta Press, Anaheim, CA, pp. 155-170.
Keywords: distributed data, program interaction, performance analysis, parallel tool, computational steering

A variety of systems have been developed to interact with parallel programs for purposes of debugging, monitoring, visualization, and computational steering. In addition to addressing different functional objectives, these systems have nonfunctional characteristics that are equally important for a user to know. Clearly, for most users, performance is an important nonfunctional requirement of a program interaction system. However, characterizing the performance of an interaction system for parallel programs is particularly challenging, especially in asynchronous, distributed environments. This paper presents a comprehensive performance analysis of the DAQV system. DAQV has been successfully applied in runtime data visualization, on-line performance monitoring, and computational steering environments. However, DAQV's suitability depends significantly on application context and requirements. By giving a full accounting of DAQV performance, we aim to provide application and environment developers with valuable information about DAQV's potential benefits, before an integration effort takes place. As DAQV's designers, we have also gained new insights from this in-depth performance analysis, resulting in higher performing designs.
[ijpp03]
J. Davison de St. Germain, Alan Morris, Steven G. Parker, Allen D. Malony, Sameer Shende, "Performance Analysis Integration in the Uintah Software Development Cycle," International Journal of Parallel Programming, 31(1):35-53, 2003.
Keywords: Uintah, TAU, XPARE, Performance Mapping, SEAA

The increasing complexity of high-performance computing environments and programming methodologies presents challenges for empirical performance evaluation. Evolving parallel and distributed systems require performance technology that can be flexibly configured to observe different events and associated performance data of interest. It must also be possible to integrate performance evaluation techniques with the programming paradigms and software engineering methods. This is particularly important for tracking performance on parallel software projects involving many code teams over many stages of development. This paper describes the integration of the TAU and XPARE tools in the Uintah Computational Framework (UCF). Discussed is the use of performance mapping techniques to associate low-level performance data to higher levels of abstraction in UCF and the use of performance regression testing to provide a historical portfolio of the evolution of application performance. A scalability study shows the benefits of integrating performance technology in building large-scale parallel applications.
[ijsahpc97]
Janice Cuny, Robert Dunn, Steven T. Hackstadt, Christopher Harrop, Harold H. Hersey, Allen D. Malony, and Douglas Toomey, Building Domain-Specific Environments for Computational Science: A Case Study in Seismic Tomography, International Journal of Supercomputing Applications and High Performance Computing, Vol. 11, No. 3, Fall 1997. Also appearing in the Proceedings of the Workshop on Environments and Tools For Parallel Scientific Computing, Lyon, France, August 1996.
Keywords: computational science, domain-specific environments, seismic tomography, visualization, distributed data access

We report on our experiences in building a computational environment for tomographic image analysis for marine seismologists studying the structure and evolution of mid-ocean ridge volcanism. The computational environment is determined by an evolving set of requirements for this problem domain and includes needs for high-performance parallel computing, large data analysis, model visualization, and computation interaction and control. Although these needs are not unique in scientific computing, the integration of techniques for seismic tomography with tools for parallel computing and data analysis into a computational environment was (and continues to be) an interesting, important learning experience for researchers in both disciplines. For the geologists, the use of the environment led to fundamental geologic discoveries on the East Pacific Rise, the improvement of parallel ray tracing algorithms, and a better regard for the use of computational steering in aiding model convergence. The computer scientists received valuable feedback on the use of programming, analysis, and visualization tools in the environment. In particular, the tools for parallel program data query (DAQV) and visualization programming (Viz) were demonstrated to be highly adaptable to the problem domain. We discuss the requirements and the components of the environment in detail. Both accomplishments and limitations of our work are presented.
[iwomp06journal]
A. Morris, A. D. Malony, S. Shende, "Supporting Nested OpenMP Parallelism in the TAU Performance System," (to appear) International Journal of Parallel Programming, Springer, LNCS, 2007.
Keywords: OpenMP, nested parallelism, TAU

Nested OpenMP parallelism allows an application to spawn teams of nested threads. This hierarchical nature of thread creation and usage poses problems for performance measurement tools that must determine thread context to properly maintain per-thread performance data. In this paper we describe the problem and a novel solution for identifying threads uniquely. Our approach has been implemented in the TAU performance system and has been successfully used in profiling and tracing OpenMP applications with nested parallelism. We also describe how extensions to the OpenMP standard can help tool developers uniquely identify threads.
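The core of the thread-identification problem can be sketched as follows (a simplified illustration, not the TAU implementation): under nested parallelism, `omp_get_thread_num()` is only unique within a thread's own team, so two different threads can report the same number; the sequence of ancestor thread numbers down the nesting tree (what OpenMP 3.0's `omp_get_ancestor_thread_num(level)` reports) does identify a thread uniquely.

```python
# Toy illustration: a thread number alone is ambiguous under nesting,
# but the path of ancestor thread numbers through the nesting tree is not.

def unique_thread_id(ancestor_nums):
    """ancestor_nums: the thread's number at each nesting level,
    outermost first. Joining them yields an identifier that is
    unique across the entire tree of nested teams."""
    return ":".join(str(n) for n in ancestor_nums)

# Two nested threads that would both report omp_get_thread_num() == 1:
a = unique_thread_id([0, 1])  # thread 1 in the team spawned by outer thread 0
b = unique_thread_id([1, 1])  # thread 1 in the team spawned by outer thread 1
assert a != b  # the ancestor path disambiguates them
```

A per-thread performance tool can key its profile data on such a path-based identifier instead of the team-local thread number.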
[jop07]
D. Gunter, K. Huck, K. Karavanic, J. May, A. Malony, K. Mohror, S. Moore, A. Morris, S. Shende, V. Taylor, X. Wu, and Y. Zhang, "Performance database technology for SciDAC applications", Journal of Physics: Conference Series, vol. 78, June 2007.
Keywords:
[jp05]
P. Worley, J. Candy, L. Carrington, K. Huck, T. Kaiser, G. Mahinthakumar, A. Malony, S. Moore, D. Reed, P. Roth, H. Shan, S. Shende, A. Snavely, S. Sreepathi, F. Wolf, and Y. Zhang, "Performance analysis of GYRO: a tool evaluation", Journal of Physics: Conference Series 16 (2005) pp. 551-555, SciDAC 2005, Institute of Physics Publishing Ltd., 2005.
Keywords: IPM, KOJAK, TAU, SvPablo, PMaC, Gyro, Performance evaluation

The performance of the Eulerian gyrokinetic-Maxwell solver code GYRO is analyzed on five high performance computing systems. First, a manual approach is taken, using custom scripts to analyze the output of embedded wallclock timers, floating point operation counts collected using hardware performance counters, and traces of user and communication events collected using the profiling interface to Message Passing Interface (MPI) libraries. Parts of the analysis are then repeated or extended using a number of sophisticated performance analysis tools: IPM, KOJAK, SvPablo, TAU, and the PMaC modeling tool suite. The paper briefly discusses what has been discovered via this manual analysis process, what performance analyses are inconvenient or infeasible to attempt manually, and to what extent the tools show promise in accelerating or significantly extending the manual performance analyses.
[openmp0506]
M.S. Mueller, B. Chapman, B.R.d. Supinski, A.D. Malony, M. Voss (Eds), "OpenMP Shared Memory Parallel Programming". Proceedings of the International Workshop, IWOMP 2005 and IWOMP 2006.
Keywords: MPI, cluster computing, collaborative, compiler optimization, computational science, data clustering, distributed systems, embedded systems, grid computing, hierarchical thread scheduling, high performance computing, hybrid parallelization, memory bandwidth, modeling, multi-core, multi-threaded computing, nested parallelism, numerical computation, OpenMP, parallel programming, parallelization, performance evaluation, performance optimizations, program analysis, scalability, shared memory, simulation

This book constitutes the thoroughly refereed post-workshop proceedings of the First and the Second International Workshop on OpenMP, IWOMP 2005 and IWOMP 2006, held in Eugene, OR, USA, and in Reims, France, in June 2005 and 2006 respectively. The first part of the book presents 16 revised full papers carefully reviewed and selected from the IWOMP 2005 program and organized in topical sections on performance tools, compiler technology, run-time environment, applications, as well as the OpenMP language and its evaluation. In the second part there are 19 papers of IWOMP 2006, fully revised and grouped thematically in sections on advanced performance tuning, aspects of code development, applications, and proposed extensions to OpenMP.
A. D. Malony, S. Shende, R. Bell, K. Li, L. Li, N. Trebon, "Advances in the TAU Performance System," Chapter, "Performance Analysis and Grid Computing," (Eds. V. Getov, M. Gerndt, A. Hoisie, A. Malony, B. Miller), Kluwer, Norwell, MA, pp. 129-144, 2003.
Keywords: Performance, tools, parallel, distributed, TAU

To address the increasing complexity in parallel and distributed systems and software, advances in performance technology towards more robust tools and broader, more portable implementations are needed. In doing so, new challenges for performance instrumentation, measurement, analysis, and visualization arise to address evolving requirements for how performance phenomena is observed and how performance data is used. This paper presents recent advances in the TAU performance system in four areas where improvements in performance technology are important: instrumentation control, performance
[pdcp02]
Allen D. Malony, Sameer Shende, "Performance Technology for Complex Parallel and Distributed Systems," in "Quality of Parallel and Distributed Programs and Systems," (Eds. Peter Kacsuk and Gabriele Kotsis), Nova Science Publishers, Inc., New York, pp. 25-41, 2003.
Keywords: Performance tools, complex systems, instrumentation, measurement, analysis, TAU

The ability of performance technology to keep pace with the growing complexity of parallel and distributed systems will depend on robust performance frameworks that can at once provide system-specific performance capabilities and support high-level performance problem solving. The TAU system is offered as an example framework that meets these requirements. With a flexible, modular instrumentation and measurement system, and an open performance data and analysis environment, TAU can target a range of complex performance scenarios. Examples are given showing the diversity of TAU application.
[pes88]
K.A. Gallivan, W. Jalby, A.D. Malony and P.C. Yew, "Performance Analysis on the Cedar System", Chapter, Performance Evaluation of Supercomputers, Edited by J. Martin, Elsevier Science Publishers B.V. (North-Holland), pp. 109-142, 1987.
Keywords: Cedar, performance measurement.

To understand the complex interactions of the many factors contributing to supercomputer performance, supercomputer designers and users must have access to an integrated performance analysis system capable of measuring, analyzing, modeling, and predicting performance across a hierarchy of details and goals. The performance analysis system being developed for the CEDAR multiprocessor supercomputer embodies these characteristics and is discussed in this paper.
[piv90]
Allen D. Malony, "JED: Just an Event Display," Chapter, Performance Instrumentation and Visualization, (Eds: M. Simmons, R. Koskela), ACM Press, NY, pp. 99-115, 1990.
Keywords: JED, interconnection architectures, performance attributes, performance measurement

Event tracing has become a popular form of gathering performance data on multiprocessor computer systems. Indeed, a performance measurement facility has been developed for the Cedar multiprocessor that uses tracing as a back-end mechanism for collecting several run-time measurements including count, time, virtual memory, and event data. Tools to study an event trace, however, are typically specialized according to the type of data collected. Usually various trace analyses and displays are developed based on some event interpretation model. Whereas this approach will give specific information about particular events and their occurrences in a trace, it is not particularly easy to extend; new events often require new analysis and display techniques.
[scidac_cca07]
L. McInnes, T. Dahlgren, J. Nieplocha, D. Bernholdt, B. Allan, R. Armstrong, D. Chavarria, W. Elwasif, I. Gorton, J. Kenny, M. Krishnan, A. Malony, B. Norris, J. Ray, and S. Shende, "Research Initiatives for Plug-and-play Scientific Computing", J. Physics: Conference Series Vol. 78 No. 012046, doi:10.1088/1742-6596/78/1/012046, Proc. SciDAC Conference, 2007.
Keywords: CCA, TAU, Component software, CQoS

This paper introduces three component technology initiatives within the SciDAC Center for Technology for Advanced Scientific Component Software (TASCS) that address ever-increasing productivity challenges in creating, managing, and applying simulation software to scientific discovery. By leveraging the Common Component Architecture (CCA), a new component standard for high-performance scientific computing, these initiatives tackle difficulties at different but related levels in the development of component-based scientific software: (1) deploying applications on massively parallel and heterogeneous architectures, (2) investigating new approaches to the runtime enforcement of behavioral semantics, and (3) developing tools to facilitate dynamic composition, substitution, and reconfiguration of component implementations and parameters, so that application scientists can explore tradeoffs among factors such as accuracy, reliability, and performance.
[scidac_qcd07]
Y. Zhang, R. Fowler, K. Huck, A. Malony, A. Porterfield, D. Reed, S. Shende, V. Taylor, and X. Wu, "US QCD Computational Performance Studies with PERI," J. Physics: Conference Series Vol. 78, No. 012083 doi:10.1088/1742-6596/78/1/012083, Proc. of SciDAC 2007 conference, 2007.
Keywords: QCD, USQCD, PERI, TAU, PerfDMF, PerfExplorer

We report on some of the interactions between two SciDAC projects: The National Computational Infrastructure for Lattice Gauge Theory (USQCD), and the Performance Engineering Research Institute (PERI). Many modern scientific programs consistently report the need for faster computational resources to maintain global competitiveness. However, as the size and complexity of emerging high end computing (HEC) systems continue to rise, achieving good performance on such systems is becoming ever more challenging. In order to take full advantage of the resources, it is crucial to understand the characteristics of relevant scientific applications and the systems these applications are running on. Using tools developed under PERI and by other performance measurement researchers, we studied the performance of two applications, MILC and Chroma, on several high performance computing systems at DOE laboratories. In the case of Chroma, we discuss how the use of C++ and modern software engineering and programming methods are driving the evolution of performance tools.
[sigmetrics89]
Kyle Gallivan, Dennis Gannon, William Jalby, Allen D. Malony, Harry A. G. Wijshoff, "Behavioral Characterization of Multiprocessor Memory Systems: a Case Study," ACM SIGMETRICS Performance Evaluation Review, Vol. 17, Issue 1, pp. 79-88, May 1989.
Keywords: cache memories, software engineering, array and vector processors, supercomputers, main memory

The speed and efficiency of the memory system is a key limiting factor in the performance of supercomputers. Consequently, one of the major concerns when developing a high-performance code, either manually or automatically, is determining and characterizing the influence of the memory system on performance in terms of algorithmic parameters. Unfortunately, the performance data available to an algorithm designer such as various benchmarks and, occasionally, manufacturer-supplied information, e.g. instruction timings and architecture component characteristics, are rarely sufficient for this task. In this paper, we discuss a systematic methodology for probing the performance characteristics of a memory system via a hierarchy of data-movement kernels. We present and analyze the results obtained by such a methodology on a cache-based multi-vector processor (Alliant FX/8). Finally, we indicate how these experimental results can be used for predicting the performance of simple Fortran codes by a combination of empirical observations, architectural models and analytical techniques.
[software89]
Vincent A. Guarna, Jr., Dennis Gannon, David Jablonowski, Allen D. Malony, Yogesh Gaur, "Faust: An Integrated Environment for Parallel Programming," IEEE Software Vol. 6, No. 4, July/August 1989, pp. 20-27, 1989.
Keywords: Faust, parallel programming, project-management tool, context editor, program database, performance-evaluation tools, functional integration, common data sets, Sigma, application code, dynamic call-graph tool, multiprocessor performance analysis, parallel programming, programming environments, project support environments

A description is given of Faust, an integrated environment for the development of large, scientific applications. Faust includes a project-management tool, a context editor that is interfaced to a program database, and performance-evaluation tools. In Faust, all applications work is done in the context of projects, which serve as the focal point for all tool interactions. A project roughly corresponds to an executable program. Faust achieves functional integration through operations on common data sets maintained in each project. Sigma, a Faust tool designed to help users of parallel supercomputers retarget and optimize application code, helps them either fine-tune parallel code that has been automatically generated or optimize a new parallel algorithm's design. Faust includes a dynamic call-graph tool and an integrated, multiprocessor performance analysis and characterization tool set.
[software91]
Allen D. Malony, David H. Hammerslag, David J. Jablonowski, "Traceview: A Trace Visualization Tool," IEEE Software, 8(5), pp 19-28, 1991.
Keywords: Traceview, I/O features, computer graphics, trace-management, trace-data analysis, trace visualization tool, trace-analysis systems

The design, development, and application of Traceview, a general-purpose trace-visualization tool that implements the trace-management and I/O features usually found in special-purpose trace-analysis systems, are described. The aspects of trace visualization that can be incorporated into a reusable tool are identified. The tradeoff in general-purpose design versus semantically based, detailed trace-data analysis is evaluated. Display methods and Traceview applications are discussed.
[sp08]
K. A. Huck, A. D. Malony, S. Shende, and A. Morris, "Knowledge Support and Automation for Performance Analysis with PerfExplorer 2.0", Large-Scale Programming Tools and Environments, special issue of Scientific Programming, vol. 16, no. 2-3, pp. 123--134. 2008.
Keywords: TAU, parallel performance analysis, data mining, scalability, scripting, metadata, knowledge supported analysis

The integration of scalable performance analysis in parallel development tools is difficult. The potential size of data sets and the need to compare results from multiple experiments presents a challenge to manage and process the information. Simply to characterize the performance of parallel applications running on potentially hundreds of thousands of processor cores requires new scalable analysis techniques. Furthermore, many exploratory analysis processes are repeatable and could be automated, but are now implemented as manual procedures. In this paper, we will discuss the current version of PerfExplorer, a performance analysis framework which provides dimension reduction, clustering and correlation analysis of individual trials of large dimensions, and can perform relative performance analysis between multiple application executions. PerfExplorer analysis processes can be captured in the form of Python scripts, automating what would otherwise be time-consuming tasks. We will give examples of large-scale analysis results, and discuss the future development of the framework, including the encoding and processing of expert performance rules, and the increasing use of performance metadata.
[tcs98]
Steven T. Hackstadt and Allen D. Malony, DAQV: Distributed Array Query and Visualization Framework, Journal of Theoretical Computer Science, special issue on Parallel Computing, Vol. 196, No. 1-2, April 1998, pp. 289-317.
Keywords: visualization, distributed data access, hpf, parallel tool

This paper describes the design and implementation of the Distributed Array Query and Visualization (DAQV) system for High Performance Fortran, a project sponsored by the Parallel Tools Consortium. DAQV's implementation leverages the HPF language, compiler, and runtime system to address the general problem of providing high-level access to distributed data structures. DAQV supports a framework in which visualization and analysis clients connect to a distributed array server (i.e., the HPF application with DAQV control) for program-level access to array values. Implementing key components of DAQV in HPF itself has led to a robust and portable solution in which clients do not need to know how the data is distributed.

(This paper is an expanded version of [europar96] and [tr9602].)

[tjs02]
B. Mohr, A. Malony, S. Shende, F. Wolf, "Design and Prototype of a Performance Tool Interface for OpenMP," The Journal of Supercomputing, 23:105-128, 2002, Kluwer Academic Publishers.
Keywords: OpenMP, OPARI, KOJAK, EXPERT, TAU

This paper proposes a performance tools interface for OpenMP, similar in spirit to the MPI profiling interface in its intent to define a clear and portable API that makes OpenMP execution events visible to runtime performance tools. We present our design using a source-level instrumentation approach based on OpenMP directive rewriting. Rules to instrument each directive and their combination are applied to generate calls to the interface consistent with directive semantics and to pass context information (e.g., source code locations) in a portable and efficient way. Our proposed OpenMP performance API further allows user functions and arbitrary code regions to be marked and performance measurement to be controlled using new OpenMP directives. To prototype the proposed OpenMP performance interface, we have developed compatible performance libraries for the EXPERT automatic event trace analyzer and the TAU performance analysis framework. The directive instrumentation transformations we define are implemented in a source-to-source translation tool called OPARI. Application examples are presented for both EXPERT and TAU to show the OpenMP performance interface and OPARI instrumentation tool in operation. When used together with the MPI profiling interface (as the examples also demonstrate), our proposed approach provides a portable and robust solution to performance analysis of OpenMP and mixed-mode (OpenMP + MPI) applications.
[tpds92]
A. D. Malony, D. A. Reed, H. A. G. Wijshoff, "Performance Measurement Intrusion and Perturbation Analysis," IEEE Transactions on Parallel and Distributed Systems, 3(4):433-450, July 1992.
Keywords: perturbation compensation, performance measurement

The authors study the instrumentation perturbations of software event tracing on the Alliant FX/80 vector multiprocessor in sequential, vector, concurrent, and vector-concurrent modes. Based on experimental data, they derive a perturbation model that can approximate true performance from instrumented execution. They analyze the effects of instrumentation coverage, (i.e., the ratio of instrumented to executed statements), source level instrumentation, and hardware interactions. The results show that perturbations in execution times for complete trace instrumentations can exceed three orders of magnitude. With appropriate models of performance perturbation, these perturbations in execution time can be reduced to less than 20% while retaining the additional information from detailed traces. In general, it is concluded that it is possible to characterize perturbations through simple models. This permits more detailed, accurate instrumentation than traditionally believed possible.
[tse88]
Daniel A. Reed, Allen D. Malony, Bradley McCredie, "Parallel Discrete Event Simulation Using Shared Memory," IEEE Transactions on Software Engineering, 14(4):541-553, April 1988.
Keywords: Chandy-Misra algorithm, deadlock recovery, discrete event simulation, distributed simulation, parallel processing

With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. We present a set of shared memory experiments using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors.
[tse90]
K. Gallivan, D. Gannon, W. Jalby, A. Malony, H. Wijshoff, "Experimentally Characterizing the Behavior of Multiprocessor Memory Systems: A Case Study," IEEE Transactions on Software Engineering, 16(2), pp. 216-223, 1990.
Keywords: Characterization, memory systems, multiprocessor, performance

Though architectural improvements in the memory organization of multiprocessor systems can increase effective data bandwidth, the actual performance achieved is highly dependent upon the characteristics of the memory address streams, e.g., the data access rate and the temporal and spatial distributions. Accurately quantifying the performance behavior of a multiprocessor memory system across a broad range of algorithmic parameters is crucial if users (and restructuring compilers) are to achieve high-performance codes. In this paper, we demonstrate how the behavior of a cache-based multivector processor memory system can be systematically characterized and its performance experimentally correlated with key features of the address stream. The approach is based on the definition of a family of parameterized kernels used to explore specific aspects of the memory system's performance. The empirical results from this kernel suite provide the data from which architectural or algorithmic characteristics can be studied. The results of applying the approach to an Alliant FX/8 are presented.

### Theses and Dissertations

Steven T. Hackstadt, Prototyping Advanced Parallel Program and Performance Visualizations, University of Oregon, Department of Computer and Information Science, Master's Thesis, June 1994. Also available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-95-15, June 1995.
Keywords: visualization prototyping, scientific visualization, parallel performance visualization, program visualization, case studies, scalable visualization, data distribution visualization

A new visualization design process for the development of parallel program and performance visualizations using existing scientific data visualization software can drastically reduce the graphics and data manipulation programming overheads currently experienced by visualization developers. Data visualization tools are designed to handle large quantities of multi-dimensional data and create complex, three-dimensional, customizable displays which incorporate advanced rendering techniques, animation, and display interaction. These capabilities can be used to improve performance visualization, but to be effective, they must be applied as part of a formal methodology relating performance data to visual representations. Under such a formalism, it is possible to describe performance visualizations as mappings from performance data objects to view objects, independent of any graphical programming. Through three case studies, this work examines how an existing scientific visualization tool, IBM's Data Explorer, provides a robust environment for prototyping next-generation parallel performance visualizations.
Steven T. Hackstadt, Domain-Specific Metacomputing for Computational Science: Achieving Specificity Through Abstraction, University of Oregon, Department of Computer and Information Science, Oral Comprehensive Exam Position Paper, September 1997. Available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-97-08, November 1997.
Keywords: metacomputing, heterogeneous computing, domain-specific environments, DSE, software architecture, domain-specific software architecture, DSSA, computational science

A new area called domain-specific metacomputing for computational science is defined. This area cuts across the larger areas of parallel and distributed computing, computational science, and software engineering in search of techniques and technology that will better allow the creation of useful tools for computational scientists. The paper focuses on how metacomputing, domain-specific environments, and software architectures can be employed as key technologies to this end.
[huck09]
K. Huck, "Knowledge Support for Parallel Performance Data Mining," Ph.D. Dissertation, University of Oregon, March 2009.
Keywords: TAU, PerfExplorer, PerfDMF, data mining, performance knowledge support
[joydip96]
J. Kundu, "Integrating Event- and State-Based Approaches to Debugging of Parallel Programs," Ph.D. Thesis, University of Massachusetts, Amherst, MA, 1996.
Keywords: Ariadne, event-based debugging, state-based debugging
[li_kai07]
K. Li, "Neuroanatomical Segmentation in MRI Exploiting a priori Knowledge," Ph.D. Dissertation, Department of Computer and Information Science, University of Oregon, March 2007.
Keywords:

Neuroanatomical segmentation is a problem of extraction of a description of particular neuroanatomical structures of interest that reflects the morphometry (shape measurements) of the subject’s neuroanatomy from any image rendering the neuroanatomical structures of the subject. This dissertation presents a set of algorithms for automatic extraction of cerebral white matter (WM) and gray matter (GM) as well as reconstruction of cortical surfaces from T1-weighted MR images. Neuroanatomical segmentation presented in this dissertation is performed by an image analysis pipeline that steps through five major procedures: 1) the original MR image is processed by a new relative thresholding procedure and a new terrain analysis procedure such that all voxels are classified into one of three types: WM, GM, and background; 2) the topology defects of the WM are eliminated by a new multiscale morphological topology correction algorithm; 3) cerebral WM is extracted from its superset with a new procedure called cell-complex-based morphometric analysis; 4) cerebral GM is extracted based on the prior cerebral WM extraction with a set of morphological image analysis procedures; and 5) cortical surfaces are finally reconstructed preserving correct topology with an existing marching cubes isosurface algorithm. In this dissertation, we evaluated our neuroanatomical segmentation tool both quantitatively and qualitatively on a set of MR images with ground truth or manual segmentation, compared the results of our tool with those of four other tools, and demonstrated that the performance of our tool is highly accurate, robust, automatic, and computationally efficient. The advantages of our tool are mainly attributed to extensive exploration of various structural, geometrical, morphological, and radiological a priori knowledge, which persists despite image artifacts and inter-subject anatomical variations.
By exploiting a priori knowledge, we also demonstrated that performing voxel classification prior to brain extraction is a promising research direction, contrary to the traditional procedure of brain extraction followed by voxel classification. Finally, it’s worth noting that the algorithms of voxel classification and morphological image analysis presented in this dissertation for neuroanatomical segmentation can potentially be applied in wider areas of computer vision.
[li_li07]
L. Li, "Model-based Automatic Performance Diagnosis of Parallel Computations," Ph.D. Dissertation, Department of Computer and Information Science, University of Oregon, February 2007.
Keywords:

Scientific parallel programs often undergo significant performance tuning before meeting their performance expectations. Performance tuning naturally involves a diagnosis process: locating performance bugs that make a program inefficient and explaining them in terms of high-level program design. Important performance measurement and analysis tools have been developed to support performance analysis with facilities for running experiments on parallel computers and generating measurement data to evaluate performance. However, current performance analysis technology does not yet allow for associating found performance problems with causes at a high level of program abstraction. Nor does it support the performance diagnosis process in a well-automated manner. We present a systematic method to guide the performance diagnosis process and support the process with minimum user intervention. The motivating observation is that performance diagnosis can be greatly improved with the use of performance knowledge about parallel computation models. We therefore propose an approach to generating performance knowledge for automatically diagnosing parallel programs. Our approach exploits program execution abstraction and parallelism found in computational models to search for and explain performance bugs. We identify categories of knowledge required for performance diagnosis and describe how to derive the knowledge from computational models. We represent the extracted knowledge in a manner such that performance inferencing can be carried out automatically. We have developed the Hercule automatic performance diagnosis system that implements the model-based diagnosis strategy. In this dissertation, we present how Hercule integrates the performance knowledge into a performance analysis tool and demonstrate the effectiveness of our performance knowledge engineering approach through Hercule experiments on a variety of parallel computational models.
We also investigate compositional programs that combine two or more models. We extend performance knowledge engineering to capture the interplay of multiple models in an integrated state, and improve Hercule capabilities to support the compositional performance diagnosis. We have applied Hercule to two representative scientific applications, both of which are implemented with combined models. The experiment results show that, requiring minimum user intervention, model-based performance analysis is vital and effective in discovering and interpreting performance bugs at a high level of program abstraction.
[malonyphd]
Allen D. Malony, "Performance Observability," Ph.D. Dissertation, University of Illinois at Urbana-Champaign, Technical Report UIUCDCS-R-90-1603, October 1990.
Keywords: Parallel performance, perturbation analysis, observation

Performance observability is the ability to accurately capture, analyze, and present (collectively observe) information about the performance of a computer system. Advances in computer systems design, particularly with respect to parallel processing and supercomputers, have brought a crisis in performance observation -- computer systems technology is outpacing the tools to understand the performance behavior of and to operate the machines near the high-end of their performance range. In this thesis, we study the performance observability problem with emphasis on the practical design, development, and use of tools for performance measurement, analysis, and visualization.

Tools for performance observability must balance the need for performance data against the cost of obtaining it (environment complexity and performance intrusion) -- too little performance data makes performance analysis difficult; too much data perturbs the measured system. We discuss several methods for performance measurement, concentrating specifically on mechanisms for timing and tracing. We show how minor hardware and software modifications can enable better measurement tools to be built and describe results from a prototype hardware-based software monitor developed for the Intel iPSC/2 multiprocessor.

Any software performance measurement perturbs the measured system. We develop two models of performance perturbation to understand the effects of instrumentation intrusion: time-based and event-based. The time-based models use only measured time overheads of instrumentation to approximate actual execution time performance. We show that this model can give accurate approximations for sequential execution and for parallel execution with independent execution ordering. We use the event-based model to quantify the perturbation effects of instrumentation of parallel executions with ordering dependencies. Our results show that this model can be applied in practice to achieve accurate approximations. We also discuss the limitations of the time-based and event-based models.
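
The simplest of these models can be made concrete. As an illustrative sketch (not code from the dissertation; the names are invented here), a time-based model approximates the uninstrumented execution time of a sequential run by subtracting the accumulated per-event instrumentation overhead:

```python
# Illustrative sketch of a time-based perturbation model: recover an
# approximation of actual execution time by removing the accumulated
# per-event instrumentation overhead. All names are hypothetical.

def approximate_actual_time(measured_time, event_count, per_event_overhead):
    """Approximate uninstrumented execution time for a sequential run.

    measured_time      -- wall-clock time of the instrumented run (seconds)
    event_count        -- number of trace events recorded
    per_event_overhead -- measured cost of recording one event (seconds)
    """
    return measured_time - event_count * per_event_overhead

# A run measured at 12.5 s that recorded 100,000 events costing
# 20 microseconds each approximates a 10.5 s uninstrumented run.
actual = approximate_actual_time(12.5, 100_000, 20e-6)
print(round(actual, 2))  # prints 10.5
```

This uniform subtraction is also where the approach breaks down for dependent parallel executions, which is why an event-based model that accounts for event ordering is needed there.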

The potentially large volume of detailed performance data requires new approaches to presentation that can show gross performance characteristics while allowing users to focus on local performance behavior. We give several examples where performance visualization techniques have been effectively applied, plus discuss the architecture and a prototype of a general performance visualization environment.

Finally, we apply several of the performance measurement, analysis, and visualization techniques to a practical study of performance observability on the Cray X-MP and Cray 2 supercomputers. Our results show that even modest improvements in the existing set of performance tools for a particular machine can have significant benefits in performance evaluation capabilities.

[mattms01]
Matthew J. Sottile, "The design of a general method for constructing coupled scientific simulations," M.S. Thesis, University of Oregon, 2001.
Keywords: model coupling, parallel computing, control flow graph, Petri Nets

With the growth of modern high-performance computing systems, scientists are able to simulate larger and more complex systems. The most straightforward way to do this is to couple existing computational models to create models of larger systems composed of smaller sub-systems. Unfortunately, no general method exists for automating the process of coupling computational models. We present the design of such a method here. Using existing compiler technology, we assume that control flow analysis can determine the control state of models based on their source code. Scientists can then annotate the control flow graph of a model to identify points at which the model can provide data to or accept data from other models. Couplings are established between two models by establishing bindings between these control flow graphs. Translation of the control flow graph into Petri Nets allows automatic generation of coupling code to implement the couplings.
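
The binding idea can be sketched in a few lines. This is a hypothetical illustration of the concept (the names and the dictionary-based graph are inventions for this sketch, not the thesis's design):

```python
# Hypothetical sketch: couple two models by annotating points in their
# control flow graphs as data provide/accept points, then binding them.

class ModelCFG:
    """A model's control flow graph, reduced here to its annotated nodes."""
    def __init__(self, name):
        self.name = name
        self.ports = {}  # node id -> (direction, variable name)

    def annotate(self, node, direction, variable):
        # Mark a control-flow node as a point where the model can
        # provide data to, or accept data from, another model.
        self.ports[node] = (direction, variable)

def bind(src_model, src_node, dst_model, dst_node):
    """Establish a coupling between a provide point and an accept point."""
    s_dir, s_var = src_model.ports[src_node]
    d_dir, d_var = dst_model.ports[dst_node]
    assert s_dir == "provide" and d_dir == "accept"
    return (src_model.name, src_node, s_var,
            dst_model.name, dst_node, d_var)

# Couple a hypothetical ocean model to an atmosphere model.
ocean = ModelCFG("ocean")
atmos = ModelCFG("atmosphere")
ocean.annotate("end_of_timestep", "provide", "sea_surface_temp")
atmos.annotate("start_of_timestep", "accept", "sea_surface_temp")
coupling = bind(ocean, "end_of_timestep", atmos, "start_of_timestep")
```

In the thesis's approach, a binding like this would then be translated into a Petri Net from which the coupling code is generated automatically.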
[ntrebon05]
Nicholas Dale Trebon, "Performance Measurement and Modeling of Component Applications in a High Performance Computing Environment," M.S. Thesis, University of Oregon, June 2005.
Keywords: Performance measurement, modeling, component software, parallel computing, CCA, TAU

A parallel component environment places constraints on performance measurement and modeling. For instance, it must be possible to observe component operation without access to the source code. Furthermore, applications that are composed dynamically at run time require reusable performance interfaces for component interface monitoring. This thesis describes a non-intrusive, coarse-grained performance measurement framework that allows the user to gather performance data through the use of proxies that conform to these constraints. From this data, performance models for an individual component can be generated, and a performance model for the entire application can be synthesized. A validation framework is described, in which simple components with known performance models are used to validate the measurement and modeling methodologies included in the framework. Finally, a case study involving the measurement and modeling of a real scientific simulation code is also presented.
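
The proxy idea can be illustrated with a minimal sketch (an assumed, generic API, not the thesis's actual framework): a proxy exposes the same interface as the component it wraps and times each call, requiring no access to the component's source.

```python
# Minimal sketch of non-intrusive, proxy-based measurement: the proxy
# forwards every method call to the wrapped component and records the
# elapsed time per method. All names here are hypothetical.
import time

class MeasurementProxy:
    def __init__(self, component):
        self._component = component
        self.timings = {}  # method name -> accumulated seconds

    def __getattr__(self, name):
        # Called only for attributes not found normally, i.e. the
        # wrapped component's methods.
        target = getattr(self._component, name)
        def timed(*args, **kwargs):
            start = time.perf_counter()
            result = target(*args, **kwargs)
            elapsed = time.perf_counter() - start
            self.timings[name] = self.timings.get(name, 0.0) + elapsed
            return result
        return timed

class Solver:
    """Stand-in for a real component; the proxy never sees its source."""
    def step(self, n):
        return sum(range(n))

proxy = MeasurementProxy(Solver())
proxy.step(100_000)
print(sorted(proxy.timings))  # prints ['step']
```

Per-method timings gathered this way are the kind of data from which per-component models, and then an application-level model, can be synthesized.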
[salman10]
A. M. Salman, "A Software Framework for Simulation-based Scientific Investigations," Department of Computer and Information Science, University of Oregon, March 2010.
Keywords:

This thesis provides a design and development of a software architecture and programming framework that enables domain-oriented scientific investigations to be more easily developed and productively applied. The key research concept is the representation and automation of scientific studies by capturing common methods for experimentation, analysis, and evaluation used in simulation science. Such methods include parameter studies, optimization, uncertainty analysis, and sensitivity analysis. While the framework provides a generic way to conduct investigations on an arbitrary simulation, its intended use is to be extended to develop a domain computational environment. The framework hides the access to distributed system resources and the multithreaded execution. A prototype of such a framework, called ODESSI (Open Domain-oriented Environment for Simulation-based Scientific Investigation, pronounced odyssey), is developed and evaluated on realistic problems in the human neuroscience and computational chemistry domains. ODESSI was inspired by domain problems encountered in the computational modeling of human head electromagnetics for conductivity analysis and source localization. In this thesis we provide tools and methods to solve state-of-the-art problems in head modeling. In particular, we developed an efficient and robust HPC solver for the forward problem and a generic robust HPC solver for the bEIT (bounded Electrical Impedance Tomography) inverse problem to estimate the head tissue conductivities. We also formulated a method to include skull inhomogeneity and other skull variations in the head model based on information obtained from experimental studies. ODESSI as a framework is used to demonstrate the research ideas in this neuroscience domain, and the domain investigation results are discussed in this thesis. ODESSI supports both the processing of investigation activities and the management of an evolving record of information, results, and provenance.
[shendephd]
S. Shende, "The Role of Instrumentation and Mapping in Performance Measurement," Ph.D. Dissertation, University of Oregon, August 2001.
Keywords: Instrumentation, SEAA, Mapping, Instrumentation-Aware Compilation

Technology for empirical performance evaluation of parallel programs is driven by the increasing complexity of high performance computing environments and programming methodologies. This complexity, arising from the use of high-level parallel languages, domain-specific numerical frameworks, heterogeneous execution models and platforms, multi-level software optimization strategies, and multiple compilation models, widens the semantic gap between a programmer's understanding of his/her code and its runtime behavior. To keep pace, performance tools must provide for the effective instrumentation of complex software and the correlation of runtime performance data with user-level semantics. To address these issues, this dissertation contributes:
* a strategy for utilizing multi-level instrumentation to improve the coverage of performance measurement in complex, layered software;
* techniques for mapping low-level performance data to higher levels of abstraction in order to reduce the semantic gap between users' abstractions and runtime behavior; and
* the concept of instrumentation-aware compilation that extends traditional compilers to preserve the semantics of fine-grained performance instrumentation despite aggressive program restructuring.
In each case, the dissertation provides prototype implementations and case studies of the needed tools and frameworks. This dissertation research aims to influence the way performance observation tools and compilers for high performance computers are designed and implemented.

### Technical Reports

[csrd86]
W. Abu-Sufah and A. Malony, "Experimental Results for Vector Processing on the Alliant FX/8," CSRD Tech Report #549, UIUC, Feb. 1986.
Keywords: Alliant, Vector processing, Cedar, performance measurement.

The Alliant FX/8 multiprocessor implements several high-speed computation ideas in software and hardware. Each of the 8 computational elements (CEs) has vector capabilities and multiprocessor support. Generally, the FX/8 delivers its highest processing rates when executing vector loops concurrently. In this paper, we present extensive empirical performance results for vector processing on the FX/8. The vector kernels of the LANL BMK8a1 benchmark are used in the experiments.
[csrd87]
K. Gallivan, W. Jalby, A. Malony and P.-C. Yew, "Performance Analysis on the Cedar System," CSRD Report No. 680, University of Illinois at Urbana-Champaign, Sept. 1987.
Keywords: Cedar, performance measurement.

To understand the complex interactions of the many factors contributing to supercomputer performance, supercomputer designers and users must have access to an integrated performance analysis system capable of measuring, analyzing, modeling, and predicting performance across a hierarchy of details and goals. The performance analysis system being developed for the CEDAR multiprocessor supercomputer embodies these characteristics and is discussed in this paper.
[csrd88]
A. D. Malony, "Regular Processor Arrays," CSRD Report No. 734, UIUC, Jan. 1988.
Keywords: regularity, processor arrays, emulation, interconnection networks

Regular is an often-used term to suggest the simple and uniform structure of a parallel processor's organization or a parallel algorithm's operation. However, a strict definition is long overdue. In this paper, we define regularity for processor array structures in two dimensions and enumerate the eleven distinct regular topologies. Space and time emulation schemes among the regular processor arrays are constructed to compare their geometric and performance characteristics. We also show how algorithms developed for one regular processor array might be transferred to another regular array, using matrix multiplication and LU decomposition as examples.
[csrd88.1]
Allen D. Malony, and Joseph R. Pickert, "An Environment Architecture and its use in Performance Data Analysis," Center for Supercomputing Research and Development, Technical Report 829, University of Illinois, Urbana-Champaign, Illinois, Oct. 1988.
Keywords:
[immd94]
A. D. Malony, V. Mertsiotakis, A. Quick, "Stochastic Modeling of Scaled Parallel Programs," Technical Report, Universität Erlangen-Nürnberg, IMMD VII, 1994.
Keywords: Stochastic modeling, PEPP

Testing the performance scalability of parallel programs can be a time consuming task, involving many performance runs for different computer configurations, processor numbers, and problem sizes. Ideally, scalability issues would be addressed during parallel program design, but tools are not presently available that allow program developers to study the impact of algorithmic choices under different problem and system scenarios. Hence, scalability analysis is often reserved to existing (and available) parallel machines as well as implemented algorithms. In this paper, we propose techniques for analyzing scaled parallel programs using stochastic modeling approaches. Although allowing more generality and flexibility in analysis, stochastic modeling of large parallel programs is difficult due to solution tractability problems. We observe, however, that the complexity of parallel program models depends significantly on the type of parallel computation, and we present several computation classes where tractable, approximate graph models can be generated. Our approach is based on a parallelization description of programs to be scaled. From this description, scaled stochastic graph models are automatically generated. Different approximate models are used to compute lower and upper bounds of the mean runtime. We present evaluation results of several of these scaled (approximate) models and compare their accuracy and modeling expense (i.e., time to solution) with other solution methods implemented in our modeling tool PEPP. Our results indicate that accurate and efficient scalability analysis is possible using stochastic modeling together with model approximation techniques.
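
The flavor of the bounding approach can be shown with a toy sketch (an invented illustration, not PEPP's actual solution method): for a fork-join of independent tasks, the mean of the slowest branch bounds the mean runtime from below, the sum of the branch means bounds it from above, and simulation gives an estimate in between.

```python
# Toy sketch: lower/upper bounds and a simulated estimate of the mean
# runtime of a fork-join with exponentially distributed task times.
# Illustrative only; the names and the model are hypothetical.
import random

def bounds_and_estimate(task_means, trials=20_000, seed=1):
    lower = max(task_means)   # E[max of tasks] >= max of task means
    upper = sum(task_means)   # serializing all tasks is an upper bound
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.expovariate(1.0 / m) for m in task_means)
    return lower, total / trials, upper

# Three identical tasks with mean 1.0: the true mean of the maximum of
# three exp(1) variables is 1 + 1/2 + 1/3, about 1.83.
lo, est, hi = bounds_and_estimate([1.0, 1.0, 1.0])
assert lo <= est <= hi
```

The gap between such bounds is what makes approximate graph models useful: when the lower and upper bounds are tight, the expensive exact solution can be skipped.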
[sandia03]
J. Ray, N. Trebon, R. C. Armstrong, S. Shende, and A. Malony, "Performance Measurement and Modeling of Component Applications in a High Performance Computing Environment: A Case Study, " Technical Report SAND2003-8631, Sandia National Laboratories, Livermore, CA, Nov. 2003.
Keywords: CCA, Performance modeling, CFRFS Combustion, TAU

We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and construct performance models for two of them. Both computational and message-passing performance are addressed.
[sandia03b]
N. Trebon, J. Ray, S. Shende, R. C. Armstrong, and A. Malony, "An Approximate Method for Optimizing HPC component Applications in the Presence of Multiple Component Implementations," Technical Report SAND2003-8760C, Sandia National Laboratories, Livermore, CA, December 2003. Available from [http://infoserve.sandia.gov/sand_doc/2003/038760c.pdf]
Keywords: Performance, CCA, proxy components, TAU

The Common Component Architecture allows computational scientists to adopt a component-based architecture for scientific simulation codes. Components, which in the scientific context usually embody a numerical solution facility or a physical or numerical model, are composed at runtime into a simulation code by loading in an implementation of a component and linking it to others. However, a component may admit multiple implementations, based on the choice of algorithm, data structure, parallelization strategy, etc., leaving the user with the problem of choosing the correct implementation to achieve an optimal (fastest) component assembly. Under the assumption that a performance model exists for each implementation of each component, simply choosing the optimal implementation of each component does not guarantee an optimal component assembly, since components interact with each other. An optimal solution may be obtained by evaluating the performance of all possible realizations of a component assembly given the components and all their implementations, but the exponential complexity renders the approach infeasible as the number of components and their implementations grows. We propose an approximate approach predicated on the existence, identification, and optimization of computationally dominant sub-assemblies (cores). We propose a simple criterion to test for the existence of such cores and a set of rules to prune a component assembly and expose its dominant cores. We apply this approach to data obtained from a CCA component code simulating shock-induced turbulence on four processors and present preliminary results regarding the efficacy of this approach and the sensitivity of the final solution to various parameters in the rules.
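
The exponential baseline the report improves upon is easy to make concrete. The sketch below (invented cost numbers and names, not the report's data) enumerates every realization of a two-component assembly; with k implementations per component and n components, k^n assemblies must be evaluated, which is what motivates pruning down to dominant cores.

```python
# Sketch of exhaustive component-assembly optimization: evaluate every
# combination of implementations, including interaction costs. The cost
# model and all names are invented for illustration.
from itertools import product

def best_assembly(impl_costs, interaction_cost):
    """impl_costs: one dict per component mapping implementation name to
    its standalone cost. interaction_cost(choice): coupling penalty for
    a full assembly. Returns (implementation names, total cost)."""
    best = None
    for choice in product(*(costs.items() for costs in impl_costs)):
        total = sum(cost for _, cost in choice) + interaction_cost(choice)
        if best is None or total < best[1]:
            best = ([name for name, _ in choice], total)
    return best

# Two components, two implementations each: 2 x 2 = 4 assemblies checked.
costs = [{"dense": 5.0, "sparse": 3.0}, {"mpi": 4.0, "threads": 5.0}]
def penalty(choice):
    # Invented coupling penalty: this particular pair interacts badly.
    return 2.0 if (choice[0][0], choice[1][0]) == ("sparse", "mpi") else 0.0

print(best_assembly(costs, penalty))  # prints (['sparse', 'threads'], 8.0)
```

Note that the per-component optima ("sparse" and "mpi") do not form the optimal assembly here; that interaction effect is exactly why choosing each component's best implementation in isolation is insufficient.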
[tr9321]
Steven T. Hackstadt and Allen D. Malony, Data Distribution Visualization (DDV) for Performance Visualization, University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-93-21, October 1993.
Keywords: data distribution visualization, scientific visualization

The next generation of language compilers for parallel architectures offers levels of abstraction above those currently available. Languages such as High Performance Fortran (HPF) and Parallel C++ (pC++) allow the programmer to specify how data structures are to be aligned relative to each other and then distributed across processors. Since a program's performance is often directly related to how its data is distributed, a means of evaluating data distributions and alignments is necessary. Since there is a natural tendency to explain data distributions by drawing pictures, graphical visualizations may be helpful in assessing the benefits and detriments of a given data decomposition. This paper formulates an experimental framework for exploring visualization techniques appropriate to evaluating data distributions. Visualizations are created using IBM's Data Explorer visualization software in conjunction with other software developed by the author. An informal assessment of the resulting visualizations and an explanation of how this research will be extended is also given.
[tr9323]
Steven T. Hackstadt and Allen D. Malony, Next-Generation Parallel Performance Visualization: A Prototyping Environment for Visualization Development, Proc. of the Parallel Architectures and Languages Europe (PARLE) Conference, Athens, Greece, July 1994, pp. 192-201. Also available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-93-23, October 1993.
Keywords: parallel performance visualization, scientific visualization, visualization prototyping

A new design process for the development of parallel performance visualizations that uses existing scientific data visualization software is presented. Scientific visualization tools are designed to handle large quantities of multi-dimensional data and create complex, three-dimensional, customizable displays which incorporate advanced rendering techniques, animation, and display interaction. Using a design process that leverages these tools to prototype new performance visualizations can lead to drastic reductions in the graphics and data manipulation programming overhead currently experienced by performance visualization developers. The process evolves from a formal methodology that relates performance abstractions to visual representations. Under this formalism, it is possible to describe performance visualizations as mappings from performance objects to view objects, independent of any graphical programming. Implementing this formalism in an existing data visualization system leads to a visualization prototype design process consisting of two components corresponding to the two high-level abstractions of the formalism: a trace transformation (i.e., performance abstraction) and a graphical transformation (i.e., visual abstraction). The trace transformation changes raw trace data to a format readable by the visualization software, and the graphical transformation specifies the graphical characteristics of the visualization. This prototyping environment also facilitates iterative design and evaluation of new and existing displays. Our work examines how an existing data visualization tool, IBM's Data Explorer in particular, can provide a robust prototyping environment for next-generation parallel performance visualization.
[tr9409]
Steven T. Hackstadt, Allen D. Malony, and Bernd Mohr, Scalable Performance Visualization for Data-Parallel Programs, Proc. of the Scalable High Performance Computing Conference (SHPCC), Knoxville, TN, May 1994, pp. 342-349. Also available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-94-09, March 1994.
Keywords: scalable performance visualization, scientific visualization, pC++, data-parallel programming

Developing robust techniques for visualizing the performance behavior of parallel programs that can scale in problem size and/or number of processors remains a challenge. In this paper, we present several performance visualization techniques based on the context of data-parallel programming and execution that demonstrate good visual scalability properties. These techniques are a result of utilizing the structural and distribution semantics of data-parallel programs as well as sophisticated three-dimensional graphics. A categorization and examples of scalable performance visualizations are given for programs written in Dataparallel C and pC++.
[tr9515]
Steven T. Hackstadt, Prototyping Advanced Parallel Program and Performance Visualizations, University of Oregon, Department of Computer and Information Science, Masters Thesis, June 1994. Also available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-95-15, June 1995.
Keywords: visualization prototyping, scientific visualization, parallel performance visualization, program visualization, case studies, scalable visualization, data distribution visualization

A new visualization design process for the development of parallel program and performance visualizations using existing scientific data visualization software can drastically reduce the graphics and data manipulation programming overheads currently experienced by visualization developers. Data visualization tools are designed to handle large quantities of multi-dimensional data and create complex, three-dimensional, customizable displays which incorporate advanced rendering techniques, animation, and display interaction. These capabilities can be used to improve performance visualization, but to be effective, they must be applied as part of a formal methodology relating performance data to visual representations. Under such a formalism, it is possible to describe performance visualizations as mappings from performance data objects to view objects, independent of any graphical programming. Through three case studies, this work examines how an existing scientific visualization tool, IBM's Data Explorer, provides a robust environment for prototyping next-generation parallel performance visualizations.
[tr9602]
Steven T. Hackstadt and Allen D. Malony, Distributed Array Query and Visualization for High Performance Fortran, Proc. of Euro-Par '96, Lyon, France, August 1996, pp. 55-63. Also available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-96-02, February 1996.
Keywords: visualization, distributed data access, hpf, parallel tool, runtime program interaction, tool framework

This paper describes the design and implementation of the Distributed Array Query and Visualization (DAQV) system for High Performance Fortran, a project sponsored by the Parallel Tools Consortium. DAQV's implementation leverages the HPF language, compiler, and runtime system to address the general problem of providing high-level access to distributed data structures. DAQV supports a framework in which visualization and analysis clients connect to a distributed array server (i.e., the HPF application with DAQV control) for program-level access to array values. Implementing key components of DAQV in HPF itself has led to a robust and portable solution in which clients do not need to know how the data is distributed.
[tr9605]
Harold H. Hersey, Steven T. Hackstadt, Lars T. Hansen, and Allen D. Malony, Viz: A Visualization Programming System, University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-96-05, April 1996.
Keywords: programmable visualization system, scheme, open inventor, data reactivity, scientific visualization, animation

This paper describes the design and implementation of a high-level visualization programming system called Viz. Viz was created out of a need to support rapid visualization prototyping in an environment that could be extended by abstractions in the application problem domain. Viz provides this in a programming environment built on a high-level, interactive language (Scheme) that embeds a 3D graphics library (Open Inventor), and that utilizes a data reactive model of visualization operation to capture mechanisms that have been found to be important in visualization design (e.g., constraints, controlled data flow, dynamic analysis, animation). The strength of Viz is in its ability to create non-trivial visualizations rapidly and to construct libraries of 3D graphics functionality easily. Although our original focus was on parallel program and performance data visualization, Viz applies beyond these areas. We show several examples that highlight Viz functionality and the visualization design process it supports.
[tr9708]
Steven T. Hackstadt, Domain-Specific Metacomputing for Computational Science: Achieving Specificity Through Abstraction, University of Oregon, Department of Computer and Information Science, Oral Comprehensive Exam Position Paper, September 1997. Available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-97-08, November 1997.
Keywords: metacomputing, heterogeneous computing, domain-specific environments, DSE, software architecture, domain-specific software architecture, DSSA, computational science

A new area called domain-specific metacomputing for computational science is defined. This area cuts across the larger areas of parallel and distributed computing, computational science, and software engineering in search of techniques and technology that will better allow the creation of useful tools for computational scientists. The paper focuses on how metacomputing, domain-specific environments, and software architectures can be employed as key technologies to this end.
[tr9802]
Steven T. Hackstadt, Christopher W. Harrop, and Allen D. Malony, A Framework for Interacting with Distributed Programs and Data, Proceedings of the Seventh IEEE International Symposium on High Performance Distributed Computing (HPDC7), Chicago, IL, July 28-31, 1998. Also available as University of Oregon, Department of Computer and Information Science, Technical Report CIS-TR-98-02, June 1998.
Keywords: parallel tools, distributed arrays, visualization, computational steering, model coupling, runtime interaction, data access, Fortran 90

The Distributed Array Query and Visualization (DAQV) project aims to develop systems and tools that facilitate interacting with distributed programs and data structures. Arrays distributed across the processes of a parallel or distributed application are made available to external clients via well-defined interfaces and protocols. Our design considers the broad issues of language targets, models of interaction, and abstractions for data access, while our implementation attempts to provide a general framework that can be adapted to a range of application scenarios. The paper describes the second generation of DAQV work and places it in the context of the more general distributed array access problem. Current applications and future work are also described.

### Talks and Presentations

[EuroPVM06]
K. Huck, A. Malony, S. Shende and A. Morris. "TAUg: Runtime Global Performance Data Access using MPI." EuroPVM/MPI Conference, September 2006
Keywords: TAU, TAUg, global data access, performance monitoring, online performance adaptation
[STHEC08b]
A. Nataraj, A. Malony, A. Morris, D. Arnold, B. Miller, "TAUoverMRNet (ToM): A Framework for Scalable Parallel Performance Monitoring", Presented at STHEC'08: International Workshop on Scalable Tools for High-End Computing, held in conjunction with the International Conference on Supercomputing (ICS 2008)
Keywords: performance, monitoring, tree-based, overlay, TAU, MRNet
[acts01]
S. Shende, "Tuning and Analysis Utilities," presentation at ACTS Toolkit Workshop, "Solving Problems in Science and Engineering," LBNL, NERSC, Berkeley, CA, Oct. 10-13, 2001.
Keywords: TAU, PDT
[acts02]
Allen D. Malony, "The TAU Performance System." Presented at the DOE ACTS workshop September 2002
Keywords: TAU, Instrumentation, Measurement, Analysis, Performance Mapping, PETSc
[acts03b]
Sameer Shende, "Tuning and Analysis Utilities."
Keywords: TAU, PDT, pprof, openMP
[acts05b]
Sameer Shende, Allen D. Malony, "TAU Performance System." Presented at ACTS Workshop 2005
Keywords: Performance problem solving, TAU, PDT, ParaProf, PerfExplorer, PerfDMF
[acts06]
S. Shende, A. D. Malony, and A. Morris, "TAU Performance System," talk at ACTS Workshop, LBL, Aug. 2006.
Keywords: TAU, PDT, Vampir, VNG
[acts_sc00]
A. D. Malony, S. Shende, R. A. Bell, "Parallel Program Analysis Framework for the DOE ACTS Toolkit," presentation at NERSC ACTS booth, SC'00, 2000.
Keywords: TAU, PDT, ACTS
[apart02]
Allen D. Malony, "TAU Performance DataBase Framework (PerfDBF)." Presented at EuroPar 2002
Keywords: PerfDBF, TAU, X-PARE
[arl02]
Sameer Shende, Allen D. Malony, "Performance Technology for Complex Parallel Systems", Talk at Army Research Lab (Aberdeen Proving Ground), MD, Sept. 2002.
Keywords: TAU, PDT, PAPI, Opari
[bbmi05]
"Neuroinformatics Research at UO" Presented for the Brain Biology Machine Initiative February 2005
Keywords: Neuroinformatics, EGI, EEG
[bgl06]
A. Morris, S. Shende, A. Malony, "TAU Performance System", presentation at BGL Workshop, Tokyo 2006
Keywords: TAU, BGL
[bio04]
Allen D. Malony, "Neuroinformatics, the ICONIC Grid, and Oregon's Science Industry." Presented to the 2004 Bioscience Conference
Keywords: Neuroinformatics, ICONIC Grid, Oregon's Science Industry
[cacr]
"Performance Technology for Complex Parallel Systems."
Keywords: TAU, PDT, pprof
[cca03]
Allen D. Malony, Sameer Shende, "Performance Engineering Technology for Complex Scientific Component Software." Presented at Pasadena CCA Meeting January 2003
Keywords: TAU, CCA, Performance Engineered Component Software, PDT, Measurement Port
[cca04]
Sameer Shende, "Generating Proxy Components using PDT." Presented at Boulder CCA Meeting April 2004
Keywords: TAU, PDT, Proxy Components, CCA
[cca05]
Sameer Shende, Alan Morris, "Advances in the TAU Performance System."
Keywords: TAU, PDT, pprof, CCA, ParaProf
[cluster06]
A. Nataraj, A. Malony, S. Shende, A. Morris, "Kernel-Level Measurement for Integrated Parallel Performance Views: the KTAU Project,"
Keywords: kernel measurement, KTAU, TAU
[cluster08]
A. Nataraj, A. Malony, A. Morris, D. Arnold, B. Miller, "In Search of Sweet-Spots in Parallel Performance Monitoring", Presented at International Conference on Cluster Computing, Tsukuba, Japan, September 2008
Keywords: performance, monitoring, tree-based, overlay, TAU, MRNet
[compframe05talk]
N. Trebon, A. Morris, J. Ray, S. Shende, and A. Malony, "Performance Modeling of Component Assemblies with TAU," presentation at the CompFrame05 conference, Atlanta, 2005.
Keywords: Performance measurement, modeling, component software, parallel computing, CCA, TAU
[cug06talk]
S. Shende, A. D. Malony, A. Morris, P. Beckman, "Performance and Memory Evaluation using TAU," Presentation at the Cray User's Group conference (CUG'06), May 2006.
Keywords: TAU, PDT, Cray XT3, Catamount, Memory evaluation

The TAU performance system is an integrated performance instrumentation, measurement, and analysis toolkit offering support for profiling and tracing modes of measurement. This paper introduces memory introspection capabilities of TAU featured on the Cray XT3 Catamount compute node kernel. TAU supports examining the memory headroom, or the amount of heap memory available, at routine entry, and correlates it to the program's callstack as an atomic event.
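The headroom-at-entry idea described in this abstract can be sketched in a few lines. This is a hypothetical analogue, not TAU's implementation: `available_heap` is a stand-in probe, and the call-path bookkeeping is deliberately simplified.

```python
# Illustrative sketch (not TAU itself) of recording memory headroom as an
# atomic event at routine entry, correlated with the current call path,
# as the abstract above describes. available_heap() is a stand-in probe.

import functools

call_stack = []          # names of currently active routines
headroom_events = []     # (call-path string, headroom in bytes) samples

def available_heap() -> int:
    """Stand-in for a real headroom probe; a fixed number for the sketch."""
    return 1 << 20

def instrumented(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        call_stack.append(fn.__name__)
        # Atomic event: sample the headroom at entry, keyed by call path.
        headroom_events.append((" => ".join(call_stack), available_heap()))
        try:
            return fn(*args, **kwargs)
        finally:
            call_stack.pop()
    return wrapper

@instrumented
def inner():
    return 42

@instrumented
def outer():
    return inner()

outer()
```

After the run, `headroom_events` holds one sample per routine entry, each tagged with the call path active at that moment, which is what lets headroom be correlated to the program's callstack.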
[dagstuhl02]
Allen D. Malony, Sameer Shende, "Advances in the TAU Performance System." Presented at Dagstuhl Conference August 2002
Keywords: TAU, Performance System, Performance Technology, Instrumentation Control, Performance Mapping, Performance Data Interaction
[dapsys00_talk]
A. D. Malony, S. Shende, "Performance Technology for Complex Parallel and Distributed Systems," presentation at DAPSYS 2000 conference, 2000.
Keywords: TAU, PDT
[dodugc04]
"TAU Parallel Performance System."
Keywords: TAU, DOD, pprof, Code Transformation and Instrumentation, ParaProf
[epvmmpi05]
Sameer Shende, Allen D. Malony, Alan Morris, Felix Wolf, "Performance Profiling Overhead Compensation for MPI Programs." Presented at EuroPVM-MPI
[erdc04]
Sameer Shende, Allen D. Malony, "Performance Optimization and Tools for HPC Architectures using TAU." Presented at ERDC October 2004
Keywords: TAU, PDT, Performance Mapping, CCA, ParaProf, pprof
[esmf04]
Sameer Shende, Nancy Collins, "Using TAU Performance Technology in ESMF." Presented at ESMF Team Meeting July 2004
Keywords: TAU, PDT, MPI, ESMF
[europar02]
Allen D. Malony, "TAU Performance DataBase Framework (PerfDBF)." Presented at APART EuroPar 2002 workshop
Keywords: TAU, Performance Database Framework, XML profile data representation, X-PARE
[europar03]
Robert Bell, Allen D. Malony, Sameer Shende, "ParaProf: A Portable, Extensible, and Scalable Tool for Parallel Performance Profile Analysis." Presented at EuroPar August 2003
Keywords: ParaProf, Profiling Tools
[europar07]
A. Nataraj, M. Sottile, A. Morris, A. D. Malony, S. Shende. "TAUoverSupermon (ToS): Low-Overhead Online Parallel Performance Monitoring." Presented at Euro-Par 2007.
Keywords: TAU, Supermon, Performance Monitoring, Online Monitoring
[europara06]
A. Nataraj, A. Malony, A. Morris, S. Shende, "Early Experiences with KTAU on the IBM Blue Gene/L," Euro-Par 2006.
Keywords: TAU, zeptoOS, KTAU, phase profiling, kernel profiling
[ewomp01talk]
B. Mohr, A. D. Malony, S. Shende, and F. Wolf, "Towards a Performance Tool Interface for OpenMP: An Approach Based on Directive Rewriting," Presentation at EWOMP'01 Third European Workshop on OpenMP, Sept. 2001.
Keywords: OpenMP, directive rewriting, instrumentation interface, TAU, EXPERT

In this article we propose a "standard" performance tool interface for OpenMP, similar in spirit to the MPI profiling interface in its intent to define a clear and portable API that makes OpenMP execution events visible to performance libraries. When used together with the MPI profiling interface, it also allows tools to be built for hybrid applications that mix shared and distributed memory programming. We describe an instrumentation approach based on OpenMP directive rewriting that generates calls to the interface and passes context information (e.g., source code locations) in a portable and efficient way. Our proposed OpenMP performance API further allows user functions and arbitrary code regions to be marked and performance measurement to be controlled using new proposed OpenMP directives. The directive transformations we define are implemented in a source-to-source translation tool called OPARI. We have used it to integrate the TAU performance analysis framework and the automatic event trace analyzer EXPERT with the proposed OpenMP performance interface. Together, these tools show that a portable and robust solution to performance analysis of OpenMP and hybrid applications is possible.
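The directive-rewriting idea can be illustrated with a toy sketch (written in Python for brevity; OPARI itself is a source-to-source translator for C, C++, and Fortran). The `POMP_Parallel_fork`/`POMP_Parallel_join` call names and the `region1` handle are illustrative placeholders for the proposed interface, and this regex-based version only handles a directive followed by a single-line region body:

```python
# Toy sketch of OPARI-style directive rewriting: wrap an OpenMP parallel
# region with enter/exit calls to a (hypothetical) performance interface.
# Only the simplest case is handled: "#pragma omp parallel" followed by a
# one-line region body. Real OPARI does full source-to-source translation.

import re

def rewrite(source: str) -> str:
    out = []
    lines = source.splitlines()
    i = 0
    while i < len(lines):
        line = lines[i]
        if re.match(r"\s*#pragma omp parallel\b", line):
            # Fork call before the directive, join call after the region
            # body (assumed here to be the single following line).
            out.append("POMP_Parallel_fork(&region1);")  # illustrative names
            out.append(line)
            out.append(lines[i + 1])
            out.append("POMP_Parallel_join(&region1);")
            i += 2
        else:
            out.append(line)
            i += 1
    return "\n".join(out)

print(rewrite("#pragma omp parallel\n{ work(); }"))
```

The transformation preserves the original directive and body, so the program's semantics are unchanged; the inserted calls simply make the fork and join events visible to a measurement library.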
[gridperf02]
Allen D. Malony, "Performance Technology for Scientific (Parallel and Distributed) Component Software." Presented at Grid Performance Workshop 2002
Keywords: Grid, Performance Engineering, Performance Technology
[hbp00]
A. D. Malony, "Distributed Computational Architectures for Integrated Time-Dynamic Neuroimaging," HBP Neuroinformatics conference, 2000.
Keywords: neuroinformatics
[hill01]
A. D. Malony, "Distributed Computational Architectures for Integrated Time-Dynamic Neuroimaging," presentation at Hill Center, 2001.
Keywords: neuroinformatics
[hill05]
Allen D. Malony, "Distributed Computational Architectures for Integrated Time-Dynamic Neuroimaging." Presented at The Hill Center November 2005
Keywords: Computational Science, Cognitive Neuroscience, Brain Dynamics, Computational Architectures
[ibm06]
A. Nataraj, "TAU: Recent Advances; KTAU: Kernel-Level Measurement for Integrated Parallel Performance Views; TAUg: Runtime Global Performance Data Access Using MPI."
Keywords: KTAU, TAUg, perturbation, zeptoOS, global performance, load balancing
[iccs03talk]
J. Dongarra, A. D. Malony, S. Moore, P. Mucci, and S. Shende, "Performance Instrumentation and Measurement for Terascale Systems," Proc. Terascale Performance Analysis Workshop, International Conference on Computational Science (ICCS 2003), 2003.
Keywords: TAU, PAPI, Perfometer, instrumentation, measurement, performance analysis, terascale

As computer systems grow in size and complexity, tool support is needed to facilitate the efficient mapping of large-scale applications onto these systems. To help achieve this mapping, performance analysis tools must provide robust performance observation capabilities at all levels of the system, as well as map low-level behavior to high-level program constructs. Instrumentation and measurement strategies, developed over the last several years, must evolve together with performance analysis infrastructure to address the challenges of new scalable parallel systems.
[iscope01_talk]
S. Shende and A. D. Malony, "Integration and Application of the TAU Performance System in Parallel Java Environments," presentation at the Joint ACM Java Grande - ISCOPE 2001 Conference, June 2001.
Keywords: TAU, Java, MPI,

Parallel Java environments present challenging problems for performance tools because of Java's rich language system and its multi-level execution platform combined with the integration of native-code application libraries and parallel runtime software. In addition to the desire to provide robust performance measurement and analysis capabilities for the Java language itself, the coupling of different software execution contexts under a uniform performance model needs careful consideration of how events of interest are observed and how cross-context parallel execution information is linked. This paper relates our experience in extending the TAU performance system to a parallel Java environment based on mpiJava. We describe the complexities of the instrumentation model used, how performance measurements are made, and the overhead incurred. A parallel Java application simulating the game of Life is used to show the performance system's capabilities.
[ishpc02]
"Integrating Performance Analysis in the Uintah Software Development Cycle." Presented at ISHPC 2002
Keywords: Scientific Software Engineering, C-SAFE, Uintah Computational Framework, TAU, Performance Mapping, Performance Analysis
[javaics2ktalk]
Sameer Shende, and Allen D. Malony, "Performance Tools for Parallel Java Environments," slides from talk at Second Workshop on Java for High Performance Computing, ICS 2000, Santa Fe, May 2000.
Keywords: Java, TAU, parallel, MPI, profiling, tracing, performance evaluation

Parallel Java environments present challenging problems for performance tools because of Java's rich language system and its multi-level execution platform combined with the integration of native-code application libraries and parallel runtime software. In addition to the desire to provide robust performance measurement and analysis capabilities for the Java language itself, the coupling of different software execution contexts under a uniform performance model needs careful consideration of how events of interest are observed and how cross-context parallel execution information is linked. This paper relates our experience in extending the TAU performance system to a parallel Java environment based on mpiJava. We describe the instrumentation model used, how performance measurements are made, and the overhead incurred. A parallel Java application simulating the game of life is used to show the performance system's capabilities.
[kt]
"KTAU - Kernel Tuning and Analysis Utilities."
Keywords: KTAU, Kernel, BG/L, KTAU-D
[ktb]
Aroon Nataraj, Suravee Suthikulpanit, "KTAU: Kernel TAU."
Keywords: KTAU
[lacsi00]
Sameer Shende, "TAU: New Directions", presentation at Parallel Software Tools Workshop, LACSI 2000 Symposium, Aug 28-30, 2000, Santa Fe, NM.
Keywords: TAU, PDT, profiling, tracing, directions

This talk describes the current status (as of Aug 2000) of TAU and new research directions.
[lacsi01]
Sameer Shende, "The TAU Performance System: Advances in Performance Mapping."
Keywords: TAU, PDT, POOMA, Performance Mapping
[lacsi02]
"Performance Technology for Component Software," Allen D. Malony, Sameer S. Shende, Presentation at Performance Tools Workshop, Los Alamos Computer Science Institute Symposium (LACSI'02), Santa Fe, NM, Oct. 2002.
Keywords: CCA, TAU, CCAFEINE, Port, Measurement, POC, PKC
[lacsiworkshop01]
S. Shende, "The TAU Performance System: Advances in Performance Mapping," presentation at "Tools for Performance Analysis of large Scale Applications," workshop, LACSI 2001 Symposium, Santa Fe, NM, Oct. 15-18, 2001.
Keywords: TAU, PDT, mapping
[llnl01]
A. D. Malony, S. Shende, and R. A. Bell, "TAU Performance System: Developments and Evolution," presentation at LLNL, 2001.
Keywords: TAU, PDT
[llnl02]
Allen D. Malony, Sameer Shende, "Recent Advances in the TAU Performance System." Presented at LLNL September 2002
Keywords: TAU, Instrumentation Control, Performance Mapping, Component Software Performance Analysis
[llnl04]
Allen D. Malony, "Performance Technology for Productive, High-End Parallel Computing." Presented at LLNL October 2004
Keywords: Performance Technology, Autonomic Performance Tools, TAU, Performance Data Mining
[llnl05]
Sameer Shende, Allen D. Malony, "TAU: Performance Technology for Productive, High Performance Computing." Presented at LLNL 2005
Keywords: TAU, PDT, ParaProf, PerfExplorer
[llnl06]
"TAU Performance System." Sameer Shende, Allen D. Malony. Presented at Lawrence Livermore National Laboratory January 2006
Keywords: TAU, ParaProf, Vampir, performance data management, data mining, PerfExplorer, clustering analysis
[navo04]
Sameer Shende, Allen D. Malony, Robert Bell, "The TAU Performance Technology for Complex Parallel Systems." Presented at NASA Stennis Space Center March 2004
Keywords: TAU, PDT, pprof, ParaProf, PerfDMF, OPARI
[nrl04]
Sameer Shende, Allen D. Malony, Robert Bell (University of Oregon), "The TAU Performance Technology for Complex Parallel Systems." Presented at NRL D.C. BYOC Workshop August 2004
Keywords: TAU, Callpath, PDT, ParaProf
[omp01]
A. D. Malony, "Performance Tools Interface for OpenMP," a presentation to the OpenMP Futures Committee, 2001.
Keywords:
[ornl05]
Allen D. Malony, "Performance Technology for Productive, High-End Parallel Computing." Presented at ORNL 2005
Keywords: performance technology, Autonomic Performance Tools, TAU, Performance Data Mining, PerfDMF, PerfExplorer, Comparative analysis, Clustering analysis
[papi03]
Jack Dongarra, Shirley Moore, Philip Mucci, Sameer Shende, and Allen Malony, "Performance Instrumentation and Measurement for Terascale Systems." 2003
Keywords: Performance Evaluation, DyninstAPI, TAU, PDT, OPARI, ParaProf, PAPI, DynaProf, Perfometer
[para06a]
S. Shende, A. Malony, A. Morris, "Optimization of Instrumentation in Parallel Performance Evaluation Tools," Performance Research Laboratory, Department of Computer and Information Science University of Oregon, Eugene, OR, USA.
Keywords: Instrument optimization, selective instrumentation, measurement, Performance measurement and analysis, parallel computing
[para06b]
S. Shende, A. Malony, A. Morris, "Workload Characterization using the TAU Performance System," Performance Research Laboratory, Department of Computer and Information Science University of Oregon, Eugene, OR, USA.
Keywords: Performance mapping, measurement, instrumentation, performance evaluation, workload characterization
[parco03]
Allen D. Malony, Sameer Shende, Robert Bell, "Online Performance Monitoring, Analysis, and Visualization of Large-Scale Parallel Applications." Presented at ParCo 2003
Keywords: Measurement Intrusion, Online Performance Analysis, TAU
[parco05]
Allen D. Malony, Sameer Shende, Alan Morris, "Phase-Based Parallel Performance Profiling." Presented at ParCo 2005
Keywords: Callpath, Phase Profiling, Performance Mapping, NAS Parallel Benchmarks
[parco07]
K. Huck, A. Malony, S. Shende, A. Morris, "Scalable, Automated Parallel Performance Analysis with TAU, PerfDMF and PerfExplorer." Presented at International Conference on Parallel Computing (ParCo) September 2007.
Keywords: TAU, PerfDMF, PerfExplorer, phase profiling, metadata, regression analysis
[pdpta01_talk]
S. Shende, A. D. Malony, R. Ansell-Bell, "Instrumentation and Measurement Strategies for Flexible and Portable Empirical Performance Evaluation," presentation at Tools and Techniques for Performance Evaluation Workshop, PDPTA'01, C.S.R.E.A., June 2001.
Keywords: TAU, instrumentation, measurement, MPI, DyninstAPI

Flexibility and portability are important concerns for productive empirical performance evaluation. We claim that these features are best supported by robust instrumentation and measurement strategies, and their integration. Using the TAU performance system as an exemplar performance toolkit, a case study in performance evaluation is considered. Our goal is both to highlight flexibility and portability requirements and to consider how instrumentation and measurement techniques can address them. The main contribution of the paper is methodological, in its advocacy of a guiding principle for tool development and enhancement. Recent advancements in the TAU system are described from this perspective.
[perc]
Allen D. Malony, Sameer Shende, "PERC Ideas." Presented at Performance Technology for Productive, High-End Parallel Computing
Keywords: TAU, PDT, ParaProf, PerfExplorer
[pnnl04]
Allen D. Malony, "High-Performance Computing, Computational Science, and NeuroInformatics Research." Presented at PNNL April 2004
Keywords: High Performance Computing, Neuroinformatics, TAU, ICONIC Grid
[ppam03]
Kai Li, Allen D. Malony, Robert Bell, Sameer Shende, "A Framework for Online Performance Analysis and Visualization of Large-Scale Parallel Applications." Presented at PPAM 2003
Keywords: Online Performance Analysis, TAU, Computation Steering, Performance Steering
[psc05]
Allen D. Malony, "Performance Technology for Productive, High-End Parallel Computing." Presented at PSC 2005
Keywords: Performance Problem Solving, TAU, PDT, ParaProf, PerfExplorer
[psu2000talk]
Sameer Shende, "Building Your Own Performance Evaluation Tools", talk at Portland State University, May 13, 2000.
Keywords: TAU, performance evaluation, profiling, tracing, JVMPI, MPI, instrumentation, measurement, analysis

Performance evaluation of parallel and distributed programs involves choosing from a wide variety of performance models, instrumentation and measurement techniques, and execution models. The ability of performance technology to keep pace with the growing complexity of parallel and distributed systems depends on robust performance frameworks that can at once provide system-specific performance capabilities and support high-level performance problem solving. This talk gives an overview of choices and constraints that a performance technologist faces while building tools. We share our experience in building the TAU (Tuning and Analysis Utilities) suite of portable profiling and tracing tools. As an example, we illustrate tools for a parallel Java environment where instrumentation from multiple levels is integrated to provide the coupling of different software execution contexts under a uniform performance model. The techniques discussed in this talk are aimed at helping you design simple performance evaluation tools and effectively understanding and using existing performance tools.
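In the spirit of the talk's "build your own tools" theme, the simplest possible profiler is a wrapper that accumulates inclusive time and call counts per routine. This sketch is illustrative only and unrelated to TAU's actual measurement library:

```python
# Minimal "build your own" profiler: a decorator accumulating inclusive
# wall-clock time and call counts per routine. Illustrative sketch only.

import time
from collections import defaultdict

profile = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def profiled(fn):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            # Inclusive time: the interval covers callees of fn as well.
            entry = profile[fn.__name__]
            entry["calls"] += 1
            entry["seconds"] += time.perf_counter() - start
    return wrapper

@profiled
def compute(n):
    return sum(range(n))

for _ in range(3):
    compute(10_000)
```

Even a sketch this small exposes the design choices the talk describes: what to instrument (here, whole routines), what to measure (wall-clock time and counts), and how to report it (a dictionary standing in for a profile file).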
[ptools00]
A. D. Malony, "TAU: A Framework for Parallel Performance Analysis," presentation at PTOOLS meeting, 2000.
Keywords: TAU, PDT
[ptools02]
Sameer Shende, Allen D. Malony, "Recent Advances in the TAU Performance System," Presentation at PTOOLS'02 meeting, Knoxville, TN, Sept. 2002.
Keywords: TAU, PDT, Opari
[sc00]
Allen D. Malony, Sameer Shende, Robert Ansell-Bell, "Parallel Program Analysis Framework for the DOE ACTS Toolkit." Presented at Supercomputing Conference 2000
Keywords: TAU, Hardware Performance Measurement, PDT
[sc00talk]
K. A. Lindlan, J. Cuny, A. D. Malony, S. Shende, B. Mohr, R. Rivenburgh, C. Rasmussen. "A Tool Framework for Static and Dynamic Analysis of Object-Oriented Software with Templates." Talk at SC2000: High Performance Networking and Computing Conference, Dallas, November 2000.
Keywords: Program Database Toolkit, PDT, static analysis, dynamic analysis, object-oriented, templates, IL Analyzer, DUCTAPE, TAU, SILOON

The developers of high-performance scientific applications often work in complex computing environments that place heavy demands on program analysis tools. The developers need tools that interoperate, are portable across machine architectures, and provide source-level feedback. In this paper, we describe a tool framework, the Program Database Toolkit (PDT), that supports the development of program analysis tools meeting these requirements. PDT uses compile-time information to create a complete database of high-level program information that is structured for well-defined and uniform access by tools and applications. PDT's current applications make heavy use of advanced features of C++, in particular, templates. We describe the toolkit, focusing on its most important contribution -- its handling of templates -- as well as its use in existing applications.
[sc01_tut]
A. D. Malony, S. Shende, and B. Mohr, "Performance Technology for Complex Parallel Systems," Tutorial at SC'01 conference, Nov. 2001.
Keywords: TAU, PDT, Kojak, Expert, tools

Fundamental to the development and use of parallel systems is the ability to observe, analyze, and understand their performance. However, the growing complexity of parallel systems challenges performance technologists to produce tools and methods that are at once robust (scalable, extensible, configurable) and ubiquitous (cross-platform, cross-language). This half-day tutorial will focus on performance analysis in complex parallel systems which include multi-threading, clusters of SMPs, mixed-language programming, and hybrid parallelism. Several representative complexity scenarios will be presented to highlight two fundamental performance analysis concerns: 1) the need for tight integration of performance observation (instrumentation and measurement) technology with sophisticated programming environments and system platforms, and 2) the ability to map execution performance data to high-level programming abstractions implemented on layered, hierarchical software systems. The tutorial will describe the TAU performance system in detail and demonstrate how it is used to successfully address the performance analysis concerns in each complexity scenario discussed. Tutorial attendees will be introduced to TAU's instrumentation, measurement, and analysis tools, and shown how to configure the TAU performance system for specific needs. A description of future enhancements of the TAU performance framework, including a demonstration of a prototype for automatic bottleneck analysis, will conclude the tutorial.
[sc02]
Allen D. Malony, Sameer S. Shende, Robert Bell, "The TAU Performance System." Presented at Supercomputing Conference November 2002
Keywords: TAU, PDT, ParaProf, PerfExplorer, PETSc, Callpath
[sc02_cca]
Allen D. Malony, Sameer Shende, Craig Rasmussen, Jaideep Ray, Matt Sottile, "Performance Technology for Component Software - TAU."
Keywords: TAU, PDT, CCA
[sc02_petsc]
"Case Study: PETSc ex19."
Keywords: PETSc, Callpath
[sc04]
Allen D. Malony, "ICONIC Grid: Improving Diagnosis of Brain Disorders." Presented at Supercomputing Conference 2004
Keywords: Neuroinformatics, ICONIC Grid, Brain Dynamics
[sc07]
A. Nataraj, A. Morris, A. Malony, M. Sottile, P. Beckman, "The Ghost in the Machine: Observing the Effects of Kernel Operation on Parallel Application Performance," Presented at Supercomputing Conference 2007.
Keywords: TAU, KTAU, Kernel Performance, OS Noise, OS interference
[siam06a]
S. Shende, A. Malony, A. Morris, H. Brunst, W. Nagel, "Performance and Memory Evaluation using the TAU Performance System," Presented at SIAM, 2006.
Keywords: TAU Performance System, Open Source Performance system, HPC systems, performance mapping
[siam06b]
S. Shende, A. Malony, A. Morris, K. Huck, "Tools for Performance Discovery and Optimization," Presented at SIAM, 2006.
Keywords: Performance Discovery, Performance Optimization, PerfExplorer, Multi-level performance instrumentation
[smpage02]
Sameer Shende, Allen D. Malony, "Integration and Application of the TAU Performance System in Parallel Java Environments." Presented at SMPAG Java Interest Group May 2002
Keywords: JAVA, TAU, Java HPC, Performance Technology, Multi-Threading Performance Measurement, Virtual Machine Performance Instrumentation,
[tum00]
A. D. Malony, "Performance Technology for Complex Parallel and Distributed Systems," presentation at T.U.M. Germany, 2000.
Keywords: TAU, PDT
[utk05]
Allen D. Malony, "Multi-Experiment Performance Data Management and Data Mining." Presented at UTK 2005
Keywords: TAU, PerfExplorer, PerfDMF, Performance Data Mining
[wompat02a]
Bernd Mohr, Allen Malony, Rudi Eigenmann, "On the Integration and Use of OpenMP Performance Tools in the SPEC OMP2001 Benchmarks." Presented at John von Neumann Institut für Computing
Keywords: OpenMP, TAU, ZAMpano, POMP
[wompat02b]
Bernd Mohr, Allen D. Malony, Rudi Eigenmann, "On the Integration and Use of OpenMP Performance Tools in the SPEC OMP2001 Benchmarks." Presented at John von Neumann Institut für Computing
Keywords: OpenMP, TAU, OMP2001, Performance Tools
[wompat02talk]
B. Mohr, A. D. Malony, R. Eigenmann, "On the Integration and Use of OpenMP Performance Tools in the SPEC OMP2001 Benchmarks," Presentation at the WOMPAT 2002 conference.
Keywords: TAU, OpenMP, Expert, Opari, SPEC OMP 2001, benchmarks

The Standard Performance Evaluation Corporation (SPEC) benchmark suite for OpenMP (named SPEC OMP2001) allows the performance evaluation of modern shared-memory multiprocessors executing programs made parallel using the OpenMP API. While the SPEC OMP2001 suite reports only total program execution for benchmarking purposes, detailed performance studies of the individual programs can reveal interesting runtime characteristics. Clearly, for programmers attempting to diagnose performance problems and make tuning decisions, such detailed performance information can be invaluable, especially when programming with a new parallel API such as OpenMP. Unfortunately, tools for performance measurement and analysis of parallel programs do not, in general, meet the same portability, configurability, and ease of use standards found in a robust benchmark suite such as SPEC OMP2001. As a result, more in-depth performance analysis is often isolated to those platforms where tools exist, or it is not done at all for lack of tool expertise. During the past year, we have proposed a performance tool interface (referred to as the POMP interface) for OpenMP. The goal of POMP is to define a clear and portable API that makes OpenMP execution events visible to runtime performance measurement tools. The POMP API is designed based on OpenMP directive semantics, allowing POMP instrumentation to be accomplished through source-to-source translation; we developed the Opari instrumentation tool for this purpose. In addition to the POMP interface specification, we have demonstrated its use with prototype POMP libraries for the Expert automatic event trace analyzer and the TAU performance analysis framework. This paper reports on the application of the POMP performance interface and toolset to the SPEC OMP2001 benchmark suite. The goals of the work are three-fold.
First, we want to show how support for detailed performance instrumentation and measurement can be integrated in the SPEC OMP2001 benchmarking methodology, using an approach based on POMP's capabilities. Second, we want to use the SPEC OMP2001 benchmarks as test cases for the POMP technology, both the API and the Opari instrumentation tool. This will allow us to further evaluate the robustness of the API and Opari's automatic transformation capabilities. Third, we want to demonstrate the value of integrated performance tools in conducting cross-platform performance studies. Here, our goal is to be able to automatically capture detailed performance information across a variety of platforms listed in the SPEC OMP2001 results database.
[zam02]
A. D. Malony, "Integrating Performance Analysis in Complex Scientific Software: Experiences with the Uintah Computational Framework," presentation at FZJ, ZAM, NIC Germany, 2002.
Keywords: TAU, Uintah, SCIRun, MPI, threads, XPARE

### Other Publications

[dxcomm95]
Steven T. Hackstadt and Allen D. Malony, Visualizing Parallel Program and Performance Data with IBM Visualization Data Explorer, IBM Visualization Data Explorer Communiqué Newsletter, Vol. 3, No. 1, March 1995, pp. 6-8.
Keywords: ibm data explorer, parallel performance visualization, scientific visualization, visualization prototyping

Performance visualization is the use of graphical display techniques for the visual analysis of performance data to improve the understanding of complex performance phenomena. While the graphics of current performance visualizations are predominantly confined to two dimensions, one of the primary goals of our work is the development of new methods for rapidly prototyping next-generation, multi-dimensional performance visualizations. By applying the tools of scientific visualization to performance visualization, we have found that next-generation displays for performance visualization can be prototyped, if not implemented, in existing data visualization software products like Data Explorer, using graphical techniques that physicists, oceanographers, and meteorologists have used for several years now.
[lacsi06]
A. Nataraj, M. Sottile, A. Malony, A. Morris, S. Shende, R. Minich, K. Huck. "Scalable Online Parallel Performance Measurement Over a Cluster Monitor." Presented at LACSI'06 (Los Alamos Computer Science Institute Symposium).
Keywords: TAU, supermon, Cluster monitoring, Online Performance monitoring
[pgipp96]
Steven T. Hackstadt and Allen D. Malony, Distributed Array Query and Visualization for High Performance Fortran, Peak Performance Newsletter, Portland Group, Inc., Spring 1996.
Keywords: visualization, distributed data access, hpf, parallel tool, runtime program interaction, tool framework
[sc2002]
K. Li, A. D. Malony, S. Shende, R. Bell, "Online Performance Analysis and Visualization of Large-Scale Parallel Applications", Poster, SC 2002 conference.
Keywords: Paravis, TAU, SCIRun, online, performance, visualization

Parallel performance tools offer the program developer insights into the execution behavior of an application and are a valuable component in the cycle of application development and deployment. However, most tools do not work well with large-scale parallel applications where the performance data generated comes from thousands of processes. Not only can the data be difficult to manage and the analysis complex, but existing performance display tools are mostly restricted to two dimensions and lack the customization and display interaction to support full data investigation. In addition, it is increasingly important that performance tools be able to function online, making it possible to control and adapt long-running applications based on performance feedback. Again, large-scale parallelism complicates the online access and management of performance data, and it may be desirable to integrate performance analysis and visualization in existing computational steering infrastructures. The coupling of advanced three-dimensional visualization with large-scale, online performance data analysis could enhance application performance evaluation. The challenge is to develop a framework where the tedious work, such as access to the performance data and graphics rendering, is supported by the underlying system, leaving tool developers to focus on the high level design of the analysis and visualization capabilities. We designed and prototyped a system architecture for online performance access, analysis, and visualization in a large-scale parallel environment. The architecture consists of four components. The performance data integrator component is responsible for interfacing with a performance monitoring system to merge parallel performance samples into a synchronous data stream for analysis. The performance data reader component reads the external performance data into internal data structures of the analysis and visualization system. 
The performance analyzer component provides the analysis developer a programmable framework for constructing analysis modules that can be linked together for different functionality. The performance visualizer component can also be programmed to create different display modules. Our prototype is based on the TAU performance system, the Uintah computational framework, and the SCIRun computational steering and visualization system. Parallel profile data from a Uintah simulation are sampled and written to profile files during execution. A profile reader, implemented as a SCIRun module, saves profile samples in SCIRun memory. SCIRun provides a programmable system for building and linking the analysis and visualization components. We have developed two analysis modules and three visualization modules to demonstrate how parallel profile data from large-scale Uintah applications are processed online.
[sc99a]
Advanced Computing Laboratory, Los Alamos National Laboratory: TAU: Tuning and Analysis Utilities, Supercomputing '99 flyer, Los Alamos National Laboratory Publication LALP-99-205, November 1999.
Keywords: TAU, profiling, tracing, toolkit, instrumentation, threads, Java, windows, HPF, MPI

TAU flyer for SC'99.
[sc99b]
Advanced Computing Laboratory, Los Alamos National Laboratory: PDT: Program Database Toolkit, Supercomputing '99 flyer, Los Alamos National Laboratory Publication LALP-99-204, November 1999.
Keywords: PDT, IL, Analyzer, Program, Database, Toolkit, SILOON, TAU, DUCTAPE

PDT flyer for SC'99.
[sigplan94]
Janice Cuny, George Forman, Alfred Hough, Joydip Kundu, Calvin Lin, Lawrence Snyder, and David Stemple, The Ariadne Debugger: Scalable Application of Event-Based Abstraction, SIGPLAN Notices, Vol. 28, No. 12, 1994, pp. 85-95.
Keywords: event-based debugging, Ariadne, parallel, scalable

Massively parallel computations are difficult to debug. Users are often overwhelmed by large amounts of trace data and confused by the effects of asynchrony. Event-based behavioral abstraction provides a mechanism for managing the volume of data by allowing users to specify models of intended program behavior that are automatically compared to actual program behavior. Transformations of logical time ameliorate the difficulties of coping with asynchrony by allowing users to see behavior from a variety of temporal perspectives. Previously, we combined these features in a debugger that automatically constructed animations of user-defined abstract events in logical time. However, our debugger, like many others, did not always provide sufficient feedback nor did it effectively scale up for massive parallelism. Our modeling language required complex recognition algorithms which precluded informative feedback on abstractions that did not correspond to observed behavior. Feedback on abstractions that did match behavior was limited because it relied on graphical animations that did not scale well to even moderate numbers of processes (such as 64). We address these problems in a new debugger, called Ariadne.
[stevedrp]
Stephen McLaughry, "Debugging Optimized Parallel Programs," Directed Research Project (DRP) report, University of Oregon, May 1997.
Keywords: Debugging, ZEE, ZPL, Optimizations, Mappings

Optimization of data-parallel languages compounds the difficulty of the problem of parallel debugging. While the constrained structure of such languages is intended to simplify the job of the parallel programmer, the loss of flexibility concomitant with this structure often results in programs that, if left unoptimized, would have unacceptably poor performance. The programmer needs a debug tool capable of interacting with an optimized, distributed system and reporting the behavior of such a system in terms of the source code from which it is derived. We describe some of the obstacles to source-level debugging of optimized data-parallel programs. We present general solutions to these problems, and discuss implementation details. We then describe several example debugging scenarios to demonstrate the capabilities of our prototype system, ZEE (ZPL DEBUGGER).

278 items listed; 189 items annotated
Created: Thu May 23 22:02:00 US/Pacific 2013