Colloquium Details

Scaling Up to Large (Really Large) Systems

Author: Barton P. Miller, University of Wisconsin
Date: February 7, 2017
Time: 15:30
Location: 220 Deschutes
Host: Professor Joseph Sventek

Abstract

I will discuss the problem of developing tools and middleware for large-scale parallel environments. We are especially interested in systems, both leadership-class parallel computers and clusters, that have hundreds of thousands or even millions of processors. The infrastructure that we have developed to address this problem is called MRNet, the Multicast/Reduction Network. MRNet's approach to scale is to structure control and data flow in a tree-based overlay network (TBON) that allows for efficient request distribution and flexible data reductions.
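To make the TBON pattern concrete, here is a minimal sketch, in Python rather than MRNet's actual C++ API, of how a tree-based overlay distributes work across internal nodes and applies a reduction (here, max) on the way back to the front-end. All class and variable names are illustrative assumptions, not MRNet identifiers.

```python
# Illustrative sketch of a tree-based overlay network (TBON) reduction.
# This is NOT MRNet's API; MRNet is a C++ library. Names here are
# invented purely for exposition.

class TreeNode:
    def __init__(self, children=None, local_value=None):
        self.children = children or []   # internal node: child TBON nodes
        self.local_value = local_value   # leaf: value from a back-end daemon

    def reduce(self, op):
        """Bottom-up reduction: leaves return their local value; each
        internal node combines its children's partial results, so work
        and message traffic are spread across the tree instead of
        concentrating at a single front-end."""
        if not self.children:            # leaf (back-end) node
            return self.local_value
        partials = [child.reduce(op) for child in self.children]
        return op(partials)              # e.g. max, sum, or a custom filter

# A tiny two-level tree standing in for front-end -> internal -> back-ends.
leaves = [TreeNode(local_value=v) for v in (3, 9, 4, 7)]
internal = [TreeNode(children=leaves[:2]), TreeNode(children=leaves[2:])]
root = TreeNode(children=internal)
print(root.reduce(max))  # -> 9: the front-end sees one reduced result
```

The same shape supports multicast in the opposite direction: a request flows from the root down the tree, so no single node fans out to all back-ends directly.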

I will then present a brief overview of the MRNet design, architecture, and computational model, and discuss a few of MRNet's applications. These include scalable automated performance analysis, STAT (a scalable stack trace analyzer currently running on millions of cores), TotalView (a popular and mature parallel debugger), and an extreme-scale clustering algorithm that we developed, called "Mr. Scan".

Biography

Barton Miller is Professor of Computer Sciences at the University of Wisconsin, where he has graduated 21 Ph.D. students. He is Chief Scientist for the DHS Software Assurance Marketplace research facility and Software Assurance Lead on the NSF Cybersecurity Center of Excellence. In addition, he co-directs the MIST software vulnerability assessment project in collaboration with his colleagues at the Autonomous University of Barcelona. He also leads the Paradyn Parallel Performance Tool project, which is investigating performance and instrumentation technologies for parallel and distributed applications and systems. His research interests include systems security, binary and malicious code analysis and instrumentation, extreme-scale systems, parallel and distributed program measurement and debugging, and mobile computing. Miller's research is supported by the U.S. Department of Homeland Security, the U.S. Department of Energy, the National Science Foundation, NATO, and various corporations.

In 1988, Miller founded the field of fuzz random software testing, which is the foundation of many security and software engineering disciplines; a minimal example of the idea appears below. In 1992, Miller (working with his then-student, Prof. Jeffrey Hollingsworth) founded the field of dynamic binary code instrumentation and coined the term "dynamic instrumentation". Dynamic instrumentation forms the basis for his current efforts in malware analysis and program instrumentation.
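In the spirit of that 1988 study, fuzz testing feeds random input to a program and watches for crashes. The sketch below is a minimal illustration only, not Miller's original tools; the target `/bin/cat` and all parameters are stand-in assumptions, and it assumes a Unix-like system.

```python
# Minimal fuzz-testing sketch: feed random bytes to a target program
# and report any input that makes it crash (terminated by a signal).
# Target and trial counts are illustrative assumptions.
import random
import subprocess

TARGET = ["/bin/cat"]  # stand-in target; real fuzzing targets parsers, etc.

for trial in range(100):
    data = bytes(random.randrange(256) for _ in range(1024))
    proc = subprocess.run(TARGET, input=data,
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
    if proc.returncode < 0:  # negative return code => killed by a signal
        print(f"trial {trial}: crashed with signal {-proc.returncode}")
```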

Miller is on the Advisory Board for the DHS Application Security Threat Attack Modeling (ASTAM) project. He was chair of the IDA Center for Computing Sciences Program Review Committee, and has served as a member of the Los Alamos National Laboratory Computing, Communications and Networking Division Review Committee, the U.S. Secret Service Electronic Crimes Task Force (Chicago Area), the Advisory Committee for Tuskegee University's High Performance Computing Program, and the Advisory Board for the International Summer Institute on Parallel Computer Architectures, Languages, and Algorithms in Prague. Miller has been a distinguished lecturer and keynote speaker on four continents and has served in technical leadership roles for a variety of conferences and workshops.

Miller, with his colleague, Prof. Heymann, has taught tutorials around the world on software vulnerability assessment, secure coding, and software assurance tools.

He is a Fellow of the ACM and has received an R&D 100 Award and a variety of teaching awards. His Ph.D. is from the University of California, Berkeley.