The program consists of 80-minute lectures presented by internationally recognized leaders in programming languages and formal reasoning research. Panel sessions will complement the lecture sessions by providing extended time for students to ask questions and for multiple speakers to interact. The goal of the panels will be to identify research needs (both new logic and theorem proving topics and open problems on the topics discussed at the school) as well as techniques for better transferring research contributions into practice. The panelists will begin with a 5-10 minute overview of a position. During and afterwards, students and other researchers will be encouraged to question the panelists' assumptions and claims, debate the validity of the arguments given, and propose their own ideas.
Garbage Collection and the Metronome GC
These lectures will discuss the techniques and principles of automatic memory reclamation (garbage collection) in modern programming languages, with particular emphasis on incremental and real-time garbage collection. Real-time garbage collection is the process of collecting data that is no longer in use by an application while the application continues to run, keeping the interruptions by the collector (also called pause times) short and evenly spaced. The goal of the Metronome project is to produce a garbage collector with worst-case pause times as low as operating system context-switch times, while providing highly uniform CPU utilization, low memory overhead, and low garbage collection cost. To date, the Metronome garbage collector achieves these goals with a maximum pause time of a few hundred microseconds.
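The core idea of incremental collection can be sketched in a few lines. The toy tri-color marker below is an illustration only, not the Metronome algorithm; the object model and the budget value are invented for this sketch. It bounds the tracing work done in each pause, interleaving short marking increments with (simulated) mutator execution:

```python
# Toy incremental tri-color marking: bound the tracing work per pause.
WHITE, GREY, BLACK = "white", "grey", "black"

class Obj:
    def __init__(self, name, refs=None):
        self.name = name
        self.refs = refs or []     # outgoing pointers
        self.color = WHITE

def mark_increment(grey_stack, budget):
    """Trace at most `budget` objects, then yield back to the mutator."""
    work = 0
    while grey_stack and work < budget:
        obj = grey_stack.pop()
        obj.color = BLACK          # obj and its children have been scanned
        for child in obj.refs:
            if child.color == WHITE:
                child.color = GREY
                grey_stack.append(child)
        work += 1

def collect(roots, heap, budget=2):
    grey = []
    for r in roots:
        r.color = GREY
        grey.append(r)
    pauses = 0
    while grey:
        mark_increment(grey, budget)   # one short "pause"
        pauses += 1                    # the mutator would run in between
    # objects still white are unreachable
    return [o for o in heap if o.color == WHITE], pauses

# Example heap: a -> b -> c, with d unreachable.
c = Obj("c"); b = Obj("b", [c]); a = Obj("a", [b]); d = Obj("d")
garbage, pauses = collect([a], [a, b, c, d])
```

With a budget of 2, marking the three reachable objects takes two short increments, and only d remains white (garbage). A real incremental collector must additionally intercept mutator writes with a barrier so that objects do not escape the tracing; that machinery is omitted here.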
Algorithmic Program Synthesis
Sketching is a software synthesis approach where the programmer develops a partial implementation — a sketch — and a separate specification of the desired functionality. The synthesizer then completes the sketch to behave like the specification. The correctness of the synthesized implementation is guaranteed by the compiler, which allows, among other benefits, rapid development of highly tuned implementations without the fear of introducing bugs. Programming by sketching is supported by the SKETCH language, an imperative language with a "hole" construct. Holes can be used in place of hard-to-get-right expressions, such as index expressions and loop bounds. The SKETCH synthesizer infers the content of holes using a SAT-based combinatorial search over the space of possible sketch completions. Thanks to the combinatorial search, the synthesizer requires no domain-specific knowledge and can therefore be applied to a wide variety of programming problems, as long as they compute finite functions, i.e., functions that take inputs of bounded size and perform a finite computation. SKETCH has been used to synthesize efficient implementations for non-trivial functions — including ciphers (e.g., DES and AES), error-correction codes, and long integer multiplication — in only a fraction of the time required for coding a full and correct implementation.
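The essence of hole-filling can be illustrated with a naive enumerative search; the real SKETCH synthesizer encodes this search as a SAT problem. The specification, the sketch shape, and the candidate range below are all invented for this sketch:

```python
# Complete a "hole" in a partial program by searching candidate values
# and checking against a reference specification on bounded inputs.

def spec(x):
    """Reference implementation: clear the lowest set bit of x, naively."""
    for i in range(32):
        if x & (1 << i):
            return x & ~(1 << i)
    return x

def sketch(x, hole):
    """Partial program: the programmer knows the shape `x & (x + ??)`
    but leaves the constant as a hole."""
    return x & (x + hole)

def synthesize(candidates, test_inputs):
    """Return a hole value making the sketch match the spec on all
    bounded test inputs, or None if the space is exhausted."""
    for h in candidates:
        if all(sketch(x, h) == spec(x) for x in test_inputs):
            return h
    return None

hole = synthesize(candidates=range(-4, 5), test_inputs=range(1, 64))
```

Here the search discovers the bit-trick constant -1, i.e. the identity `x & (x - 1)` clears the lowest set bit; a SAT encoding reaches the same answer without enumerating candidates one by one.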
Abstract Interpretation
Because the verification of program properties is undecidable, all approaches have limitations: either human help is required (to provide test cases, a finite model, or assistance in theorem proving), or tools fail to prove automatically that some program properties do hold (this is incompleteness). Except for tiny programs for which exhaustive exploration is possible, the main difficulty is to scale up. Faced with this situation, compromises have to be made to build practical automatic verification tools. One is to allow non-termination or memory-usage explosion, in which case it is impossible to know whether a correctness proof could have been found with just a few more resources. Another is to abandon soundness, which essentially means that possibly complex behaviors of the program to be verified are simply ignored, e.g., by exploring only the beginning of executions or by ignoring some or most of them. This helps in discovering bugs but not in proving their absence. Most tools are both incomplete and unsound, and are hence debugging aids rather than correctness certification tools.
Abstract interpretation aims at discovering and proving model or program properties by sound approximations of their behaviors that are computable with finite resources, and hence necessarily incomplete. This means that abstract interpretation-based verifiers may produce false alarms, in which mathematically true program properties are signaled as not proved. The design objective is therefore to find a balance between the precision of the analysis (to minimize false alarms) and its cost (to make it usable in practice).
The course sketches the principles of abstract interpretation, whereby program analyzers are designed by sound approximation of program properties. It explores a few practical applications of abstraction where the verification of specific classes of program properties has been shown to be possible in practice, with no or few alarms, for very large programs.
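As a concrete instance of the precision/cost balance, here is a minimal interval analysis of the loop `i = 0; while i < 100: i = i + 1`, with widening to force termination of the fixpoint iteration and a narrowing pass to recover precision. This is a textbook-style illustration, not one of the industrial analyzers discussed in the course:

```python
# Interval abstract interpretation of: i = 0; while i < 100: i = i + 1
NEG_INF, POS_INF = float("-inf"), float("inf")

def join(a, b):                      # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):                 # jump to infinity on unstable bounds
    lo = old[0] if new[0] >= old[0] else NEG_INF
    hi = old[1] if new[1] <= old[1] else POS_INF
    return (lo, hi)

def step(inv):
    """Abstract effect of one loop traversal on the loop-head value of i."""
    guarded = (inv[0], min(inv[1], 99))      # filter by the guard i < 100
    body = (guarded[0] + 1, guarded[1] + 1)  # abstract i = i + 1
    return join((0, 0), body)                # merge with loop entry i = 0

def analyze():
    inv = (0, 0)
    while True:                              # widening phase: terminates
        w = widen(inv, step(inv))
        if w == inv:
            break
        inv = w
    while step(inv) != inv:                  # narrowing: refine the result
        inv = step(inv)
    return inv

invariant = analyze()
```

Widening first over-shoots to (0, +inf); narrowing then recovers the precise loop-head invariant i in [0, 100]. The analysis is sound (every concrete value of i lies in the interval) yet ran in a fixed, small number of iterations.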
Continuations to Go
These lectures provide the basic knowledge about continuations with which a PhD student should be good to go. This basic knowledge consists of: what continuations are (representations of "the rest of the computation" in part or in whole); where they arose (to model control effects such as jumps); where they have found good use so far (to account, e.g., for streams, coroutining, backtracking, and mobility) and where they are used today (e.g., to program the web); their formalization (small-step and big-step operational semantics, abstract machines, and continuation-passing style); their key formal properties (indifference, simulation, translation, and double negation); how they are implemented (control stack, segmented stack, and plain heap); and which alternatives have been proposed to continuations and continuation-passing style (e.g., computational monads, monadic normal forms, and contexts).
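The first two points can be made concrete in a few lines of continuation-passing style (an illustrative sketch; the function names are invented). Each function takes an extra argument k, "the rest of the computation", and calls it instead of returning; passing a different continuation gives control effects such as an early exit for free:

```python
def mul_list_cps(xs, k, abort):
    """Multiply the numbers in xs in CPS. On a 0, invoke `abort`
    directly, discarding the pending multiplications (a jump
    expressed purely by choosing which continuation to call)."""
    if not xs:
        return k(1)                  # empty product: pass 1 to the rest
    if xs[0] == 0:
        return abort(0)              # skip the current continuation k
    return mul_list_cps(xs[1:], lambda r: k(xs[0] * r), abort)

def mul_list(xs):
    # the initial continuation just returns its value
    return mul_list_cps(xs, k=lambda r: r, abort=lambda r: r)
```

`mul_list([2, 3, 4])` builds up and then runs the continuation chain, while `mul_list([2, 0, 4])` jumps straight out with 0, performing no multiplications at all.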
Art of Invariant Generation applied to Symbolic Bound Computation
These lectures will discuss three dimensions in the art of invariant generation: (a) Program Transformations, (b) Colorful Logic, and (c) Fixpoint Brush. Program transformations alleviate the need for employing sophisticated invariant generation tools: simpler invariants on the transformed program correspond to sophisticated invariants on the original program. Examples of such transformations include control-flow refinement, loop flattening/peeling, quantitative attributes instrumentation, and non-standard choice of cut-points. The colorful logic represents the language of program invariants and is associated with a decision procedure to reason about formulas in that logic. The lectures will cover decision procedures for various logics such as arithmetic, uninterpreted functions, the logic of arrays, the logic of lists/reachability, and combinations of theories. The fixpoint brush represents the technique used to automatically generate loop invariants for a particular shade of logic such as conjunctive, k-disjunctive, or predicate-based abstractions. Examples of such fixpoint brushes include iteration (forward or backward), constraint-based methods, proof rules, and learning.
A variety of the above-mentioned principles for invariant generation will then be applied to present a solution to the important, but relatively under-studied, problem of computing a precise symbolic bound on the number of visits to a given control location.
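A hand-worked illustration of the counter-instrumentation idea mentioned above (the loop, its flags, and the bound are invented for this sketch, not output of the actual technique): instrument the loop with a fresh counter c, and a linear invariant on c directly yields a symbolic bound on the number of visits to the loop head.

```python
def loop(x, y, n, flags):
    """Each iteration either makes progress toward x == n or toward
    y == 0, so the instrumented counter satisfies the linear invariant
    c <= (n - x0) + y0, which is the symbolic visit bound."""
    c = 0                         # instrumented visit counter
    i = 0
    while x < n and y > 0:
        if flags[i % len(flags)]:
            x += 1                # progress toward x == n
        else:
            y -= 1                # progress toward y == 0
        c += 1                    # invariant: c <= (n - x0) + y0
        i += 1
    return c

x0, y0, n = 0, 3, 5
visits = loop(x0, y0, n, flags=[True, False, True])
assert visits <= (n - x0) + y0    # the symbolic bound holds
```

Neither `n - x` nor `y` alone decreases on every iteration, so neither is a bound by itself; the instrumented counter combines the two progress measures into the single bound (n - x0) + y0.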
Managed Runtime Environments: Implementations and Opportunities
These lectures will present the implementation of key services provided by managed systems for modern type-safe languages. The lectures will focus on key ways to extract performance from programs using feedback-directed optimization and adaptation. Finally, we will discuss the potential for adaptive optimization for the next generation of high-level languages, software, and systems.
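The feedback-directed idea can be caricatured with a counter-based tiering sketch. The class, threshold, and tier representation below are invented for illustration; real virtual machines use far richer profiling and recompilation machinery:

```python
THRESHOLD = 3   # invented hot-method threshold

class AdaptiveMethod:
    """A method starts in the 'interpreted' tier; once its invocation
    counter crosses THRESHOLD, the runtime swaps in the 'optimized'
    version. Here both tiers are plain Python callables."""
    def __init__(self, interpreted, optimized):
        self.count = 0
        self._interpreted = interpreted
        self._optimized = optimized
        self.impl = interpreted
    def __call__(self, *args):
        self.count += 1            # profiling: count invocations
        if self.impl is self._interpreted and self.count >= THRESHOLD:
            self.impl = self._optimized   # "recompile" the hot method
        return self.impl(*args)

square = AdaptiveMethod(lambda x: x * x, lambda x: x * x)
```

The first two calls run the interpreted tier; the third call crosses the threshold and all subsequent calls dispatch to the optimized version, which is the basic shape of counter-driven adaptive optimization.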
Pointer Analysis
Many static analyses of languages with pointers require accurate approximations of the targets of pointers as a prerequisite. Pointer information is especially important for object-oriented programs, which tend to manipulate many linked, heap-allocated data structures. The lectures will survey the broad spectrum of pointer analyses that have been developed to provide these approximations and outline the tradeoffs between analysis efficiency and precision of the results. Although most of the focus will be on analysis of object-oriented languages such as Java, analysis of lower-level languages such as C will also be presented.
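One point in the efficiency/precision spectrum is inclusion-based (Andersen-style) points-to analysis, sketched below for a toy pointer language with four statement forms (the encoding and example are invented for this sketch; production analyses use far more efficient constraint solvers):

```python
from collections import defaultdict

def andersen(stmts):
    """Fixpoint solution of inclusion constraints; the result
    over-approximates what each pointer may target."""
    pts = defaultdict(set)
    def add(var, targets):
        before = len(pts[var])
        pts[var] |= targets
        return len(pts[var]) != before
    changed = True
    while changed:
        changed = False
        for kind, lhs, rhs in stmts:
            if kind == "addr":                       # lhs = &rhs
                changed |= add(lhs, {rhs})
            elif kind == "copy":                     # lhs = rhs
                changed |= add(lhs, pts[rhs])
            elif kind == "load":                     # lhs = *rhs
                for t in set(pts[rhs]):
                    changed |= add(lhs, pts[t])
            elif kind == "store":                    # *lhs = rhs
                for t in set(pts[lhs]):
                    changed |= add(t, pts[rhs])
    return dict(pts)

# p = &a; q = &b; p = q; b = &c; r = *p
result = andersen([
    ("addr", "p", "a"),
    ("addr", "q", "b"),
    ("copy", "p", "q"),
    ("addr", "b", "c"),
    ("load", "r", "p"),
])
```

The analysis is flow-insensitive: statement order is ignored, so p may point to either a or b, and dereferencing p therefore makes r point to everything b points to. Trading this imprecision for a single global fixpoint is what makes the inclusion-based approach scale.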
Control-flow Analysis of Higher-Order Languages
Meaningful static analysis of higher-order programs depends on solving the higher-order control-flow problem: Given a call site, what procedures may be invoked there? This course will explore approaches to solving this problem, such as constraint-solving and abstract interpretation, and examine the trade-offs involved in specific solutions. With the fundamentals in place, the course will then examine generalizations of control-flow analysis, such as environment analysis, and applications of control-flow analysis to optimization and security.
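The problem and its constraint-based solution can be made concrete with a tiny 0CFA for the lambda calculus (a naive fixpoint formulation for illustration; the term encoding with integer labels is invented for this sketch). For each labeled subterm and each variable, the analysis computes the set of lambda abstractions that may flow there, which answers "what may be invoked at this call site?":

```python
# term forms: ("var", label, name) | ("lam", label, param, body)
#           | ("app", label, fun, arg)

def cfa0(term):
    cache = {}   # label -> set of lambdas that may flow to that subterm
    env = {}     # variable name -> set of lambdas it may be bound to

    def flow(key, store, vals):
        store.setdefault(key, set())
        before = len(store[key])
        store[key] |= vals
        return len(store[key]) != before

    def visit(t):
        changed = False
        if t[0] == "var":
            changed |= flow(t[1], cache, env.get(t[2], set()))
        elif t[0] == "lam":
            changed |= flow(t[1], cache, {t})   # a lambda flows to itself
            changed |= visit(t[3])
        elif t[0] == "app":
            _, lbl, fun, arg = t
            changed |= visit(fun)
            changed |= visit(arg)
            for lam in set(cache.get(fun[1], ())):  # possible callees
                _, _, param, body = lam
                changed |= flow(param, env, cache.get(arg[1], set()))
                changed |= flow(lbl, cache, cache.get(body[1], set()))
        return changed

    while visit(term):          # iterate the constraints to a fixpoint
        pass
    return cache, env

# ((lambda x. x) (lambda y. y)): the identity applied to a lambda
ID_Y = ("lam", 3, "y", ("var", 4, "y"))
PROG = ("app", 0, ("lam", 1, "x", ("var", 2, "x")), ID_Y)
cache, env = cfa0(PROG)
```

The analysis concludes that the only callee at call site 0 is the identity, that x may only be bound to `lambda y. y`, and hence that the whole program may only evaluate to `lambda y. y`. The "0" in 0CFA means all bindings of a variable are merged regardless of calling context; k-CFA refines this by distinguishing contexts.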
Multi-Threaded Programming and Transactional Memory
These lectures will present the fundamental principles of modern multi-threaded programming mechanisms, and will examine how these translate to transactional memory mechanisms. Multi-threaded programming is neither the only nor the most approachable kind of parallel programming, but it is a well-understood and very general paradigm for shared-memory concurrency. We will examine traditional mechanisms for monitor-style programming (mutexes and condition variables) and will derive principles for their correct use. We will then examine transactional memory: an approach to multi-threaded programming that has been under intense study for the past few years and that, with appropriate implementation support at all levels, promises to simplify the area.
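To make the transactional idea concrete, here is a toy software transactional memory with buffered writes and a validate-and-retry commit. This is a sketch only, with invented names: it uses a single global version and lock, and unlike real STMs it does not validate reads mid-transaction:

```python
import threading

_commit_lock = threading.Lock()
_version = 0                          # bumped on every successful commit

class TVar:
    """A transactional variable."""
    def __init__(self, value):
        self.value = value

def atomically(tx):
    """Run tx(read, write) as a transaction, retrying on conflict."""
    global _version
    while True:
        start = _version
        writes = {}                   # buffered writes: TVar -> value
        def read(var):
            return writes.get(var, var.value)
        def write(var, val):
            writes[var] = val
        result = tx(read, write)
        with _commit_lock:
            if _version == start:     # no concurrent commit: publish
                for var, val in writes.items():
                    var.value = val
                _version += 1
                return result
        # conflict: discard the buffered writes and re-execute tx

a, b = TVar(10), TVar(0)

def transfer(amount):
    def tx(read, write):
        write(a, read(a) - amount)
        write(b, read(b) + amount)
    atomically(tx)

transfer(4)
```

Other threads never observe the debit without the credit: writes become visible only inside the locked commit, and a transaction that loses the race simply re-runs. Contrast this with the monitor-style version, where the programmer must decide which lock protects a and b and in what order to acquire them.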
This page is the property of the University of Oregon