University of Oregon

Oregon Programming Languages Summer School — June 15-25, 2010

Logic, Languages, Compilation, and Verification

Lecture Schedule

The program consists of 80-minute lectures presented by internationally recognized leaders in programming languages and formal reasoning research.

Some of the lectures will include interactive sessions using Coq, a proof assistant. Students are encouraged to bring a laptop with Coq installed. Coq is available from INRIA.

Lecture Abstracts

Type Theory Foundations — Robert Harper

The method of logical relations is a fundamental tool in type theory that is used to prove termination and normalization and to analyze equations between terms, including parametricity properties for polymorphism. The main idea is to interpret types as relations (of a suitable class) on terms by associating to each type constructor a "relational action" that determines the relation associated to a compound type as a function of its constituent types. The interpretation is chosen so that well-typed terms stand in the relation associated to their type, and so that related terms satisfy a property of interest, from which it follows that well-typed terms have that property. The method has many applications, but all share the characteristic that a global property of terms is reduced to local properties of types. I will develop the theory of logical relations from first principles, concentrating on two important cases, Goedel's System T and Girard's System F.
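
As a small, hedged illustration of the "relational action" idea (a minimal Coq sketch, not drawn from the lectures themselves), simple types can be interpreted into Coq types, with a binary logical relation defined by recursion on the type; the arrow case says that related functions take related arguments to related results.

    (* Simple types and their interpretation as Coq types. *)
    Inductive ty : Type :=
    | TNat : ty
    | TArr : ty -> ty -> ty.

    Fixpoint sem (t : ty) : Type :=
      match t with
      | TNat => nat
      | TArr a b => sem a -> sem b
      end.

    (* The logical relation: each type constructor contributes the
       "relational action" on the relations of its constituent types. *)
    Fixpoint logrel (t : ty) : sem t -> sem t -> Prop :=
      match t as t0 return sem t0 -> sem t0 -> Prop with
      | TNat => fun n m => n = m
      | TArr a b => fun f g =>
          forall x y : sem a, logrel a x y -> logrel b (f x) (g y)
      end.

    (* The identity function is related to itself at the type TNat -> TNat. *)
    Example id_related :
      logrel (TArr TNat TNat) (fun n : nat => n) (fun n : nat => n).
    Proof. simpl. intros x y Hxy. exact Hxy. Qed.

The lectures carry this out for terms of System T and System F, where the payoff is the fundamental theorem that every well-typed term stands in the relation associated to its type.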

Proof Theory Foundations — Frank Pfenning

Lecture 1: Intuitionistic logic.
Natural deduction, verifications and uses, harmony of introductions and eliminations, difference from classical logic.

Lecture 2: Proofs as programs.
Proof term assignment, Curry-Howard isomorphism, subject reduction, notions of normal form, type checking.

Lecture 3: Proof search and sequent calculus.
Sequent calculus as search calculus, left and right rules, identity, cut elimination, connection to natural deduction.

Lecture 4: Focusing.
Weak focusing, cut elimination revisited, inversion, full focusing, proof terms and pattern matching, logic programming.

Intuitionistic proofs correspond to functional programs and vice versa. Since the celebrated discovery of this Curry-Howard isomorphism, similar relationships have guided our understanding of advanced programming language constructs, for example, for distributed computation or staged computation. Every computer scientist should therefore understand the basic structure of proofs and their computational interpretation.
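
For instance (a minimal Coq sketch, not part of the course materials), one and the same swapping function can be read either as a proof of a proposition or as a program on pairs; the proposition is literally the program's type.

    (* Read as logic: a proof that A /\ B implies B /\ A. *)
    Definition and_swap (A B : Prop) : A /\ B -> B /\ A :=
      fun p => match p with conj a b => conj b a end.

    (* Read as programming: the same term swaps the components of a pair. *)
    Definition pair_swap (A B : Type) : A * B -> B * A :=
      fun p => match p with (a, b) => (b, a) end.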

In this course we explore the theory of proofs, emphasizing the fundamental principles that underlie systems of logical inference. A student taking this course will learn the basic structure of proofs and their computational interpretation.

The course is divided into four lectures.

  1. Natural Deduction: how to define logics
  2. Verifications: the intrinsic meaning of propositions
  3. Sequent Calculus: searching for proofs
  4. Focusing: from myopic to prescient inference
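
To make the first item concrete (a hedged Coq sketch, not taken from the course notes), natural-deduction derivability for a small propositional fragment can be written as an inductive definition, with one constructor per introduction or elimination rule:

    Require Import List.

    (* Formulas of a small propositional fragment. *)
    Inductive form : Type :=
    | Atom : nat -> form
    | Imp  : form -> form -> form
    | And  : form -> form -> form.

    (* Derivability from a list of hypotheses G. *)
    Inductive nd : list form -> form -> Prop :=
    | nd_hyp    : forall G A, In A G -> nd G A
    | nd_imp_i  : forall G A B, nd (A :: G) B -> nd G (Imp A B)
    | nd_imp_e  : forall G A B, nd G (Imp A B) -> nd G A -> nd G B
    | nd_and_i  : forall G A B, nd G A -> nd G B -> nd G (And A B)
    | nd_and_e1 : forall G A B, nd G (And A B) -> nd G A
    | nd_and_e2 : forall G A B, nd G (And A B) -> nd G B.

    (* A small derivation: (A and B) implies (B and A). *)
    Example and_comm_nd : forall A B, nd nil (Imp (And A B) (And B A)).
    Proof.
      intros A B. apply nd_imp_i. apply nd_and_i.
      - apply nd_and_e2 with (A := A). apply nd_hyp. simpl; left; reflexivity.
      - apply nd_and_e1 with (B := B). apply nd_hyp. simpl; left; reflexivity.
    Qed.

This is the unrestricted system; verifications and uses (Lecture 1) and focusing (Lecture 4) impose discipline on how these rules may be applied.
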
Below are some basic references. Most closely related to the material in lectures 1-3 (although not in the same order) are Lecture 1: Judgments and Propositions, Lecture 2: Proofs as Programs, and Lecture 8: Sequent Calculus from a course on Modal Logic at Carnegie Mellon in Spring 2010. Attached below are lecture notes for Lecture 4: Focusing.

      1. Gerhard Gentzen. Untersuchungen über das logische Schließen. Mathematische Zeitschrift, 39:176-210, 405-431, 1935. English translation in M. E. Szabo, editor, The Collected Papers of Gerhard Gentzen, pages 68-131, North-Holland, 1969. Basic historical reference; introduces both the natural deduction and the sequent calculus.
      2. Dag Prawitz. Natural Deduction. Almquist & Wiksell, Stockholm, 1965. Investigates the metatheory of natural deduction.
      3. W. A. Howard. The formulae-as-types notion of construction. In J. P. Seldin and J. R. Hindley, editors, To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism, pages 479-490. Academic Press, 1980. Hitherto unpublished note of 1969, rearranged, corrected, and annotated by Howard. The note introducing the Curry-Howard isomorphism for natural deduction. Curry had previously described it for combinatory logic.
      4. Per Martin-Löf. On the meanings of the logical constants and the justifications of the logical laws. Technical report, Scuola di Specializzazione in Logica Matematica, Dipartimento di Matematica, Università di Siena, April 1983. Reprinted in the Nordic Journal of Philosophical Logic 1(1), pp. 11-60, 1996. [Available electronically] Lays out the justification of the definition of the logical connectives by distinguishing judgments from propositions.
      5. Michael Dummett. The Logical Basis of Metaphysics. Harvard University Press, Cambridge, Massachusetts, 1991. The William James Lectures, 1976. Analysis of verificationists and pragmatist points of view and their harmony.
      6. Frank Pfenning and André Platzer. Modal logic. Notes for a course at Carnegie Mellon University, Spring 2010. Lectures 1, 2, 8. [Available electronically] Lecture notes closely related to lectures 1-3 presented at the summer school.
      7. Frank Pfenning and Rowan Davies. A judgmental reconstruction of modal logic. Mathematical Structures in Computer Science, 11:511-540, 2001. Notes to an invited talk at the Workshop on Intuitionistic Modal Logics and Applications (IMLA'99), Trento, Italy, July 1999. [PDF] Introduces explicit notions of local soundness and completeness and applies them in the richer setting of modal logic where additional judgments arise.
      8. Frank Pfenning. Structural cut elimination I. Intuitionistic and classical logic. Information and Computation, 157(1/2):84-141, March 2000. [PDF] Introduces the cut elimination proof of lecture 3 and its formalization in a logical framework.
      9. Jean-Marc Andreoli. Logic programming with focusing proofs in linear logic. Journal of Logic and Computation, 2(3):197-347, 1992. The seminal paper on focusing, treating classical linear logic.
      10. Chuck Liang and Dale Miller. Focusing and polarization in linear, intuitionistic, and classical logics. Theoretical Computer Science, 410(46):4747-4768, November 2009. [PDF] A paper that also treats polarization and intuitionistic logic. Probably the closest reference for Lecture 4.
      11. Sean McLaughlin and Frank Pfenning. Efficient intuitionistic theorem proving with the polarized inverse method. In R.A.Schmidt, editor, Proceedings of the 22nd International Conference on Automated Deduction (CADE-22), pages 230-244, Montreal, Canada, August 2009. Springer LNCS 5663. [PDF] An application of focusing in a realistic theorem prover.
      12. Noam Zeilberger. The Logical Basis of Evaluation Order and Pattern-Matching. PhD thesis, Department of Computer Science, Carnegie Mellon University, May 2009. Available as Technical Report CMU-CS-09-122. An application of focusing in the design of programming languages.

      Dependently Typed Programming — Conor McBride

In these lectures, we shall look at dependent types without the mediating lens of the Curry-Howard isomorphism. We shall exploit them for data and for programs as a kind of "logical infrastructure" capturing relative notions of intrinsic validity, in contrast (but not conflict) with the use of dependent types as "logical superstructure", expressing propositions about simply typed things.

      We shall explore the trade-offs involved in working with sets indexed by data expressing size, bounds, type, and other properties which refine what it means for data to be valid. Similarly, we shall index control structures, modelling interaction relative to the state of an uncertain world.

      We shall investigate the mathematical foundations of indexed data and control, with the intention of reflecting them. By manipulating types "in software", we shall begin to find tools which help us navigate the design space of indexing disciplines and negotiate our own compromise between intrinsic validity and extrinsic verification.
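
The canonical example of the size-indexed data mentioned above is the type of vectors: lists indexed by their length, so that taking the head of an empty vector is ruled out by the type system rather than checked at run time. The lectures work in Agda; the following is a hedged sketch of the same idea in Coq.

    (* Vectors: lists indexed by their length. *)
    Inductive vec (A : Type) : nat -> Type :=
    | vnil : vec A 0
    | vcons (x : A) (n : nat) (v : vec A n) : vec A (S n).

    Arguments vnil {A}.
    Arguments vcons {A} x {n} v.

    (* The index in the type records the length. *)
    Definition v2 : vec nat 2 := vcons 1 (vcons 2 vnil).

    (* A total head function: the index S n guarantees non-emptiness, and
       the return annotation tells Coq the vnil case cannot occur. *)
    Definition vhead {A : Type} {n : nat} (v : vec A (S n)) : A :=
      match v in vec _ k return (match k with 0 => unit | S _ => A end) with
      | vnil => tt
      | vcons x _ => x
      end.

    (* Appending vectors adds their lengths, visibly in the type. *)
    Fixpoint vapp {A : Type} {n m : nat} (v : vec A n) (w : vec A m)
        {struct v} : vec A (n + m) :=
      match v in vec _ k return vec A (k + m) with
      | vnil => w
      | vcons x v' => vcons x (vapp v' w)
      end.

Where such indices pay their way, and where they become a burden, is exactly the trade-off the lectures set out to explore.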

      Lecture 1

      This is basically a magic show, more about sketching the possibilities which advanced type systems open, than about communicating technical material. At the same time, it's a demo of the Strathclyde Haskell Enhancement. If you have Haskell (and cabal), you can try this kit with cabal install she.

      The slides are here, now that I've photographed them.

      Further reading:

      • Applicative programming with effects (McBride, Paterson)
      • The essence of the ITERATOR pattern (Gibbons, Oliveira)

      Lecture 2

      Agda basics: some vectors, at a less crazy pace; some examples of improved hygiene; views; simply typed lambda calculus.

      Humourless Marxism alert: there's quite a lot of boilerplate here. Hopefully, on Friday, we'll see the method in this madness.

      Further reading:

      Lecture 3

      ...was the back end of lecture 2.

      Lecture 4

Universes, Ornaments, Algebras is based on my unfinished draft Ornamental Algebras, Algebraic Ornaments. It's an exploration of incremental design of datatypes.

      Proofs-as-Processes: Reasoning about Concurrency in Computational Type Theory — Robert Constable

      The Internet is changing how we work and how we think. It is based on an asynchronous message passing computing model whose services are provided by interacting protocols. Many of the critical protocols are relatively small, a dozen or so lines of code that are very difficult to comprehend and very easy to "get wrong". Many of the critical protocols are accompanied by proofs, but the level of detail required to handle the routine parts obscures the important insights.

      To understand these protocols and prove their properties requires that we reason at high levels of abstraction, at the level of processes and above. Our goal is to raise the abstraction level high enough that we see processes as realizers for declarative statements about the Internet computing model in the same way that in the proofs-as-programs paradigm we see programs as realizers for assertions about sequential computing.

It turns out that Computational Type Theory (CTT) is a very good host for a combined semantics of processes and programs and for a logic of events based on them. These lectures will introduce the basic ideas from CTT, a theory similar to the Calculus of Inductive Constructions (CIC). CTT is implemented in the Nuprl and MetaPRL provers, and CIC is implemented in the Coq prover. The four lectures will have these titles: Computational Type Theory Basics, Processes and Systems in CTT, The Power of Automation, and Game Changing Applications.

      Programming Language Methods for Compositional Security — Anupam Datta

      Compositional security is a recognized central scientific challenge for trustworthy computing. Contemporary systems are built up from smaller components. However, even if each component is secure in isolation, the composed system may not achieve the desired end-to-end security property: an adversary may exploit complex interactions between components to compromise security. Such attacks have shown up in the wild in many different settings, including web browsers and infrastructure, network protocols and infrastructure, and application and systems software. These lectures will report on progress on applying programming language methods to address this problem and will be divided into two parts:

      Part I: Protocol Composition Logic
Protocol Composition Logic (PCL) is a logic for proving security properties of network protocols that use public and symmetric key cryptography. The logic is designed around a process calculus with actions for possible protocol steps including generating new random numbers, sending and receiving messages, and performing decryption and digital signature verification actions. The proof system consists of axioms about individual protocol actions and inference rules that yield assertions about protocols composed of multiple steps. Although assertions are written only using the steps of the protocol, the logic is sound in a strong sense: each provable assertion involving a sequence of actions holds in any protocol run containing the given actions and arbitrary additional actions by a malicious adversary. This approach lets us prove security properties of protocols under attack while reasoning only about the actions of honest parties in the protocol. PCL supports compositional reasoning about complex security protocols and has been applied to a number of industry standards including SSL/TLS, IEEE 802.11i, and Kerberos V5.

      Part II: Logic of Secure Systems
We present a formal framework for compositional reasoning about secure systems. A key insight is to view a trusted system in terms of the interfaces that the various components expose: larger trusted components are built by combining interface calls in known ways; the adversary is confined to the interfaces it has access to, but may combine interface calls without restriction. Compositional reasoning for such systems is based on an extension of rely-guarantee reasoning for system correctness to a setting that involves an adversary whose exact program is not known. It generalizes prior work on Protocol Composition Logic. At a technical level, the approach is based on an expressive concurrent programming language with recursive functions for modeling interfaces and a logic of programs in which compositional reasoning principles are formalized and proved sound with respect to trace semantics. The methods are applied to representative examples of web-based systems and network protocols. As a running example, we consider a mashup system and present a modular proof of integrity in the presence of a class of interface-confined adversaries. We also demonstrate the generality of our methods by presenting a modular proof of symmetric key Kerberos V5 in the presence of a symbolic adversary.

      References:

        A. Datta, A. Derek, J. C. Mitchell, A. Roy, Protocol Composition Logic (PCL), in Electronic Notes in Theoretical Computer Science (Gordon D. Plotkin Festschrift), 2007.
        D. Garg, J. Franklin, D. Kaynar, A. Datta, Compositional System Security with Interface-Confined Adversaries, to appear in Proceedings of the 26th Annual Conference on Mathematical Foundations of Programming Semantics, Electronic Notes in Theoretical Computer Science, May 2010.

      Proving a Compiler: Mechanized Verification of Program Transformations and Static Analyses — Xavier Leroy

      Formal semantics of programming languages supports not only reasoning over individual programs (program correctness), but also reasoning over program transformations and static analyses, as typically found in compilers (tool correctness). With the help of a proof assistant, we can prove semantic preservation properties of program transformations and semantic soundness properties of static analyses that greatly increase the confidence we can have in compilers and program verification tools.

      The topics covered in this lecture include:

      • Non-optimizing compilation of a structured imperative language to a virtual machine, and its correctness proof.
      • Notions of semantic preservation.
      • A panorama of mechanized semantics: small-step, big-step, coinductive big-step, definitional interpreter, denotational semantics.
      • Examples of program optimizations: dead code elimination, register allocation.
      • Design and soundness proof of a generic static analyzer based on abstract interpretation.
      • Compiler verification "in the large": an overview of the CompCert verified C compiler.

      We will use the Coq proof assistant and build on the formalization of the IMP language shown in Benjamin Pierce's "Software Foundations" lectures.
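
For a flavor of the first topic above, here is a hedged, self-contained Coq sketch (far smaller than the IMP development used in the course): a compiler from arithmetic expressions to a stack machine, together with its semantic-preservation proof. The key step is to state the correctness lemma for an arbitrary code continuation and stack, so that the induction goes through.

    Require Import List.
    Import ListNotations.

    (* Source language: arithmetic expressions and their semantics. *)
    Inductive exp : Type :=
    | Const : nat -> exp
    | Plus  : exp -> exp -> exp.

    Fixpoint eval (e : exp) : nat :=
      match e with
      | Const n => n
      | Plus e1 e2 => eval e1 + eval e2
      end.

    (* Target: a stack machine.  On ill-formed code, execution simply
       stops and returns the current stack. *)
    Inductive instr : Type :=
    | IPush : nat -> instr
    | IAdd  : instr.

    Fixpoint exec (p : list instr) (s : list nat) : list nat :=
      match p, s with
      | IPush n :: p', _ => exec p' (n :: s)
      | IAdd :: p', y :: x :: s' => exec p' (x + y :: s')
      | _, _ => s
      end.

    Fixpoint compile (e : exp) : list instr :=
      match e with
      | Const n => [IPush n]
      | Plus e1 e2 => compile e1 ++ compile e2 ++ [IAdd]
      end.

    (* Semantic preservation, generalized over the continuation p and stack s. *)
    Lemma compile_exec_app :
      forall e p s, exec (compile e ++ p) s = exec p (eval e :: s).
    Proof.
      induction e as [n | e1 IHe1 e2 IHe2]; intros; simpl.
      - reflexivity.
      - rewrite <- !app_assoc. rewrite IHe1, IHe2. simpl. reflexivity.
    Qed.

    Theorem compile_correct :
      forall e, exec (compile e) [] = [eval e].
    Proof.
      intro e.
      assert (H := compile_exec_app e [] []).
      rewrite app_nil_r in H. exact H.
    Qed.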

      Course material available here, especially the corresponding Coq development (compiler-verification.tar.gz).

      References:

      Ynot Programming — Greg Morrisett

Next-generation programming languages are attempting to integrate more expressive forms of types (e.g., dependent and refinement types). But attempts to take standard languages, such as Java, ML, or Haskell, and extend them with dependency and refinement run up against a host of problems: unsoundness due to side-effects, incompleteness due to weak logics, and absence of modularity due to a lack of appropriate abstraction mechanisms.

      An alternative is to start with Coq's (very) pure, functional core language (Gallina) and augment it with features needed to do practical programming. In particular, we will consider extensions to Gallina that support higher-order programs with side-effects, and how to smoothly integrate specifications of those effects into types so that we get a modular treatment of programs with dependency and refinement.

      The outcomes for my portion of the summer school are as follows: A student who attends the lectures and does the homework will:

      1. Learn how to write and reason about imperative ADTs and programs in the context of Coq.
      2. Learn about and be able to apply the ideas behind separation logic for reasoning about pointers.
      3. Learn about an approach to tactic-based theorem proving in Coq that leads to smaller and more maintainable proof scripts.
      4. Build a certified type inference engine.

      The primary goal is to teach you about the integration of Hoare-style program logics and type systems, or Hoare Type Theory (HTT). HTT makes it possible to write and reason about imperative, higher-order programs and their computational effects. We can embed HTT into Coq by providing only a very few new primitives. On top of this basic framework, we can define refined versions of HTT that simplify proofs of programs. In particular, we will build a separation-based interface for assembling and reasoning about references and state, and show how this style of interface supports modular reasoning about imperative, abstract data types.
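
To give a flavor of the idea (a deliberately simplified sketch, and not Ynot's actual definitions), one can index the type of a stateful computation by a precondition and a postcondition, so that the specification lives in the type:

    (* Preconditions are predicates on the initial state; postconditions
       relate initial state, result, and final state. *)
    Definition pre (St : Type) : Type := St -> Prop.
    Definition post (St A : Type) : Type := St -> A -> St -> Prop.

    (* A computation is a state-passing function packaged with a proof
       that its result and final state satisfy the postcondition. *)
    Definition HST (St A : Type) (P : pre St) (Q : post St A) : Type :=
      forall s : St, P s -> { r : A * St | Q s (fst r) (snd r) }.

    (* Returning a pure value neither needs nor changes the state. *)
    Definition ret (St A : Type) (a : A) :
      HST St A (fun _ => True) (fun s x s' => (x, s') = (a, s)) :=
      fun s _ => exist _ (a, s) eq_refl.

    (* Reading the state; the specification says exactly what was read. *)
    Definition get (St : Type) :
      HST St St (fun _ => True) (fun s x s' => (x, s') = (s, s)) :=
      fun s _ => exist _ (s, s) eq_refl.

Sequential composition threads the first computation's postcondition into the second's precondition; the lectures refine this picture with separation-logic assertions over a heap rather than a single monolithic state.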

      We will also discuss a technique for constructing proofs in Coq that I call "Chlipala-style" after Adam Chlipala. A Chlipala-style proof is assembled from a collection of re-usable tactics. The tactics are written to be insensitive to as many changes as possible. In particular, the tactics are written to use Ltac's pattern matching to find suitable hypotheses instead of a specific name. Although writing proofs in this style does not come naturally (to me), Adam has convinced me that the resulting proof scripts are easier to maintain than the brute-force approach that I was used to.
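
A hypothetical miniature of this style (the tactic and lemma below are illustrative, not taken from the lectures): the tactic selects hypotheses by their shape using Ltac's goal matching rather than by name, so the script keeps working when names or incidental details of the goal change.

    (* Repeatedly decompose conjunctions in hypotheses and in the goal,
       then close subgoals with whatever hypothesis matches exactly. *)
    Ltac crush_conj :=
      repeat match goal with
             | [ H : _ /\ _ |- _ ] => destruct H
             | [ |- _ /\ _ ] => split
             | [ H : ?P |- ?P ] => exact H
             end.

    Lemma conj_shuffle : forall A B C : Prop,
      A /\ (B /\ C) -> (C /\ A) /\ B.
    Proof. intros A B C H. crush_conj. Qed.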

Along the way, we'll study how to build a unification-based type-inference engine and prove its correctness. We'll first consider a purely functional version of unification, based on a JFP article of Conor McBride, which uses dependent types in a crucial way to ensure termination. Then, using the ideas behind HTT, we'll build an imperative version which you can extract to efficient ML code.

      References:

Building upon the earlier presentations in the Summer School, students will learn how to use Coq to build and prove correct simple imperative data structures (e.g., linked lists, hash tables, etc.) and algorithms (e.g., unification).

      Software Foundations in Coq — Benjamin Pierce

      Picking up where Andrew Tolmach's lectures leave off (and following the second half of the Software Foundations in Coq text), these lectures will develop more advanced techniques for using a proof assistant to formalize and reason about programming languages. Topics will include formalizations of simple imperative programs, Hoare logic, lambda-calculus, type systems, and the correctness of simple compilers.
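
For a taste of the Hoare-logic topic (a hedged, simplified encoding, not the one used in the Software Foundations text), commands can be modeled abstractly by their evaluation relation, Hoare triples defined directly over assertions, and structural rules proved as ordinary lemmas:

    (* States map (numbered) variables to values; assertions are predicates
       on states; a command is modeled by its big-step evaluation relation. *)
    Definition state := nat -> nat.
    Definition assertion := state -> Prop.
    Definition com := state -> state -> Prop.

    (* {P} c {Q}: any terminating run of c from a P-state ends in a Q-state
       (partial correctness). *)
    Definition hoare_triple (P : assertion) (c : com) (Q : assertion) : Prop :=
      forall st st', c st st' -> P st -> Q st'.

    (* The rule of consequence, proved directly from the definition. *)
    Lemma hoare_consequence :
      forall (P P' Q Q' : assertion) (c : com),
        hoare_triple P' c Q' ->
        (forall st, P st -> P' st) ->
        (forall st, Q' st -> Q st) ->
        hoare_triple P c Q.
    Proof.
      unfold hoare_triple. intros P P' Q Q' c Hht Hpre Hpost st st' Hc HP.
      apply Hpost. apply (Hht st st'); auto.
    Qed.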

      Essential Coq from Scratch — Andrew Tolmach

      Coq is a mechanized proof assistant that is widely used for reasoning about programs, programming languages, and programming tools. While it enjoys a very simple foundational core based on type theory, the full Coq system is rich and complex, and can be daunting for beginners. These lectures will give a pragmatic introduction to the essential features of Coq, based on the early chapters of the textbook Software Foundations in Coq by B. Pierce et al. They will be complemented with plenty of hands-on exercises.

      Lecture topics will include:

      • Proving properties of functional programs on numbers and lists.
      • Inductive data, propositions, and proofs.
      • Definition of logical connectives and quantifiers.

To take advantage of these lectures, it is definitely necessary to spend some time working with Coq yourself, so plan to spend some of your evenings doing this! You should arrive with Coq version 8.2 installed on your laptop. You will also need an interactive development environment for Coq, either the CoqIde tool that comes with the system, or, if you are an emacs user, the Proof General tool.
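
As a small example of the first topic above (illustrative, not taken from the course materials), here is the kind of inductive proof about lists that the early chapters and exercises develop:

    Require Import List.
    Import ListNotations.

    (* A length function defined by recursion on the list. *)
    Fixpoint length' {A : Type} (l : list A) : nat :=
      match l with
      | [] => 0
      | _ :: l' => S (length' l')
      end.

    (* The length of an append is the sum of the lengths, proved by
       induction on the first list. *)
    Lemma length'_app : forall (A : Type) (l1 l2 : list A),
      length' (l1 ++ l2) = length' l1 + length' l2.
    Proof.
      intros A l1 l2. induction l1 as [| x l1' IH]; simpl.
      - reflexivity.
      - rewrite IH. reflexivity.
    Qed.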

      Recommended exercises from the first five chapters of Software Foundations:

      • Basics.v
        • ESSENTIALS:
          • all 1 star exercises
          • andb_true_elim2
          • beq_nat_refl
          • more_exercises (ble_nat_refl, zero_nbeq_S, etc.)
          • decreasing
        • STRONGLY RECOMMENDED:
          • mult_comm
          • binary
      • Lists.v
        • ESSENTIALS:
          • 1 star exercises
          • list_exercises
          • list_design
          • beq_natlist
          • apply_exercises
        • STRONGLY RECOMMENDED:
          • alternate
          • bag_functions
          • bag_theorem
          • bag_count_sum
      • Poly.v
        • ESSENTIALS:
          • all 1 star exercises
          • map_rev
          • override_neq
          • beq_nat_eq'
          • plus_n_n_injective
          • override_same
          • beq_nat_trans
        • STRONGLY RECOMMENDED:
          • combine_split (and a suitable inverse)
          • override_permute
      • Ind.v
        • ESSENTIALS:
          • All 1 star exercises
          • Give tactic proof AND proof object that if n is even, then so is 4+n.
          • double_even
          • ev_sum
          • MyProp_0
          • MyProp_plustwo
          • ev_MyProp
        • STRONGLY RECOMMENDED:
          • ev_ev_even
          • gen_dep_practice
          • palindrome
      • Logic.v
        • ESSENTIALS:
          • All 1 star exercises
          • even_ev
          • definition of True as an inductive Prop
          • not_eq_beq_false
          • dist_exists_or
          • R_provability
          • le_exercises (at least some of them)
        • STRONGLY RECOMMENDED:
          • MyProp_iff_ev
          • R_fact
          • no_repeats