Proof-Carrying Code (PCC) is a technique by which a code consumer (e.g., host) can verify that code
provided by an untrusted code producer adheres to a predefined set of safety rules. These rules, also
referred to as the safety policy, are chosen by the code consumer in such a way that they suffice to
guarantee the safe behavior of programs. There are many potential applications of PCC. For
example, for mobile code the code consumer would be an Internet host (e.g., a web browser) and the
code producer a server that sends applets. In operating systems, one can have the kernel act as the
host, with untrusted applications acting as code producers that download and execute code in the
kernel's address space.
These lectures will describe an implementation of PCC consisting of four elements:
- a formal specification language used to express the safety policy
- a formal semantics of the language used by the untrusted code, usually in the form of a logic
relating programs to specifications
- a language used to express the proofs
- an algorithm for validating proofs
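The interaction among these elements can be sketched with a toy model (the policy, proof format, and checker below are invented for illustration and are far simpler than any real PCC system): the producer ships code together with evidence, and the consumer validates the evidence against its own policy before executing anything.

```python
# Toy proof-carrying-code workflow: the consumer defines a safety policy,
# the producer ships code together with evidence (a "proof"), and the
# consumer validates the evidence before running the code.

# Consumer-side safety policy: every memory access must stay within bounds.
def policy_holds(access, bound):
    return 0 <= access < bound

# Producer side: "code" is a list of memory accesses; the "proof" records,
# for each access, the bound it was checked against.
code = [0, 3, 7]
proof = [(a, 8) for a in code]  # evidence: each access is below bound 8

# Consumer-side proof validator: accept the code only if the proof covers
# every access in the code and each claim actually satisfies the policy.
def validate(code, proof, bound):
    claims = dict(proof)
    return all(a in claims and claims[a] == bound and policy_holds(a, bound)
               for a in code)

assert validate(code, proof, bound=8)        # well-behaved code is accepted
assert not validate([9], [(9, 8)], bound=8)  # out-of-bounds access is rejected
```

The key property of the design survives even in this toy: the consumer trusts only its own policy and validator, never the producer's claims.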
These lectures will describe Typed Assembly Language (TAL), a new language-based security mechanism
that extends traditional untyped assembly languages with typing annotations, memory management
primitives, and a sound set of typing rules. These typing rules guarantee the memory safety, control
flow safety, and type safety of TAL programs. Moreover, the typing constructs are expressive enough to
encode most source-language programming features, including records and structures, arrays, higher-order
and polymorphic functions, exceptions, abstract data types, subtyping, and modules. Just as
importantly, TAL is flexible enough to admit many low-level compiler optimizations. Consequently, TAL
is an ideal target platform for type-directed compilers that want to produce verifiably safe code for
use in secure mobile code applications or extensible operating system kernels.
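To give a flavor of per-instruction type checking at the assembly level, the following toy checker (a Python sketch; the instruction set and types are invented for illustration and much coarser than TAL's actual typing rules) tracks a context mapping registers to types and rejects ill-typed uses:

```python
# Toy typed-assembly checker: instructions are checked one at a time
# against a typing context mapping registers to types, in the spirit of
# TAL's per-instruction typing rules (greatly simplified).
def check(program, ctx):
    ctx = dict(ctx)                    # don't mutate the caller's context
    for op, dst, src in program:
        if op == "mov":                # dst inherits the type of src
            if src not in ctx:
                return False
            ctx[dst] = ctx[src]
        elif op == "add":              # both operands must be integers
            if ctx.get(dst) != "int" or ctx.get(src) != "int":
                return False
        elif op == "load":             # src must be a pointer; dst gets int
            if ctx.get(src) != "ptr":
                return False
            ctx[dst] = "int"
        else:
            return False               # unknown instruction: reject
    return True

ctx = {"r1": "int", "r2": "ptr"}
assert check([("load", "r3", "r2"), ("add", "r1", "r3")], ctx)  # well typed
assert not check([("add", "r1", "r2")], ctx)  # adding a pointer: rejected
```

Note that, as in TAL, the check is purely static: the program is never executed, yet code that would misuse a pointer as an integer is ruled out before it runs.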
Information-flow policies provide end-to-end protection of confidential program data.
In this set of lectures, we will study language-based mechanisms for enforcing these kinds of policies.
These mechanisms typically differentiate between secret and public data and prevent the system from
leaking secret information to public locations. These lectures are intended to give the big picture
of recent and current research on information-flow security, particularly on state-of-the-art program
analyses.
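As a concrete illustration of the leaks these mechanisms rule out, consider a toy label-tracking sketch (in Python; the labels and checks are invented for illustration, not any particular analysis): a direct write of secret data to a public location is an explicit flow, while branching on a secret and then writing public-looking data is an implicit flow that still reveals information.

```python
# Toy information-flow tracking: values carry a security label, and a
# SECRET-labelled value reaching a PUBLIC location is an illegal flow.
SECRET, PUBLIC = "secret", "public"

class Labeled:
    def __init__(self, value, label):
        self.value, self.label = value, label

def store_public(location, labeled):
    # Explicit-flow check: only PUBLIC data may reach a public location.
    if labeled.label == SECRET:
        raise ValueError("illegal flow: secret data to public location")
    location[0] = labeled.value

pin = Labeled(1234, SECRET)
out = [None]
try:
    store_public(out, pin)        # direct leak of the secret: rejected
except ValueError:
    pass

# Implicit flow: a value computed under a branch on secret data inherits
# the label of the branch condition (the "program counter" label), so it
# still cannot be written to a public location.
def branch_result(cond_label, value):
    return Labeled(value, cond_label)

leak = branch_result(pin.label, 1)  # value looks public, label is SECRET
assert leak.label == SECRET
```

The implicit-flow case is what distinguishes information-flow analyses from simple access control: even one public bit written under a secret branch leaks part of the secret.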
Lecture notes from a similar topic in March 2002
A program monitor is a process that runs in parallel with an untrusted application and examines
security-sensitive events generated by the application. If the application's behavior deviates from a
specified security policy, the monitor can terminate the program or take action to correct the illegal
behavior. This series of lectures will explore the design and implementation of program monitors and
their associated tools. To illustrate the general principles, we will show how to use these tools to
make Java programs more secure.
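A minimal monitor can be sketched as follows (a Python toy; the event names and the policy are hypothetical): security-sensitive events are routed through the monitor, which checks each one against the policy and intervenes on a violation.

```python
# Toy program monitor: security-sensitive events pass through the monitor,
# which checks them against a policy before they are allowed to take effect.
class PolicyViolation(Exception):
    pass

class Monitor:
    def __init__(self, policy):
        self.policy = policy        # predicate over (event, history)
        self.history = []

    def observe(self, event):
        if not self.policy(event, self.history):
            # A real monitor could instead terminate the program or
            # substitute a corrective action at this point.
            raise PolicyViolation(f"disallowed event: {event}")
        self.history.append(event)

# Hypothetical policy: a file may be read only after an "open" event, and
# network sends are forbidden once any file has been read.
def policy(event, history):
    if event == "read":
        return "open" in history
    if event == "send":
        return "read" not in history
    return True

m = Monitor(policy)
m.observe("open")
m.observe("read")
try:
    m.observe("send")   # read-then-send would leak file data: intervene
except PolicyViolation:
    pass
```

Because the policy is a function of the event history, this style of monitor can enforce temporal properties (such as "no send after read") that a one-shot access check cannot express.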