ICSE 2009
31st International Conference on Software Engineering, Vancouver, Canada, May 16-24, 2009


T02 Cognitive Crash Dummies: User Interface Prototyping with a Difference

Author:
Bonnie E. John, Human-Computer Interaction Institute, School of Computer Science, Carnegie Mellon University, USA


Held on: Sunday the 17th (morning and afternoon)

Abstract:

Prototyping tools are making it easier to explore a design space, so many different ideas can be generated and discussed, but evaluating those ideas to understand whether they are better, as opposed to just different, is still an intensely human task. User testing, concept validation, focus groups, and design walkthroughs are all expensive in both people’s time and real dollars.

Just as crash dummies in the automotive industry save lives by testing the physical safety of automobiles before they are brought to market, cognitive crash dummies save time, money, and potentially even lives, by allowing designers to automatically test their design ideas before implementing them. Cognitive crash dummies are engineering models of human performance that make quantitative predictions of human behavior on proposed systems without the expense of empirical studies on running prototypes.

When cognitive crash dummies are built into prototyping tools, design ideas can be rapidly expressed and easily evaluated.
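
To make the idea of a quantitative prediction concrete, here is a minimal, hypothetical sketch in the spirit of the Keystroke-Level Model, one classic engineering model of skilled performance: it adds up per-operator time estimates for a scripted task. The operator times are illustrative textbook values and the task script is invented; CogTool's actual models are considerably richer.

```java
import java.util.List;
import java.util.Map;

// A minimal sketch of a Keystroke-Level-Model-style estimate of skilled task time.
// The operator durations are illustrative textbook values, and the task script is
// hypothetical; this is not CogTool's actual modeling engine.
public class KlmSketch {

    // Approximate per-operator times in seconds (K = keystroke, P = point with
    // mouse, H = home hands on device, M = mental preparation).
    static final Map<Character, Double> OPERATOR_SECONDS =
            Map.of('K', 0.2, 'P', 1.1, 'H', 0.4, 'M', 1.35);

    static double predictSeconds(List<Character> script) {
        return script.stream()
                     .mapToDouble(OPERATOR_SECONDS::get)
                     .sum();
    }

    public static void main(String[] args) {
        // Hypothetical task: think, point at a field, home on keyboard, type 4 keys.
        List<Character> script = List.of('M', 'P', 'H', 'K', 'K', 'K', 'K');
        System.out.printf("Predicted skilled execution time: %.2f s%n",
                          predictSeconds(script));
    }
}
```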

This course reviews the state of the art of predictive modeling and presents a tool, CogTool, that integrates rapid prototyping with modeling. Participants will use their own laptops to prototype an interactive system and create a model of skilled performance time on that prototype. The course ends with a review of other tools and a look to the future of predictive modeling.


T05 Parameterized Unit Testing: Principles, Techniques, and Applications in Practice

Authors:
Nikolai Tillmann, Microsoft Research, USA;
Jonathan de Halleux, Microsoft Research, USA;
Tao Xie, North Carolina State University, USA;
Wolfram Schulte, Microsoft Research, USA


Held on: Sunday the 17th (morning)

Abstract:

Among the various types of testing, developer testing has been widely recognized as an important and valuable means of improving software reliability, partly because it can expose bugs early in the development life cycle. Recently, parameterized unit testing has emerged as a promising and effective methodology for separating two developer-testing concerns: the specification of external, black-box behavior (i.e., assertions or specifications) by developers, and the generation and selection of internal, white-box test inputs (i.e., high-code-covering test inputs) by tools. A parameterized unit test (PUT) is simply a test method that takes parameters, calls the code under test, and states assertions. PUTs are supported by JUnit 4 and by .NET test frameworks such as NUnit, xUnit, and MbUnit, and industrial testing tools such as Pex for Microsoft Visual Studio .NET and Agitar's AgitarOne for Java can generate test inputs for PUTs.
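
As a concrete illustration of the definition above, here is a small JUnit 4 example using the Parameterized runner: the test method takes parameters, calls the (hypothetical) code under test, and states an assertion. In a tool-supported workflow of the kind covered in the tutorial, the inputs could be generated automatically rather than listed by hand.

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// A hedged sketch: the Calculator class under test is hypothetical.
@RunWith(Parameterized.class)
public class CalculatorAddTest {

    // Minimal stand-in for the (hypothetical) code under test.
    static class Calculator {
        int add(int x, int y) { return x + y; }
    }

    private final int a;
    private final int b;
    private final int expected;

    public CalculatorAddTest(int a, int b, int expected) {
        this.a = a;
        this.b = b;
        this.expected = expected;
    }

    // Hand-picked inputs for illustration; a test-generation tool could
    // supply high-coverage inputs here instead.
    @Parameters
    public static Collection<Object[]> inputs() {
        return Arrays.asList(new Object[][] {
            { 1, 2, 3 },
            { 0, 0, 0 },
            { -1, 1, 0 },
        });
    }

    // The parameterized unit test: takes parameters, calls the code under
    // test, and states an assertion about the result.
    @Test
    public void addReturnsSum() {
        assertEquals(expected, new Calculator().add(a, b));
    }
}
```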

This tutorial presents the latest research and practice on the principles, techniques, and applications of parameterized unit testing, highlighting success stories, research and education achievements, and future research directions in developer testing. The tutorial will help improve developer skills and knowledge for writing PUTs and give an overview of tool automation in supporting PUTs. Attendees will acquire the skills and knowledge needed to perform research or conduct practice in the field of developer testing and to integrate developer testing techniques into their own research, practice, and education.


T09 Economics-Driven Architecting

Authors:
Rick Kazman, Carnegie Mellon Software Engineering Institute, USA;
Ipek Ozkaya, Carnegie Mellon Software Engineering Institute, USA


Held on: Monday the 18th (morning and afternoon)

Abstract:

Since software engineering artifacts exist to serve the business goals of an enterprise, optimizing the value of software systems is a central concern of software engineering. However, software engineers are not well equipped with techniques that can assist them in making value-based decisions. Existing techniques in software economics address software engineering life-cycle practices, most of which rely on techniques that culminate in implementation and code-level activities. Equipping software engineers with practical techniques for reasoning about software economics will give them the tools, vocabulary, and rationale needed to articulate and analyze the value-driven impact of architectural decisions. This full-day tutorial offers economics-driven techniques to researchers and practitioners who need to reason about and make software architecture decisions. It presents techniques for combining utility, quality attribute reasoning, and software architecture decision-making to best serve the business context. The format involves lectures, the presentation of a case study in which the techniques were applied in a real-world setting, and a hands-on exercise.
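
As a toy illustration of value-based reasoning about architectural decisions, the sketch below ranks hypothetical architectural strategies by an estimated benefit-to-cost ratio. This is not the specific technique taught in the tutorial; the strategy names and numbers are invented purely to show the flavor of such reasoning.

```java
import java.util.List;

// A toy illustration of value-based ranking of architectural strategies: each
// candidate has an estimated benefit (utility gain, arbitrary units) and an
// estimated cost, and candidates are ranked by benefit/cost. A hedged sketch
// only; the strategies and numbers are hypothetical.
public class ValueRankingSketch {

    record Strategy(String name, double benefit, double cost) {
        double ratio() { return benefit / cost; }
    }

    public static void main(String[] args) {
        List<Strategy> candidates = List.of(
            new Strategy("Add caching layer",        80, 20),
            new Strategy("Introduce message broker", 120, 60),
            new Strategy("Refactor data access",     50, 40));

        // Print candidates from highest to lowest estimated value for money.
        candidates.stream()
                  .sorted((a, b) -> Double.compare(b.ratio(), a.ratio()))
                  .forEach(s -> System.out.printf("%-25s benefit/cost = %.2f%n",
                                                  s.name(), s.ratio()));
    }
}
```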


T11 Effective Model-Based Testing

Authors:
Wolfgang Grieskamp, Microsoft Corporation, USA;
Nicolas Kicillof, Microsoft Corporation, USA


Held on: Monday the 18th (morning and afternoon)

Abstract:

Model-Based Testing (MBT) is a prolific research area that has demonstrated a mixed track record when applied to actual systems. The MBT strategy begins with the creation of a behavioral model of the target system from which tests can be automatically generated. When run against an implementation, these tests can find conformance bugs and provide a high degree of confidence that the actual system meets the specification.
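
To illustrate the core idea, here is a minimal sketch: a tiny behavioral model (a finite state machine for a hypothetical login dialog) from which a test sequence is derived by walking the model until every transition has been exercised. Spec Explorer's modeling, slicing, and exploration capabilities are far richer; the states and actions here are invented.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A minimal sketch of the model-based testing idea: derive test steps from a
// behavioral model by covering every transition. The model below is a tiny,
// hypothetical login dialog; real MBT tools explore much larger state spaces.
public class MbtSketch {

    record Transition(String from, String action, String to) {}

    static final List<Transition> MODEL = List.of(
        new Transition("LoggedOut", "enterValidPassword", "LoggedIn"),
        new Transition("LoggedOut", "enterWrongPassword", "LoggedOut"),
        new Transition("LoggedIn",  "logout",             "LoggedOut"));

    public static void main(String[] args) {
        Set<Transition> covered = new HashSet<>();
        String state = "LoggedOut";                // initial state of the model
        List<String> testSequence = new ArrayList<>();

        // Greedy walk: prefer a not-yet-covered transition from the current
        // state; otherwise take any transition to keep moving. The model is
        // strongly connected, so the walk terminates.
        while (covered.size() < MODEL.size()) {
            String current = state;
            Transition next = MODEL.stream()
                .filter(t -> t.from().equals(current) && !covered.contains(t))
                .findFirst()
                .orElseGet(() -> MODEL.stream()
                    .filter(t -> t.from().equals(current))
                    .findFirst()
                    .orElseThrow());
            covered.add(next);
            testSequence.add(next.action() + " -> expect " + next.to());
            state = next.to();
        }

        testSequence.forEach(step -> System.out.println("Test step: " + step));
    }
}
```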

Our team set out to overcome the obstacles that most commonly discourage the adoption and success of MBT in an industrial setting. This involved building an MBT tool called Spec Explorer. The design focused on a short learning curve for developers, while also providing features to make the tool both reliable and effective for medium-to-large projects. These claims were put to the test when Microsoft chose Spec Explorer as a cornerstone of its effort to validate open protocol documentation: more than 30,000 pages specifying more than 200 networking protocols.

This tutorial teaches the underlying concepts, methodology, tools, and application of Model-Based Testing. It builds on the successful, quick ramp-up of hundreds of developers with no previous knowledge of behavioral modeling or formal methods. Topics covered include behavioral modeling with Finite and Abstract State Machines using mainstream programming languages; scenario-oriented and test-purpose modeling; exploration and model checking; state-space slicing; traversals and combinatorial parameter generation; automated test creation; and non-determinism and asynchronicity.


T12 Semantic Web Technologies in Software Engineering

Authors:
Harald C Gall, University of Zurich, Switzerland;
Gerald Reif, University of Zurich, Switzerland


Held on: Monday the 18th (morning and afternoon)

Abstract:

Over the years, the software engineering community has developed various tools that help software developers specify, develop, test, analyze, and maintain software. Many of these tools store their artifacts in proprietary data formats, which hampers interoperation between tools. The Semantic Web, on the other hand, provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. In the Semantic Web, ontologies define the concepts in a domain of discourse and the relationships between these concepts, and thus provide the formal vocabulary that applications use to exchange semantically rich data. Beyond the Web, the technologies developed for the Semantic Web have proven useful in other domains as well, especially when data has to be exchanged between applications from different parties. Software engineering is one of these domains: recent research shows that Semantic Web technologies can reduce the barriers of proprietary data formats and improve the interoperation of tools, even if the tools were not originally designed to work together. Design, architecture, code, tests, or models can be shared across application boundaries, enabling seamless integration of process results.
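
As a minimal illustration of the data model underlying such exchange, the sketch below represents a few subject-predicate-object statements in plain Java rather than with an RDF library; the vocabulary URIs are hypothetical, and a real setting would use formally defined ontology terms and an RDF toolkit.

```java
import java.util.List;

// A minimal sketch of the subject-predicate-object triples at the heart of
// Semantic Web data exchange, written in plain Java instead of an RDF library.
// The namespace and terms below are hypothetical placeholders for what a real
// software engineering ontology would define formally.
public class TripleSketch {

    record Triple(String subject, String predicate, String object) {}

    public static void main(String[] args) {
        String se = "http://example.org/se#";   // hypothetical ontology namespace

        // Facts that two tools could exchange without a shared proprietary format.
        List<Triple> facts = List.of(
            new Triple(se + "ClassFoo", se + "definedIn",   se + "Foo.java"),
            new Triple(se + "Test42",   se + "covers",      se + "ClassFoo"),
            new Triple(se + "Bug17",    se + "reportedFor", se + "ClassFoo"));

        // A trivial "query": which artifacts relate to ClassFoo?
        facts.stream()
             .filter(t -> t.object().equals(se + "ClassFoo"))
             .forEach(t -> System.out.println(t.subject() + " --" + t.predicate()
                                              + "--> " + t.object()));
    }
}
```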

In this tutorial, we will give an introduction to the Semantic Web and its related technologies and tools, and present results from applying these technologies in the domain of software engineering. In a hands-on session, attendees will work with ontologies in an SE context.


T15 Find Your Voice

Author:
Gail E. Harris, Instantiated Software Inc., Canada


Held on: Monday the 18th (morning)

Abstract:

Do you get tongue-tied at important meetings? Do you wonder why your ideas are never adopted? Do you wish your colleagues and clients understood you better?

In the first part of this tutorial we will examine strategies and techniques for communicating, regardless of methodology. You will learn how to analyze the communication situation and your audience. With these techniques you will be able to identify in 5 minutes or less the communication premise and style of your audience. Armed with this knowledge you will be able to adapt your own style to ensure the message you send is the message you intend.

The second part of this tutorial is about teamwork. Working as a team is inevitable in modern software projects, but sometimes teams struggle to collaborate effectively. In this tutorial you will learn how to:

- recognize the signs that something is wrong,

- improve team collaboration,

- build trust amongst team members,

- apply a variety of group problem solving methods, and

- facilitate decisions that everyone supports.

In short, you will learn to be a leader while still being part of the team.


T19 Fundamentals of Dependable Computing

Author:
John C. Knight, University of Virginia, USA


Held on: Tuesday the 19th (morning and afternoon)

Abstract:

Computer systems provide us with a wide range of services upon which we have come to depend. Many computers are embedded in more complex devices that we use, such as automobiles, aircraft, appliances, and medical devices. Others are part of sophisticated information systems that provide us with a variety of important facilities, such as financial services, telecommunications, transport, and energy production. Most systems, even those as apparently benign as computer games, need to pay attention to dependability because failure can be very costly.

Engineering systems to be as dependable as we need them to be is a significant challenge and requires a variety of analysis and development techniques. Dependability has to be engineered into a system during its development; it cannot be treated as an “add-on” if it is to be effective. It is also essential that dependability be treated systematically and comprehensively. Computer engineers, software engineers, and project managers need to understand the major elements of current technology in the field of dependability, yet this material tends to be unfamiliar to researchers and practitioners alike.

This tutorial is designed to equip everyone involved in computing system development with the fundamental principles necessary to tackle issues in dependable system development. The tutorial emphasizes software issues, although hardware and system topics are addressed as necessary. The topics covered include terminology, dependability requirements, types of fault, fault-tree analysis, FMECA, HazOp, formal software specification, software correctness by construction, software and hardware fault tolerance, safety kernels, Byzantine agreement, and fail-stop machines.
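
As a small illustration of one topic from the list above, software fault tolerance, the sketch below applies majority voting over redundant versions of a computation, in the spirit of N-version programming or triple modular redundancy. The versions shown are trivial placeholders; a real system would run independently developed implementations.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.IntUnaryOperator;

// A minimal sketch of majority voting over redundant versions of a computation.
// The three "versions" are trivial placeholders with one injected fault; this is
// an illustration of the voting idea, not a production fault-tolerance framework.
public class MajorityVoteSketch {

    static int vote(List<IntUnaryOperator> versions, int input) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (IntUnaryOperator v : versions) {
            counts.merge(v.applyAsInt(input), 1, Integer::sum);
        }
        // Return the most frequent result; ties are broken arbitrarily.
        return counts.entrySet().stream()
                     .max(Map.Entry.comparingByValue())
                     .orElseThrow()
                     .getKey();
    }

    public static void main(String[] args) {
        List<IntUnaryOperator> versions = List.of(
            x -> x * x,          // correct version
            x -> x * x,          // correct version
            x -> x * x + 1);     // version with an injected fault
        System.out.println("Voted result for input 7: " + vote(versions, 7)); // 49
    }
}
```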


T22 Multicore Software Engineering

Authors:
Walter Tichy, University of Karlsruhe, Institute for Program Structures and Data Organization, Germany;
Victor Pankratius, University of Karlsruhe, Institute for Program Structures and Data Organization, Germany;
Jeffrey Gallagher, Intel Corporation, USA


Held on: Tuesday the 19th (morning and afternoon)

Abstract:

Due to stagnating clock rates, future increases in processor performance will have to come from parallelism. Inexpensive multicore processors with several cores on a chip are becoming standard in PCs, laptops, servers, and embedded devices; manycore chips with hundreds of processors on a single chip are predicted. Software engineers are now asked to write parallel applications of all sorts and need to quickly grasp the relevant aspects of general-purpose parallel programming; this tutorial prepares them for that challenge.

The first part presents state-of-the-art concepts and techniques in multicore software engineering, such as the basics of parallel programming, programming models, design patterns for parallelism, parallelism in modern programming languages, and testing and debugging techniques for multicore software. Experience reports on the parallelization of real-world applications with over a hundred thousand lines of code are used for illustration, and current research topics are addressed.
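
As a minimal illustration of one basic data-parallel pattern of the kind covered in the first part, the sketch below splits an independent computation across cores using Java's parallel streams; the workload is invented purely for illustration.

```java
import java.util.stream.LongStream;

// A minimal sketch of a data-parallel pattern: an independent per-element
// computation (summing squares, chosen only for illustration) is partitioned
// across the available cores by Java's parallel streams.
public class ParallelSumSketch {
    public static void main(String[] args) {
        long n = 1_000_000L;

        long sequential = LongStream.rangeClosed(1, n)
                                    .map(i -> i * i)
                                    .sum();

        long parallel = LongStream.rangeClosed(1, n)
                                  .parallel()          // fork-join across available cores
                                  .map(i -> i * i)
                                  .sum();

        // The reduction is associative, so both orders give the same result.
        System.out.println("Results match: " + (sequential == parallel));
    }
}
```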

The second part consists of hands-on exercises. Intel will sponsor 16 laptops with pre-installed environments and professional tools for multicore performance monitoring and debugging. Participants will work in groups of 2-3 and apply the concepts taught in the first part, learning about important software tools and using them to examine and modify provided sample code that is specifically designed to illustrate key parallel implementations.


T23 Mining Software Engineering Data

Authors:
Tao Xie, North Carolina State University, USA;
Ahmed Hassan, Queen's University, Canada


Held on: Tuesday the 19th (morning)

Abstract:

Software engineering data (such as code bases, execution traces, historical code changes, mailing lists, and bug databases) contains a wealth of information about a project's status, progress, and evolution. Using well-established data mining techniques, practitioners and researchers can explore the potential of this valuable data in order to better manage their projects and to produce higher-quality software systems that are delivered on time and within budget.
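
As a small illustration of one classic mining technique, the sketch below counts files that frequently change together across (invented) commit transactions, a stripped-down form of association-rule mining over version histories.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A minimal sketch of co-change mining over version history: count how often
// pairs of files appear in the same commit. The commit data below is invented;
// a real study would extract transactions from a version control system.
public class CoChangeSketch {
    public static void main(String[] args) {
        List<List<String>> commits = List.of(
            List.of("Parser.java", "Lexer.java"),
            List.of("Parser.java", "Lexer.java", "Ast.java"),
            List.of("Parser.java", "Lexer.java"),
            List.of("Ui.java"));

        Map<String, Integer> pairCounts = new HashMap<>();
        for (List<String> commit : commits) {
            List<String> files = new ArrayList<>(commit);
            Collections.sort(files);                   // canonical pair order
            for (int i = 0; i < files.size(); i++) {
                for (int j = i + 1; j < files.size(); j++) {
                    pairCounts.merge(files.get(i) + " & " + files.get(j), 1, Integer::sum);
                }
            }
        }

        // Pairs that changed together in at least two commits suggest coupling.
        pairCounts.forEach((pair, count) -> {
            if (count >= 2) System.out.println(pair + " co-changed " + count + " times");
        });
    }
}
```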

This tutorial presents the latest research in mining Software Engineering (SE) data, discusses challenges associated with mining SE data, highlights SE data mining success stories, and outlines future research directions. Attendees will acquire the knowledge and skills needed to perform research or conduct practice in the field and to integrate data mining techniques in their own research or practice. Our tutorials on this subject were well attended at ICSE 2007 and ICSE 2008.


T26 Process and Product Architectures and Practices for Achieving Both Agility and High Assurance

Authors:
Barry Boehm, University of Southern California, USA;
Jo Ann Lane, University of Southern California, USA


Held on: Tuesday the 19th (afternoon)

Abstract:

With today’s rapid pace of change and need for system assurance, software system developers are challenged to provide the needed software capabilities in the desired timeframe. Our experiences in helping to define, acquire, develop, assess, and evolve these software systems have taught us that traditional acquisition and development processes do not work well on such systems. This tutorial first sets the stage by summarizing the characteristics of such software systems and the associated challenges. Some of the more challenging software systems are self-adaptive systems, which are often integrated to form systems of systems that cross both business and political boundaries and are based upon lean/six-sigma and agile development processes and flexible, adaptable architectures.

One recent approach to meeting these challenges is the Incremental Commitment Model (ICM), developed in a National Research Council (NRC) study tasked with integrating human factors into the systems development process. The NRC group developed a framework to organize systems engineering and acquisition processes in ways that better accommodate the different strengths and difficulties of hardware, software, and human factors engineering approaches. While its general form is rather complex, its risk-driven nature has enabled us to identify a set of ten common risk patterns and organize them into a decision table that can help new projects converge on a process that fits well with their particular process drivers. The tutorial elaborates on each of the ten cases and provides examples of their use.
