Carving and Replaying Differential Unit Test Cases from System Test Cases
Sebastian Elbaum
Paper: Carving and Replaying Differential Unit Test Cases from System Test Cases.pdf
Abstract: Unit test cases are focused and efficient. System tests are effective at exercising complex usage patterns. Differential unit tests (DUTs) are a hybrid of unit and system tests that exploits the strengths of both. They are generated by carving, while a system test case executes, the system components that influence the behavior of the target unit, and then reassembling those components so that the unit can be exercised as it was by the system test. In this paper we show that DUTs retain some of the advantages of unit tests, can be automatically generated, and have the potential to reveal faults related to intricate system executions. We present a framework for carving and replaying DUTs that accounts for a wide variety of strategies and tradeoffs, we implement an automated instance of the framework with several techniques to mitigate test cost and enhance flexibility and robustness, and we empirically assess the efficacy of carving and replaying DUTs on three software artifacts.
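To give a flavor of the carve-and-replay idea, the toy Python sketch below records a unit's inputs and outputs during a "system" run and replays them differentially against a modified version. It is purely illustrative: `price_with_tax` and the decorator are invented here, and the paper's framework captures real program state, not just call arguments.

```python
# Toy sketch of carve-and-replay (illustrative only).
import functools

carved_cases = []  # (args, kwargs, result) tuples recorded during a system run

def carve(func):
    """Record every call to `func` while the system test executes."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        carved_cases.append((args, kwargs, result))
        return result
    return wrapper

@carve
def price_with_tax(price, rate):   # hypothetical target unit
    return round(price * (1 + rate), 2)

# --- "System test": exercises the unit through normal program flow ---
order_total = sum(price_with_tax(p, 0.07) for p in [10.0, 24.99])

# --- Replay as a differential unit test against a new version ---
def price_with_tax_v2(price, rate):
    return round(price + price * rate, 2)

for args, kwargs, expected in carved_cases:
    assert price_with_tax_v2(*args, **kwargs) == expected  # behaviors agree
```

The carved cases act as a focused unit-level oracle: the unit is exercised exactly as the system exercised it, without rerunning the whole system.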
Do Crosscutting Concerns Cause Defects?
Marc Eaddy
Paper: Do Crosscutting Concerns Cause Defects.pdf
Abstract: There is a growing consensus that crosscutting concerns harm code quality. An example of a crosscutting concern is a functional requirement whose implementation is distributed across multiple software modules. We asked the question, “How much does the amount that a concern is crosscutting affect the number of defects in a program?” We conducted three extensive case studies to help answer this question. All three studies revealed a moderate to strong statistically significant correlation between the degree of scattering and the number of defects. This paper describes the experimental framework we developed to conduct the studies, the metrics we adopted and developed to measure the degree of scattering, the studies we performed, the efforts we undertook to remove experimental and other biases, and the results we obtained. In the process, we have formulated a theory that explains why increased scattering might lead to increased defects.
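As a hedged illustration of what a scattering measure might look like (not necessarily the metric adopted in the paper), one could score a concern by the normalized entropy of its implementation's distribution over modules: 0 when the concern is fully localized, approaching 1 when it is spread evenly.

```python
import math

def degree_of_scattering(lines_per_module):
    """Normalized entropy of a concern's distribution over modules.

    `lines_per_module` maps module name -> lines implementing the concern.
    Returns 0.0 if the concern is fully localized, 1.0 if spread evenly.
    (Illustrative measure only, not the paper's exact formula.)
    """
    counts = [c for c in lines_per_module.values() if c > 0]
    total = sum(counts)
    if total == 0 or len(counts) == 1:
        return 0.0
    entropy = -sum((c / total) * math.log(c / total) for c in counts)
    return entropy / math.log(len(counts))  # normalize to [0, 1]

localized = degree_of_scattering({"Parser.java": 120})
scattered = degree_of_scattering({"Parser.java": 40, "Lexer.java": 40, "Ast.java": 40})
```

A correlation study like the paper's would then relate such per-concern scores to defect counts across the code base.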
Tools and Experiments Supporting a Testing-based Theory of Component Composition
Dick Hamlet
Paper: Tools and Experiments Supporting a Testing-based Theory of Component Composition.pdf
Development of software using off-the-shelf components seems to offer
a chance for improving product quality and developer productivity.
This paper reviews a foundational testing-based theory of component
composition, describes tools that implement the theory, and presents
experiments with functional and non-functional component/system properties
that validate the theory and illuminate issues in component composition.
The context for this work is an ideal form of component-based
software development (CBSD) supported by tools. Component developers
describe their components by measuring approximations to functional
and non-functional behavior on a finite collection of subdomains.
Systems designers describe an application-system structure by the
component connections that form it. From measured component descriptions
and a system structure, a CAD tool synthesizes the system properties,
predicting how the system will behave. The system is not built, nor
are any test executions performed. Neither the component sources nor
executables are needed by systems designers. From CAD calculations a
designer can learn (approximately) anything that could be learned by
testing an actual system implementation. Using the CAD tool is often more
efficient than assembling and executing an actual system.
Using tools that support an ideal separation between component and
system development, experiments were conducted to investigate two questions:
- To what extent can unit (that is, component) testing replace system testing?
- What properties of software and subdomains influence the quality of subdomain testing?
The figure above (adapted from Fig. 11 of the TOSEM paper) shows
predictions calculated by the CAD tool for a system design
that combines six components using series, conditional,
and iteration constructions.
Each component was tested using samples from about 50 subdomains.
From these measurements an approximate prediction of the
system's behavior was synthesized on a laptop in about 270 ms, shown as the red step functions
in the figure. For comparison, actual execution values are plotted
in black. The weighted r-m-s error in the run-time prediction is
about 2.4%. The tools run on GNU/Linux, Mac OS X, and Windows platforms,
with source code, documentation, and tutorial examples.
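The flavor of the CAD calculation can be sketched in a few lines of Python. The components and subdomain measurements below are hypothetical (made-up numbers, not the actual tools): each component is described by a step function over its input subdomains, and a series composition is predicted by table lookups alone, without executing the assembled system.

```python
# Sketch of subdomain-based prediction for two components in series.
# Each component is described by measured approximations on subdomains:
# (lo, hi, avg_output, avg_runtime) -- a step function, as in the theory.

def lookup(subdomains, x):
    """Return the (avg_output, avg_runtime) approximation for input x."""
    for lo, hi, out, rt in subdomains:
        if lo <= x < hi:
            return out, rt
    raise ValueError(f"{x} outside measured subdomains")

# Hypothetical measured component descriptions (three subdomains each).
comp_a = [(0, 10, 5.0, 1.0), (10, 20, 15.0, 2.0), (20, 30, 25.0, 3.0)]
comp_b = [(0, 10, 0.5, 4.0), (10, 20, 1.5, 5.0), (20, 30, 2.5, 6.0)]

def predict_series(a, b, x):
    """Predict output and total run time of (a; b) on input x -- no
    execution of the composed system, only lookups on the descriptions."""
    out_a, rt_a = lookup(a, x)
    out_b, rt_b = lookup(b, out_a)   # B's subdomain is chosen by A's output
    return out_b, rt_a + rt_b

out, runtime = predict_series(comp_a, comp_b, 12)
```

Here input 12 falls in A's middle subdomain (approximate output 15.0, run time 2.0), which in turn selects B's middle subdomain (output 1.5, run time 5.0), giving a predicted system output of 1.5 and run time of 7.0.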
Software Engineering for the Planet
The ICSE organisers have worked hard this year to make the conference "greener" - to reduce our impact on the environment. This is partly in response to the growing worldwide awareness that we need to take more care of the natural environment. But it is also driven by a deeper and more urgent concern.
During this century, we will have to face up to a crisis that will make the current economic turmoil look like a walk in the park. Climate change is accelerating, confirming the more pessimistic of the scenarios identified by climate scientists [1-4]. Its effects will touch everything, including the flooding of low-lying lands and coastal cities, the disruption of fresh water supplies for much of the world, the loss of agricultural lands, more frequent and severe extreme weather events, mass extinctions, and the destruction of entire ecosystems.
And there are no easy solutions. We need concerted, systematic change in how we live, to reduce emissions and stabilize the concentration of the greenhouse gases that drive climate change. The goal is not to give up the conveniences of modern life, but to re-engineer them so that we no longer depend on fossil fuels to power our lives. The challenge is massive and urgent - a planetary emergency, the type of emergency that requires all hands on deck. Scientists, engineers, policymakers, and professionals, no matter what their discipline, need to ask how their skills and experience can contribute.
We, as software engineering researchers and practitioners, have many important roles to play. Our information systems help provide the data needed to support intelligent decision making, from individuals trying to reduce their energy consumption to policymakers trying to design effective governmental policies. Our control systems allow us to make smarter use of the available power, and provide the adaptability and reliability to run our technological infrastructure in the face of a more diverse set of renewable energy sources.
The ICSE community in particular has many other contributions to make. We have developed practices and tools to analyze, build and evolve some of the most complex socio-technical systems ever created, and to coordinate the efforts of large teams of engineers. We have developed abstractions that help us to understand complex systems, to describe their structure and behaviour, and to understand the effects of change on those systems. These tools and practices are likely to be useful in our struggle to address the climate crisis, often in strange and surprising ways. For example, can we apply the principles of information hiding and modularity to our attempts to develop coordinated solutions to climate change? What is the appropriate architectural pattern for an integrated set of climate policies? How can we model the problem requirements so that the stakeholders can understand them? How do we debug the models on which policy decisions are based?
This conference session is intended to kick-start a discussion about the contributions that software engineering research can make to tackling the climate crisis. Our aim is to build a community of concerned professionals, and to find new ways to apply our skills and experience to the problem. We will attempt to map out a set of ideas for action, and identify potential roadblocks. We will start to build a broad research agenda, to capture the potential contributions of software engineering research, and discuss strategies for researchers to refocus their research towards this agenda. The session will begin with a short summary of the latest lessons from climate science, and a concrete set of examples of existing software engineering research efforts applied to climate change. We will include an open discussion session, to map out an agenda for action. We invite everyone to come to the session and take up this challenge.
Steve Easterbrook is a professor of computer science at the University of Toronto. He received his Ph.D. (1991) in Computing from Imperial College in London (UK), and was a lecturer at the School of Cognitive and Computing Science, University of Sussex from 1990 to 1995. In 1995 he moved to the US to lead the research team at NASA's Independent Verification and Validation (IV&V) Facility in West Virginia, where he investigated software verification on the Space Shuttle Flight Software, the International Space Station, the Earth Observation System, and several planetary probes. He moved to the University of Toronto in 1999. His research interests range from modelling and analysis of complex software systems to the socio-cognitive aspects of team interaction, including communication, coordination, and shared understanding in large software teams. He has served on the program committees for many conferences and workshops in Requirements Engineering and Software Engineering, and was general chair for RE'01 and program chair for ASE'06. In the summer of 2008, he was a visiting scientist at the UK Met Office Hadley Centre.
Steve's blog is titled Serendipity.
Dr. Spencer Rugaber
is a faculty member in the College of Computing
at the Georgia Institute of Technology. His research interests are
in the area of Software Engineering, specifically reverse engineering
and program comprehension, software evolution and maintenance, and
software design. Dr. Rugaber has served as Program Director for
the Software Engineering and Languages Program at the U.S. National
Science Foundation and as Vice-Chairman of the IEEE Technical
Committee on Reverse Engineering. He is currently the Principal
Investigator on the Georgia Tech portion of the Earth System Curator project.
Technical Briefing (TB1) - Software Governance
Successful software engineering is closely linked to the broader corporate governance arrangements within developer and deployer organisations, yet this relationship is seldom discussed or explored. A substantial proportion of software development projects fail because of complex system 'ownership', misalignments in incentives, and difficulties in securing 'accountability' for critical decisions. This talk introduces the area of governance, reviews the state of the art, examines a set of case studies, and sets out a research agenda for the area.
Anthony Finkelstein is Professor of Software Systems Engineering at University College London (UCL).
He is a Visiting Professor at Imperial College and at the National Institute of Informatics, Tokyo, Japan.
He has published more than 200 scientific papers and secured more than £20m of research funding.
His work spans a broad range from software development tools and process to applications of software technology in the life sciences.
He has a longstanding interest in issues at the boundary of business and software engineering.
He was a winner of the International Conference on Software Engineering 'most influential paper' prize for work on 'viewpoints' and winner of the Requirements Engineering 'most influential paper' prize for work on traceability.
He has served on numerous editorial boards, including those of ACM TOSEM and IEEE TSE, and was founding editor of Automated Software Engineering.
As an industry consultant Anthony Finkelstein has been involved as an expert in legal disputes and in organising enquiries into software development failures.
His interest in software engineering governance stems from this practical experience.
Technical Briefing (TB2) - Green SE: Ideas for Including Energy Efficiency into your Software Projects
Software can play a key role in reducing the energy consumption of
general-purpose computing, including application-level
control (e.g., Power Management APIs in Windows), monitoring
(e.g., Software Energy Profiling), and architecture (e.g.,
energy-efficient middleware design).
The goal of this talk is to address architectural and conceptual issues concerning software energy consumption with respect to established computer layer models. Besides theoretical perspectives, an outlook to industrial relevance is provided.
Software engineering consequences and ideas for dealing with the challenge of incorporating energy efficiency into software projects will complete the story.
Gerald Kaefer is a Senior Engineer at Siemens Corporate Technology in Munich, Germany. His research interests include pervasive and autonomic computing, focusing on emerging software architectures and patterns. He currently works on Green SE and cloud computing topics. Prior to joining Siemens he was an assistant professor at Graz University of Technology. Gerald has
published a number of research papers and articles in the fields of wearable computing, power-aware computing, and software architecture. In his industrial career he has worked on a broad spectrum of projects ranging from embedded system design to enterprise application design.
Technical Briefing (TB3) - Multicore Software Engineering
With multicore processors, servers, PCs, and laptops have become truly parallel machines. Mobile phones and embedded applications will follow. Hardware manufacturers predict a hundred or more processors per chip. General-purpose parallel software, on the other hand, is scarce. What to do with all the processors?
This briefing will give an overview of current hardware developments, provide examples of successfully parallelized, non-numeric applications, and discuss the lessons learnt for software engineering. The good news is that parallelization is not a black art - it can be handled with reasonable effort. However, significant restructuring is typically required when parallelizing existing, serial applications. We also present recent advances in automatic performance tuning and in programming languages that allow the succinct expression of frequent parallel patterns. A comparative case study will shed light on the actual advantages of transactional memory over locking. An outlook on future tools and methods for parallelization will be given.
Time permitting, a summary of the International Workshop of Multicore Software Engineering will be presented.
Questions from the audience will be welcome throughout the briefing.
Walter F. Tichy has been a professor at the University of Karlsruhe, Germany, since 1986, and was dean of the faculty of computer science from 2002 to 2004. Previously, he was a senior scientist at Carnegie Group, Inc., in Pittsburgh, Pennsylvania, and served six years on the faculty of Computer Science at Purdue University in West Lafayette, Indiana. His primary research interests are software engineering and parallelism. In the past two years, he has assembled a strong team for researching all aspects of multicore software.
He earned an M.S. and a Ph.D. in Computer Science from Carnegie Mellon University in 1976 and 1980, respectively. He is a director at the Forschungszentrum Informatik, a technology transfer institute, and co-founder of ParTec, a company specializing in cluster computing. He was program co-chair for the 25th International Conference on Software Engineering (2003). Dr. Tichy is a member of ACM, IEEE Computer Society, and GI.
Victor Pankratius heads the young investigator group "Software Engineering for Multicore Systems" at the University of Karlsruhe. He chairs the international working group "Software Engineering for parallel Systems" (SEPAS) of the Gesellschaft für Informatik. Dr. Pankratius' research concentrates on how to make parallel programming easier for the average programmer. His work on multicore software engineering covers a range of topics, including empirical studies, auto-tuning, language design, and debugging.
Dr. Pankratius holds a Ph.D. with distinction from the University of Karlsruhe, Germany. He received a Diplom degree (M.S.) in Business Computer Science best of class from the University of Münster, Germany. He co-organizes the International Workshop on Multicore Software Engineering. He is a member of ACM, IEEE, and GI.
Wednesday 20th May
Open Lunch-time Briefing Session on the Graduate Software Engineering Reference Curriculum (GSwERC) project – much more than a matter of guesswork?
In 2007, a coalition from academia, industry, government, and professional societies formed the integrated Software and Systems Engineering Curriculum project (iSSEc) to create a new reference curriculum that reflects current development practices and the greater role of software in today’s systems. With more than thirty authors (leaders in software education and industrial software development), the Graduate Software Engineering Reference Curriculum (GSwERC – pronounced Guesswork) is a product of the iSSEc project. The GSwERC primarily addresses the education of students for a professional master’s degree in software engineering; i.e., a degree intended for someone who is primarily interested in pursuing a career in the practice of software development. Version 0.5 of GSwERC was released for broad review in October 2008 (http://asysti.org/issechome.aspx). Version 1.0 is expected by the end of 2009.
GSwERC builds on the SEI curriculum foundations plus those of other initiatives, such as the Software Engineering 2004: Curriculum Guidelines for Undergraduate Degree Programs in Software Engineering (SE2004) and Guide to the Software Engineering Body of Knowledge (SWEBOK).
The briefing session at ICSE 2009 will provide an opportunity to meet with members of the development team, receive information on the developments so far, and enter into an open discussion regarding what has been achieved so far and the future.