Software Reliability in an On Demand World
Access to information is the most critical requirement of
information technology solutions today. Since the onset of the
microprocessor era, the IT industry has evolved through three
predictable phases: access to information, integration of disparate systems, and
on demand capability. The fact that this metamorphosis occurred in an era of
economic uncertainty has placed tremendous pressure on information technology companies,
such as IBM, to provide highly available software systems while at the same
time reducing total cost of ownership.
In the traditional SMP server space, hardware
reliability has made significant gains, increasing the demand for software
systems of the utmost quality. In the near future, the emerging scale-out
computer architectures (e.g., blades), with built-in redundancy and a
high-availability strategy based on a "rapid detection and
reconfiguration" approach, will transform this demand into a requirement.
This presentation summarizes how software has
transformed over time to improve the reliability of applications; relates the
effect this transformation has had on the industry; describes the attributes that
make up a highly available system; shows how technologies such as clustering and
virtualization satisfy the requirements of high-availability systems;
explains how to exploit technological advancements to take full advantage of the
increased capabilities of reliable hardware components; and, finally, discusses
the implications of software availability for the future.
Dr. Nick Bowen is Vice President of
UNIX and xSeries Software for IBM's Server Group.
Prior to that, Dr. Bowen held several positions in IBM's Research Division.
Most recently he was Director of Computing Utilities where he spearheaded the
definition of the "intelligent infrastructure" research program. The
group had many exploratory system projects for which they received numerous
awards. In addition, they made significant contributions to AIX, OS/390,
AS/400, and xSeries servers as well as many products
within IBM's software group. Prior to that, Dr. Bowen made technical contributions
to the S/390 Parallel Sysplex effort and led several
of the OS/390 initiatives to embrace Internet and object technologies.
Dr. Bowen received his B.S. degree in computer
science from the
Dr. Bowen's IBM career spans more than 20 years
and includes expertise in high availability, memory management, and parallel
processing. He is a senior member of IEEE, a member of ACM, and a guest
The Next Generation Secure Computing Base (NGSCB)
ISSRE KEYNOTE, NOVEMBER 18, 2003
The Next-Generation Secure Computing Base (NGSCB) provides a high-assurance computing
environment on open computer systems such as PCs. Traditional high-assurance
systems are built from very restricted hardware and software combinations and
are designed with specific security goals. This effectively means that high-assurance
systems are closed systems of restricted flexibility. Further, in
order to maintain security, the trusted computing base,
both hardware and software, must change infrequently.
Conversely, open systems like PCs support a
great variety of hardware and software from many suppliers, and the hardware
and software computing base changes very rapidly so that users can adopt new
devices and features. This openness and flexibility have served the PC ecosystem
well, but they are at odds with the basic design principles of high-assurance
systems. As the PC plays a larger role in our daily lives, it is necessary to
provide a secure execution environment without disturbing the openness that has
contributed to the PC's success. NGSCB is designed to give us the best of both
worlds: openness and security. NGSCB hardware allows two or more partitions to be
established by a small, simple machine monitor. In one partition users can run
a simple operating system designed for security, and in the other they can run a large,
feature-rich operating system supporting any hardware device the user desires.
The monitor, in conjunction with NGSCB hardware, ensures that the secure OS is
protected from viruses and Trojans in the main OS.
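A minimal sketch of the partitioning idea, under loose assumptions: the toy Python model below is not Microsoft's design, but it shows the essential property the paragraph describes, namely that a small monitor mediates every access so code in one partition cannot touch the other's memory.

```python
class Monitor:
    """Toy machine monitor: owns the page table and mediates every access.
    Purely illustrative; not the real NGSCB design."""

    def __init__(self):
        self.owner = {}  # page number -> partition name

    def assign(self, page, partition):
        self.owner[page] = partition

    def access(self, partition, page):
        # Cross-partition access is refused, whatever runs inside the partition.
        if self.owner.get(page) != partition:
            raise PermissionError(f"{partition} may not touch page {page}")
        return f"{partition} accessed page {page}"

mon = Monitor()
mon.assign(0, "secure-os")  # pages of the small, security-focused OS
mon.assign(1, "main-os")    # pages of the large, feature-rich OS

print(mon.access("main-os", 1))  # allowed: its own page
try:
    mon.access("main-os", 0)     # e.g. a virus in the main OS tries to peek
except PermissionError as err:
    print("blocked:", err)
```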
This talk will sketch the NGSCB hardware and
software system, then discuss the software engineering challenges that we
are facing and the steps we are taking to build a secure product. Our design
assumption is that the monitor, the secure OS, and hosted applications will be
under attack from sophisticated adversarial system code. The steps we are taking to
make the system secure, even in the face of such adversaries, range from
aggressive use of formal methods in algorithm development and program
verification to the use of tools in development and testing.
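To give one concrete flavor of what formal methods can mean in practice (the example is mine, not drawn from the NGSCB effort), the sketch below exhaustively enumerates the reachable states of a tiny lock protocol and asserts a safety property in every one.

```python
# Illustrative only: brute-force verification of a safety property over a tiny
# state machine, in the spirit of the formal methods the talk mentions.
# Two tasks share a lock; we check mutual exclusion in every reachable state.

def step(state):
    """Yield the successors of state = (pc0, pc1, lock_holder)."""
    holder = state[2]
    for i in (0, 1):
        if state[i] == "idle" and holder is None:   # task i acquires the lock
            yield tuple("crit" if j == i else state[j] for j in (0, 1)) + (i,)
        if state[i] == "crit" and holder == i:      # task i releases the lock
            yield tuple("idle" if j == i else state[j] for j in (0, 1)) + (None,)

def check():
    init = ("idle", "idle", None)
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        # Safety property: both tasks are never in the critical section at once.
        assert not (s[0] == "crit" and s[1] == "crit"), f"violation in {s}"
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return len(seen)

print("states explored:", check())  # no assertion fired, so the invariant holds
```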
Paul England works in Microsoft's
Security Business Unit where he is the principal architect of NGSCB. Prior to
this he spent 5 years in Microsoft Research. Much of this time was spent on
design, development and evangelization of the ideas now being productized by
Microsoft and its hardware partners. Before Microsoft, Paul was at Bell
Communications Research working on various aspects of distributed systems. He
started his professional career at Bellcore studying the
electrical and optical properties of novel semiconducting
and superconducting materials and devices. Paul holds a Ph.D. in condensed
matter physics from
The Economics of Software Reliability
In most software applications, investments in software reliability compete with
investments in such alternate capabilities as functionality, response time,
adaptability, and speed of development. Investigating the tradeoffs among these
sources of investment raises a number of significant questions about the nature
of software reliability and its interactions with other desired software capabilities.
These questions include:
• What software capabilities are your various
stakeholders really relying on (liveness,
responsiveness, quality of service)? What happens when these aspects of
reliability are not delivered?
• Is success in the marketplace a monotone
function of achieved reliability?
• Is quality really free in all situations?
How can one determine how much investment in reliability is enough in a given
situation? (A toy cost/value sketch follows this list.)
• Are there ways to quantify the tradeoffs
among schedule, cost, and reliability? Is "faster, cheaper, better"
achievable?
• Many current software reliability-related
techniques assume that every requirement, use case, test case, and defect is
equally important. How cost-effective are such value-neutral methods?
• What are the strengths and weaknesses of
emerging "agile methods" in coping with reliability-related investments?
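As a back-of-the-envelope illustration of the "how much is enough" question above, the sketch below pits an invented diminishing-returns value curve against an invented cost curve that grows steeply near perfect reliability. The functional forms and constants are mine, not Boehm's; the qualitative point is that net value typically peaks well short of perfection.

```python
import math

# Illustrative only: invented value and cost curves for reliability investment.

def value(r):
    """Business value of delivered reliability r in [0, 1): diminishing returns."""
    return 100 * (1 - math.exp(-5 * r))

def cost(r):
    """Cost of achieving reliability r: grows steeply as r approaches 1."""
    return 10 / (1.001 - r)

levels = [r / 100 for r in range(100)]
best = max(levels, key=lambda r: value(r) - cost(r))
print(f"net value peaks near r = {best:.2f} (net = {value(best) - cost(best):.1f})")
```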
This talk will explore these and related
questions from the perspective of the emerging discipline of Value-Based
Software Engineering. It will show that, at least in many cases, reasoning
about the economics of software reliability can lead to more satisfactory
outcomes than will the application of value-neutral techniques.
Dr. Barry Boehm is TRW Professor of Software
Engineering in the Computer Science Department at USC, and Director of the USC
Center for Software Engineering.
Future of Computer Software Systems: Commodity or Service?
Department of Defense
Web services (WS) have recently received significant attention from government agencies and
the computer industry. WS provides a new architecture and paradigm for building distributed
computing applications based on XML. It provides a uniform and widely
accessible interface that glues together services implemented on other middleware platforms
over the Internet, using standard Internet protocols such as WSDL, SOAP, and UDDI.
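To make the "uniform interface over standard protocols" point concrete, here is a minimal sketch of the SOAP-over-HTTP plumbing using only Python's standard library. The endpoint URL, namespace, and GetQuote operation are hypothetical; a real client would more likely be generated from the service's WSDL.

```python
import urllib.request

# Hypothetical endpoint and operation, for illustration only.
ENDPOINT = "http://example.com/stockquote"
ENVELOPE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="urn:example:quotes">
      <symbol>IBM</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=ENVELOPE.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",  # SOAP 1.1 over HTTP POST
        "SOAPAction": "urn:example:quotes#GetQuote",
    },
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # the XML response envelope
```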
WS is a young technology, and many issues
still need to be addressed, such as finalizing draft specifications, runtime
verification and validation, and quality assurance by the UDDI servers. Many
keen observers agree that WS represents a significant new trend in how software
systems will be developed, structured, integrated, acquired, and
maintained. For example, instead of buying and maintaining software, software
can be leased and downloaded when needed. Software upgrades thus become
automatic, because the latest version is used whenever the service is called at
runtime. WS implementation requires a loosely coupled architecture, in which new
services can be added at runtime and old services can be replaced.
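The loose coupling just described reduces to a very small pattern: clients bind to a service by name at call time, so the provider can be replaced at runtime without the callers changing. A toy sketch, with all service names invented:

```python
# Toy service registry: late binding by name, so providers can be swapped at
# runtime without touching the callers. All names are invented for illustration.

registry = {}

def publish(name, impl):
    registry[name] = impl  # add a new service, or replace an old one, at runtime

def call(name, *args):
    return registry[name](*args)  # resolved on every call, not at build time

publish("tax/v1", lambda amount: amount * 0.05)
print(call("tax/v1", 100))  # 5.0

publish("tax/v1", lambda amount: amount * 0.07)  # a vendor ships a better service
print(call("tax/v1", 100))  # 7.0 -- same client code, new provider
```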
Furthermore, vendors will compete to supply
the most dependable and/or marketable services on the web, and this also
changes the way software industries earn their revenue. Quality assurance as
well as security and privacy will be important for both service clients and
providers, including those who serve as intermediate agents, such as UDDI servers.
WS provides a new opportunity for quality and
globalization. Companies, regardless of their nationalities, languages, and
culture, must now compete in a global market where the only rule is
the quality of interoperability achieved through architecture and interfaces. It is no longer true
that certain companies have an advantage due to market segmentation, whether
local, national, or international. If a company does not compete in the global
service market, its business will decline as new services are published on the web.
Companies that have great software quality technology will rise above companies
that only have great financial resources.
The concepts of WS extend far beyond software. In
the future, hardware will also have a corresponding service model, in which vendors
supply new components that fit into existing, well-published architectures.
Raymond Paul: As a professional
electronics engineer, software architect, developer, tester, and evaluator for
the past 24 years, Dr. Paul has held many positions in the field of software
engineering. Currently, Dr. Paul serves as the technical director for
command and control (C2) policy. In this position, Dr. Paul supervises command
and control systems engineering development for objective, quantitative, and
qualitative measurements concerning the status of software/systems engineering resources
and evaluates project outcomes to support major investment decisions. This
measurement data is required to meet various Congressional mandates, most notably
the Clinger-Cohen Act.
Dr. Paul holds a doctorate in software engineering and is an active senior member of the IEEE Computer Society. He has published more than 64 papers on software engineering in various technical journals and symposia proceedings, primarily under DoD, ACM, and IEEE sponsorship.