Working Conference on
"The Quality of Numerical Software: Assessment and Enhancement"
A working conference on "The Quality of Numerical Software: Assessment and
Enhancement" was arranged 8-12 July 1996 in Oxford, England. This was the
seventh working conference organized by the IFIP Working Group on Numerical
Software (WG 2.5) on behalf of the IFIP Technical Committee on Software:
Theory and Practice (TC 2).
Information on the city of Oxford is available.
- Françoise Chaitin-Chatelin (Toulouse, France)
- Jeremy DuCroz (Oxford, United Kingdom)
- Bo Einarsson (Linköping, Sweden)
- Wayne Enright (Toronto, Ontario)
- Brian Ford (Oxford, United Kingdom), Co-chair
- Eric Grosse (Murray Hill, New Jersey)
- Richard Hanson (Houston, Texas)
- Elias Houstis (Patras, Greece)
- George Paul (Yorktown Heights, New York)
- John Rice (West Lafayette, Indiana), Co-chair
- Takou Tsuda (Hiroshima, Japan)
- Mladen Vouk (Raleigh, North Carolina)
The proceedings were published by the IFIP publisher Chapman & Hall and were
edited by Ronald Boisvert, National Institute of Standards and Technology, USA.
A list of contents for the proceedings is available.
The publisher's page for the proceedings is available.
- Sunday, 7 July
- Welcome Reception (St. Catherine's College)
- Monday, 8 July
- Session 1: Case-studies of software development in optimization,
differential equations and linear algebra.
- Session 2: The measurement of components of software quality.
- Session 3: The economics of developing high quality software.
- Tuesday, 9 July
- Session 4: Evaluation of high quality software for differential equations,
optimization and data analysis.
- Session 5: Assessments of benchmark construction and use.
- Session 6: Standards for software quality.
- Wednesday, 10 July
- Session 7: Software engineering testing methods applied to numerical software.
- Session 8: Human factors in the development of numerical software.
- Excursion, including visit to Blenheim Palace (home of the Churchill family)
- Theatre, Stratford-upon-Avon (Shakespeare's As You Like It)
- Thursday, 11 July
- Session 9: The impact of arithmetic quality on the quality of numerical software.
- Session 10: The measurement of accuracy and reliability for problem classes
that include unsolvable problems.
- Session 11: Open session.
- Conference Dinner (St. Catherine's College)
- Friday, 12 July
- Session 12: The impact of modularity and software engineering technologies
on the quality of numerical software.
- Session 13: Accuracy control in solving differential equations and for
basic numerical algorithms.
Numerical software is central to our computerized society: it is used to
design airplanes and bridges, to operate manufacturing lines, to control
power plants and refineries, and to analyze future options for financial
markets and the economy. It is essential that it be of high quality:
efficient, accurate, reliable, robust, easy to use, easy to maintain, and
easily adapted to new conditions of use. Quality control of numerical
software faces all the challenges of software in general; it can be large
and complex, expensive to produce, difficult to maintain, and hard to use.
Numerical software has two characteristics that distinguish it from general
software. First, much of it is built upon a long-established foundation of
mathematical and computational knowledge. Second, it routinely solves
problems of types for which it is known that there cannot be generally
reliable solution algorithms. Another characteristic of numerical software,
less fundamental but still disturbing, is that much of it uses algorithms
that are known to be neither fast nor accurate nor reliable.
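The second characteristic can be made concrete. The following minimal Python
sketch is purely illustrative (the spike integrand, its location and width,
and the use of SciPy's quad routine are assumptions chosen for the example):
no quadrature routine that samples an integrand at finitely many points can
be reliable for every integrand.
```python
# A routine that samples the integrand at finitely many points can be fooled
# by a function that is essentially zero at every point it happens to sample.
# The spike location and width below are illustrative assumptions.
import numpy as np
from scipy.integrate import quad

def spike(x, center=0.123456789, width=1e-6):
    """Essentially zero away from `center`, with a narrow Gaussian bump there;
    its true integral over [0, 1] is about width * sqrt(pi)."""
    return np.exp(-((x - center) / width) ** 2)

estimate, reported_error = quad(spike, 0.0, 1.0)
true_value = np.sqrt(np.pi) * 1e-6          # roughly 1.77e-6
print(estimate, reported_error, true_value)
# quad typically reports an estimate near zero with a tiny error bound, even
# though the true integral is about 1.77e-6: the sample points miss the spike.
```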
This conference considers eight somewhat independent
components or aspects of software quality:
- EFFICIENCY
This component measures the consumption of computer
resources: processor time, memory used, network bandwidth utilization, and
similar resources.
- ACCURACY
This measures the quality of the solution (a small measurement sketch follows
this list). Since achieving a prescribed accuracy is rarely easy in numerical
computations, the importance of this component of quality is often
underweighted.
- RELIABILITY
Since prescribed accuracies are not always achieved, one
measures how ``often'' the software fails. Robustness is a related criterion
that measures how ``gracefully'' software fails.
- ADAPTABILITY
This criterion measures the ease with which the
software can adapt, or be ported, to new computing environments.
- MAINTAINABILITY
This criterion measures how easy software is to
maintain, covering purposes such as fixing bugs, making enhancements, and
combining it with other software.
- EASE-OF-USE
In addition to the usual human engineering aspects of
software, this criterion measures the number of user inputs required and the
difficulty of providing good values.
- COST
The expense of developing, buying, maintaining and using
software.
- HUMAN FACTORS
How to plan the development of software; how best to organize a team to work
together; how developers, testers, and users should best interact.
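As a minimal illustration of how the first two components might be measured
in practice, the Python sketch below (the linear-system test problem and the
choice of reference solution are assumptions for the example) records
wall-clock time and relative error for a routine.
```python
import time
import numpy as np

def measure_quality(routine, problem, reference):
    """Measure two quality components for one routine on one problem:
    efficiency (wall-clock time) and accuracy (relative error vs. a reference)."""
    start = time.perf_counter()
    result = routine(problem)
    seconds = time.perf_counter() - start
    relative_error = np.linalg.norm(result - reference) / np.linalg.norm(reference)
    return seconds, relative_error

# Assumed test problem: a dense linear system A x = b.
rng = np.random.default_rng(0)
A = rng.standard_normal((300, 300))
b = rng.standard_normal(300)
reference = np.linalg.solve(A, b)            # taken as the reference solution

print(measure_quality(lambda p: np.linalg.solve(*p), (A, b), reference))
print(measure_quality(lambda p: np.linalg.inv(p[0]) @ p[1], (A, b), reference))
```
Explicitly forming the inverse is slower and typically less accurate than a
direct solve, which is exactly the kind of trade-off such measurements expose.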
It is not easy to measure some of the components of software quality
objectively, and it is difficult to evaluate the trade-offs between them.
Nevertheless, it is important that, wherever possible, meaningful rating
information be available that is appropriate for users, for software
developers, and for researchers.
The problem domains particularly important for this conference are:
- Differential Equations
- Linear Algebra
- Data Analysis
- Basic Tools
- Integrals
- Derivatives
- Elementary functions
- etc.
There are also software methodologies that are applicable to many or all
problem domains. Those particularly important to this conference are:
- Benchmarks. Sets of standard problems.
- Testing. This involves techniques to search for weaknesses and
defects in the software or to provide evidence of its correctness.
- Quality Standards. Existing standards which are concerned with
aspects of quality (such as ISO 9000 or the Capability Maturity Model)
might be discussed: how relevant are they to numerical software?
- Software Properties. Certain general properties, such as modularity, of
software construction may help to achieve high quality.
- Accuracy Control. The two main approaches are:
- a priori analysis (one verifies beforehand that the prescribed accuracy
will be achieved).
- a posteriori analysis (one produces a solution and then verifies that the
prescribed accuracy is achieved).
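A minimal sketch of the a posteriori approach, using adaptive quadrature with
an assumed integrand and tolerance: the routine estimates its own error from
two approximations of different accuracy and refines until the prescribed
tolerance is met.
```python
import math

def simpson(f, a, b):
    """Simpson's rule on a single interval [a, b]."""
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

def adaptive_simpson(f, a, b, tol):
    """A posteriori control: estimate the error from two approximations and
    subdivide until the estimate is below the prescribed tolerance."""
    whole = simpson(f, a, b)
    mid = (a + b) / 2.0
    halves = simpson(f, a, mid) + simpson(f, mid, b)
    # Standard Simpson estimate: the difference of the two approximations,
    # scaled by 1/15, estimates the error of the finer one.
    if abs(halves - whole) / 15.0 < tol:
        return halves
    return (adaptive_simpson(f, a, mid, tol / 2.0) +
            adaptive_simpson(f, mid, b, tol / 2.0))

# Example with an assumed integrand: the exact value of the integral is 2.
approx = adaptive_simpson(math.sin, 0.0, math.pi, 1e-8)
print(approx, abs(approx - 2.0))
```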
It is well known that no single algorithm provides the best quality across
any large class of numerical problems. The polyalgorithm idea expresses the
fact that ``at run time'' one can choose between algorithms. Similarly,
algorithm selection techniques based on problem features (i.e., expert
systems for choosing software modules) reflect this fact; a minimal sketch
of such feature-based selection follows the list of questions below. For many
numerical areas the identification of the best software for a specific
problem or a small class of problems is a complex task well beyond the
capabilities of most users. The conference will address the following
questions about this situation:
- Features. How does one identify features (e.g., ``stiff'', ``unstable'',
``nearly singular'', ``oscillatory'') that are known to be important
to software selection? How does one measure these features?
- Ratings. How does one appropriately document software quality for
casual users? For non-expert code developers? For expert code developers?
For researchers?
- Cost. What is the cost of achieving quality? What is the cost of
selecting high quality software? What is the cost of using lower
quality software?
- Development Costs. Developing high-quality numerical software is a
high-cost process: typically, research codes are developed first, and
high-quality commercial codes are then derived from them. The high cost is
recovered by selling libraries at fairly high prices (thousands of dollars)
to central computer facilities. This development paradigm is passing, and
the future marketplace requires prices an order of magnitude or more lower.
Will cheap, inferior codes drive out expensive, superior codes? Is this
happening for numerical software?
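Finally, the minimal sketch of the polyalgorithm and feature-based selection
idea referred to above (the feature tests, the particular solvers, and the
test problem are assumptions chosen for illustration): at run time the code
inspects simple features of a linear system and dispatches to the routine
expected to give the best quality.
```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, solve, solve_triangular

def solve_linear_system(A, b):
    """Dispatch to a solver based on simple, cheaply computed problem features."""
    if np.allclose(A, np.triu(A)):
        # Feature: upper triangular, so back substitution is cheapest.
        return solve_triangular(A, b)
    if np.allclose(A, A.T):
        try:
            # Feature: symmetric; attempt a Cholesky factorization, which
            # succeeds only if the matrix is also positive definite.
            return cho_solve(cho_factor(A), b)
        except np.linalg.LinAlgError:
            # Symmetric but indefinite: use a symmetric-indefinite solver.
            return solve(A, b, assume_a="sym")
    # No special structure detected: general LU-based solver.
    return solve(A, b)

# Assumed test problem: a symmetric positive definite system.
rng = np.random.default_rng(1)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100.0 * np.eye(100)            # SPD by construction
b = rng.standard_normal(100)
x = solve_linear_system(A, b)
print(np.linalg.norm(A @ x - b))             # residual should be tiny
```
Real selection systems weigh many more features (stiffness, conditioning,
sparsity, oscillation) and many more candidate routines, which is what makes
the selection problem hard for most users.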