Verification/Validation/Certification
Carnegie Mellon University
18-849b Dependable Embedded Systems
Spring 1999
Author: Eushiuan Tran
Abstract:
In the development of an embedded system, it is important to be able to
determine if the system meets specifications and if its outputs are correct.
This is the process of verification and validation (V & V) and its planning
must start early in the development life cycle. Both aspects are necessary, since a system that meets its specifications is not necessarily technically correct, and vice versa. There are many different V & V techniques, applicable at different stages of the development life cycle. The results of V & V form an important component of the safety case, which is a document used to support certification. Certification is usually pursued either for legal reasons or for economic advantage. The certification process also starts
from the beginning of the life cycle and requires cooperation between the
developer and the regulatory agency from the very start. Thorough V & V does not prove that the system is safe or dependable, and there is always the question of how much testing is enough. In addition, certification does not prove that a system is correct, so it does not eliminate the developer's legal and moral obligations. Therefore, extreme care should be taken in the development of embedded systems to ensure that an appropriate amount of time is spent on V & V, and that certification is not used as proof that a system is correct.
Introduction
Verification, validation, and certification are essential in the life cycle of
any safety critical embedded system. The development of any system is not
complete without rigorous testing and verification that the implementation is
consistent with the specifications. Verification and validation (V & V)
have become important, especially in software, as the complexity of software in
systems has increased, and planning for V & V is necessary from the
beginning of the development life cycle. Over the past 20 to 30 years, software development has evolved from small tasks involving a few people to enormously large tasks involving many people. Because of this change, verification and validation has undergone a corresponding change. Previously, verification and validation was an informal process performed by the software engineer himself.
However, as the complexity of systems increased, it became obvious that
continuing this type of testing would result in unreliable products. It
became necessary to look at V & V as a separate activity in the overall
software development life cycle. The V & V of today is significantly
different from the past as it is practiced over the entire software life cycle.
It is also highly formalized, and its activities are sometimes performed by organizations independent of the software developer. [Andriole86] In addition, V & V is closely linked with certification because its results form a major component of the evidence supporting certification.
While the terms verification and validation are often used interchangeably in papers and texts, there are distinct differences in their meanings. According to the IEEE Standard Glossary of Software Engineering Terminology, verification is defined as "The process
of evaluating a system or component to determine whether the products of a
given development phase satisfy the conditions imposed at the start of that
phase." Validation, on the other hand, is defined as "The process of
evaluating a system or component during or at the end of the development
process to determine whether it satisfies specified requirements." So verification demonstrates only that the output of a phase conforms to the input of that phase, not that the output is actually correct.
Verification will not detect errors resulting from incorrect input
specification and these errors may propagate without detection through later
stages in the development cycle. It is not enough to depend on verification alone; validation is needed to check for problems with the specification and to demonstrate that the system is operational. Finally, certification is "A written guarantee that a system or component complies with its specified requirements and is acceptable for operational use."
Key Concepts
Verification Techniques
There are many different verification techniques, but they all basically fall into two major categories: dynamic testing and static testing.
- Dynamic testing - Testing that involves the execution of a system
or component. Basically, a number of test cases are chosen, where each test
case consists of test data. These input test cases are used to determine output
test results. Dynamic testing can be further divided into three categories -
functional testing, structural testing, and random testing.
- Functional testing - Testing that involves identifying and testing
all the functions of the system as defined within the requirements. This form
of testing is an example of black-box testing since it involves no knowledge of
the implementation of the system.
- Structural testing - Testing that has full knowledge of the
implementation of the system and is an example of white-box testing. It uses
the information from the internal structure of a system to devise tests to
check the operation of individual components. Functional and structural testing both choose test cases that investigate a particular characteristic of the system; a small sketch contrasting the two appears after this list.
- Random testing - Testing that freely chooses test cases among the
set of all possible test cases. The use of randomly determined inputs can
detect faults that go undetected by other systematic testing techniques.
Exhaustive testing, where the input test cases consist of every possible set of input values, is a form of random testing. Although exhaustive testing
performed at every stage in the life cycle results in a complete verification
of the system, it is realistically impossible to accomplish. [Andriole86]
- Static testing - Testing that does not involve the operation of the
system or component. Some of these techniques are performed manually while
others are automated. Static testing can be further divided into two categories: techniques that analyze consistency and techniques that measure some program property.
- Consistency techniques - Techniques that are used to ensure program
properties such as correct syntax, correct parameter matching between
procedures, correct typing, and correct requirements and specifications
translation.
- Measurement techniques - Techniques that measure properties such as
error proneness, understandability, and well-structuredness. [Andriole86]
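To make the black-box/white-box distinction concrete, the following sketch (not taken from the cited references) tests a hypothetical saturating-add routine in C. The functional cases are derived only from a stated requirement, while the structural cases are chosen by inspecting the code so that both outcomes of the clamp decision are exercised.

    /* Hypothetical routine under test: saturating addition of two 8-bit
     * unsigned values (invented for illustration). */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t sat_add(uint8_t a, uint8_t b)
    {
        unsigned sum = (unsigned)a + (unsigned)b;
        return (uint8_t)(sum > 255u ? 255u : sum);   /* clamp on overflow */
    }

    int main(void)
    {
        /* Functional (black-box) cases: derived only from the requirement
         * "the result saturates at 255", with no knowledge of the code. */
        assert(sat_add(2, 3) == 5);
        assert(sat_add(200, 100) == 255);

        /* Structural (white-box) cases: chosen by inspecting the code so
         * that both outcomes of the clamp decision are exercised. */
        assert(sat_add(255, 0) == 255);   /* sum == 255: clamp not taken */
        assert(sat_add(255, 1) == 255);   /* sum  > 255: clamp taken */

        puts("all test cases passed");
        return 0;
    }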
Validation Techniques
There are also numerous validation techniques, including formal methods, fault injection, and dependability analysis. Validation usually takes place at the end of the development cycle and looks at the complete system, as opposed to verification, which focuses on smaller sub-systems.
- Formal methods - Formal methods serve as both a verification technique and a validation technique. The term refers to the use of mathematical and logical techniques to express, investigate, and analyze the specification, design, documentation, and behavior of both hardware and software.
- Fault injection - Fault injection is the intentional activation of
faults by either hardware or software means to observe the system operation
under fault conditions.
- Hardware fault injection - Can also be called physical fault
injection because we are actually injecting faults into the physical hardware.
- Software fault injection - Errors are injected into the memory of the computer by software techniques. Software fault injection is essentially a simulation of hardware fault injection; a minimal sketch appears after this list.
- Dependability analysis - Dependability analysis involves identifying hazards and then proposing methods that reduce the risk of those hazards occurring.
- Hazard analysis - Involves using guidelines to identify hazards,
their root causes, and possible countermeasures.
- Risk analysis - Takes hazard analysis further by identifying the possible consequences of each hazard and their probability of occurring. [Kopetz97]
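As a rough illustration of the software approach, the following minimal C sketch (hypothetical, not drawn from [Kopetz97] or any particular tool) flips one bit of a sensor reading to mimic a memory fault and observes whether a simple plausibility check rejects the corrupted value.

    /* Minimal software fault-injection sketch: one bit of a sensor reading
     * is flipped to mimic a memory fault, and a simple plausibility check
     * is observed under the fault. */
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical plausibility check: readings above 1000 are implausible. */
    static int in_safe_range(uint16_t reading)
    {
        return reading <= 1000;
    }

    /* Flip a single bit of the value, as a software stand-in for a
     * hardware-induced memory fault. */
    static uint16_t inject_bit_flip(uint16_t value, unsigned bit)
    {
        return (uint16_t)(value ^ (1u << bit));
    }

    int main(void)
    {
        uint16_t reading = 512;                            /* healthy value */
        uint16_t faulty  = inject_bit_flip(reading, 15);   /* corrupt the MSB */

        printf("original %u -> %s\n", (unsigned)reading,
               in_safe_range(reading) ? "accepted" : "rejected");
        printf("injected %u -> %s\n", (unsigned)faulty,
               in_safe_range(faulty) ? "accepted" : "rejected");
        return 0;
    }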
The IEEE Standard for Software Verification and Validation (IEEE Std 1012-1998) contains information on software integrity levels, the V & V process, software V & V reporting, administrative, and documentation requirements, and an outline of the software verification and validation plan.
Verification and validation can be performed by the same organization
performing the design, development, and implementation but sometimes it is
performed by an independent testing agency. This is called independent
verification and validation (IV & V). These agencies usually need to be
accredited by a higher organization, to be sure that their results are
dependable. For example, in the United Kingdom, the National Measurement
Accreditation Service has begun to accredit companies for testing computer
software used in safety-critical systems. The first company was accredited in
1994. The approved testing methods include a suite of in-house procedures covering both static and dynamic testing techniques. [Storey96]
Verification and validation is a very time-consuming process, as it consists of planning from the start, the development of test cases, the actual testing, and the analysis of the testing results. It is important that there are people specifically in charge of V & V who can work with the designers. Since exhaustive testing is not feasible for any complex system, a recurring question is how much testing is enough. More testing is generally better, but at some point the cost and time of further testing outweigh the benefit gained from it. The amount of time and money spent on V & V will certainly vary from project to project. In many organizations, testing is simply done until time, money, or both run out. Effective or not, this is the approach used by many companies.
Certification Process
Verification and validation are part of the long certification process for
any embedded system. There are different reasons why a product needs
certification. Sometimes certification is required for legal reasons. For
example, before an aircraft is allowed to fly, it must obtain a license. Certification can also be important for commercial reasons, such as gaining a sales advantage. One of the main reasons for certification is to show competence in specific areas. Certification is usually carried out by government agencies or other organizations with national standing.
Certification can be applied to either organizations or individuals, tools
or methods, or systems or products. Certification of organizations aims at assuring that the organization achieves a certain level of proficiency and that it adheres to certain standards or criteria. This, however, is not applicable to all areas, because while it is easy to measure the procedures of a company, it is much harder to measure the competence with which they are performed. So certification is usually applied to areas such as quality assurance and testing as opposed to design. Certification may also apply to individuals, where workers must be certified in order to practice a certain profession. This usually applies to workers such as doctors, lawyers, accountants, and civil engineers. Tools or methods may also be certified. For example, although DO-178B does not specifically define the tools that must be used, it does give certain requirements for tools used to gain certification. Finally, systems or products may also be certified. [Storey96] In certification, there is always the issue of whether the artifact or the methodology should be certified. This becomes an issue in the certification of products containing software. Because software testing is so difficult, certification must be based on the process of development and on demonstrated performance. This is a case where the methodology (the development process) is certified instead of the artifact (the software).
Even though certification does not occur until the end of a system
development cycle, the planning starts from the very beginning. Because
certification is a complicated process between the developer and the regulatory agency, the certification liaison between the parties must be established early in the process. Next, the developer should submit a verification plan for
approval by the regulatory agency. After the submission, discussion takes place
between the developer and regulatory agency to resolve areas of
misunderstanding and disagreement. Changes to the methods used have to be approved by the regulatory body to ensure that certification will not be
affected. Throughout the entire development life cycle of the product,
documentation must be continually submitted to show that the certification plan
is satisfied. The regulating authority will also hold a series of reviews to
discuss the submitted material. At the end, if the terms of the certification
plan have been satisfied, then a certificate or license is issued. [Storey96]
The safety case is an important document used to support certification. It
contains a set of arguments supported by analytical and experimental evidence
concerning the safety of a design. It is created early in the development cycle
and is then expanded as issues important to safety come up. In the safety
case, the regulatory authority will look to see that all potential hazards have
been identified, and that appropriate steps have been taken to deal with them.
In addition, the safety case must also demonstrate that appropriate development
methods have been adopted and that they have been performed correctly. Items that should be included in the safety case include, but are not limited to, the following: specification of safety requirements, results of hazard and risk
analysis, verification and validation strategy, and results of all verification
and validation activities. The CONTESSE Test Handbook, which is applicable in
the United Kingdom, lists a number of items that should be included in a safety
case. [Storey96]
A potential problem with certification is that manufacturers may use it to avoid their legal and moral obligations. An important aspect of certification is that
it does not prove that the system is correct. Certification only proves that a
system has met certain standards set by the certifying agency. The standards
show that a product has met certain guidelines, but it does not mean that the
system is correct. Any problem with the system is ultimately the responsibility
of the designer and manufacturer, not the certification agency.
In the United States, different government organizations are responsible for
the certification of different products. For example, the FDA is in charge of
the certification of medical devices and the FAA is in charge of the
certification of aircraft. Specifically, the FAA software certification is
based on the standard RTCA/DO-178B. The standard provides information
about all aspects of the software certification process, including the following sections: software planning process, software development process, software verification process, and certification liaison process. The software
verification process includes more than testing, since testing in general
cannot show the absence of errors. Therefore, the software verification process
is usually a combination of reviews, analyses, and testing. Reviews and analyses are performed on the following components. [RTCA92]
- Requirements analyses - To detect and report requirements errors
that may have surfaced during the software requirements and design process.
- Software architecture - To detect and report errors that occurred during the development of the software architecture.
- Source code - To detect and report errors that developed during
source coding.
- Outputs of the integration process - To ensure that the results of the integration process are complete and correct.
- Test cases and their procedures and results - To ensure that the
testing is performed accurately and completely.
The two main objectives of the software testing process are to demonstrate that the software satisfies all of its requirements and to demonstrate that errors which could lead to unacceptable failure conditions have been removed. The testing process includes the following three types of testing. [RTCA92]
- Hardware/software integration testing - To verify that the software
is operating correctly in the computer environment.
- Software integration testing - To verify the interrelationships
between the software requirements and components and to verify the
implementation of the requirements and components in the software architecture.
- Low-level testing - To verify the implementation of software low-level requirements (a small unit-test sketch follows this list).
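The following is a minimal sketch of what a low-level test might look like; the requirement and the limit_command routine are hypothetical and are not taken from DO-178B.

    /* Sketch of a low-level test against a hypothetical low-level
     * requirement: "a commanded value shall be limited to the range
     * [MIN_CMD, MAX_CMD]". */
    #include <assert.h>

    #define MIN_CMD (-100)
    #define MAX_CMD  (100)

    static int limit_command(int cmd)
    {
        if (cmd < MIN_CMD) return MIN_CMD;
        if (cmd > MAX_CMD) return MAX_CMD;
        return cmd;
    }

    int main(void)
    {
        /* In-range, boundary, and out-of-range cases, each traceable to the
         * stated low-level requirement. */
        assert(limit_command(0)    == 0);
        assert(limit_command(-100) == MIN_CMD);
        assert(limit_command(100)  == MAX_CMD);
        assert(limit_command(-250) == MIN_CMD);
        assert(limit_command(250)  == MAX_CMD);
        return 0;
    }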
The standard includes a plan that outlines the information necessary for the
certification of the software and the software verification plan. A section
also details tool qualification. This is necessary when processes in the
standard are eliminated, reduced, or automated by using a software tool without
following the software verification process.
In addition, there is also a section in the standard about alternative methods. This section covers methods that were not included in the earlier sections because they were not yet mature at the time of publication. Some alternative verification methods include the use of formal methods and exhaustive input testing. [RTCA92] Research has been performed on formal methods and the certification of critical systems. There are two reasons why formal methods might be used to support certification. The first is to use formal methods for purposes other than improved quality control and assurance. This can be done in three ways: to supplement traditional processes and documentation, to substitute formal specifications for some traditional documentation, and to substitute formal proofs for some traditional reviews and analyses. The second is to use formal methods to improve quality control and assurance. [Rushby93] However, it is not always possible to formally prove all pieces of software. Another alternative method is exhaustive input testing. This method has limitations too, as it is only feasible if the software component is simple and isolated; a minimal sketch is given below.
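To show why exhaustive input testing is practical only for small, isolated components, the sketch below (reusing the hypothetical saturating-add routine from the earlier verification example) enumerates all 65,536 input combinations of a two-input, 8-bit routine against a simple oracle. A routine with two 32-bit inputs would have roughly 1.8 * 10^19 combinations and could not be tested this way.

    /* Exhaustive input testing sketch: a routine with two 8-bit inputs has
     * only 256 * 256 = 65,536 input combinations, so every case can be
     * checked against a simple reference oracle. (Illustrative only.) */
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t sat_add(uint8_t a, uint8_t b)        /* unit under test */
    {
        unsigned sum = (unsigned)a + (unsigned)b;
        return (uint8_t)(sum > 255u ? 255u : sum);
    }

    int main(void)
    {
        unsigned long failures = 0;

        for (unsigned a = 0; a <= 255; a++) {
            for (unsigned b = 0; b <= 255; b++) {
                unsigned expected = (a + b > 255u) ? 255u : a + b;  /* oracle */
                if (sat_add((uint8_t)a, (uint8_t)b) != expected)
                    failures++;
            }
        }
        printf("%lu failures out of 65536 cases\n", failures);
        return failures != 0;
    }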
The certification process is greatly assisted by and sometimes requires the
use of guidelines and standards. Some documents are specific to a particular
industry while others are generic. Several standards will be briefly mentioned
below. [Storey96]
- IEC 1508 Functional Safety: Safety Related Systems - This
international standard is primarily concerned with safety-related control
systems including electrical, electronic, or programmable electronic
subsystems. However, it also gives more general guidance so that it is applicable to all forms of safety-critical systems.
- MoD Interim Defense Standard 00-55 - Requirements for the Procurement
of Safety-Critical Software in Defense Equipment - Major parts of this British standard deal with safety management issues and software engineering practices.
- HSE Guidelines - Programmable Electronic Systems in Safety-Related
Applications - This is a two-volume set of guidelines on the design and development of safety-critical programmable electronic systems published by the United Kingdom Health and Safety Executive. The first volume contains an introductory guide for non-specialists, while the second volume contains technical information for more specialized engineers.
In addition, there is also ISO 9000 Certification. This international
certification, contrary to popular belief, is not concerned with how to make
well-engineered products or how to supply high-quality service. Instead, it is about maintaining a framework that enables an organization to continually improve its product or service. ISO 9000 certification is about certifying the process. To achieve certification, a company must submit documentation covering such matters as how it selects its suppliers, what information is included on its purchase orders, what checks it makes on incoming goods, and what checks it makes on outgoing items. [ISO99] This certification is very different from P.E. (Professional Engineer) certification, which is based on producing a well-engineered product.
Available tools, techniques, and metrics
There is an abundance of verification and validation tools and techniques. It is important that, in selecting V & V tools, all stages of the development cycle are covered. For example, Table 1 lists the techniques used from the requirements analysis stage through the validation stage. Sources such as the Software Engineer's Reference Book (McDermid, 1992), the Standard for Software Component Testing (British Computer Society, 1995), and standards such as DO-178B and IEC 1508 are useful in selecting appropriate tools and techniques. A small sketch of one of the listed techniques, boundary value analysis, follows the table.
Table 1: Use of testing methods throughout the development life cycle [Storey96]

Requirements analysis and functional specification
  Static: walkthroughs, design reviews, checklists
Top-level design
  Static: walkthroughs, design reviews, checklists, formal proofs, Fagan inspection
Detailed design
  Static: walkthroughs, design reviews, control flow analysis, data flow analysis, symbolic execution, checklists, Fagan inspection, metrics
Implementation
  Static: static analysis
  Dynamic: functional testing, boundary value analysis, structure-based testing, probabilistic testing, error guessing, process simulation, error seeding
Integration testing
  Static: walkthroughs, design reviews, sneak circuit analysis
  Dynamic: functional testing, time and memory tests, boundary value analysis, performance testing, stress testing, probabilistic testing, error guessing
Validation
  Dynamic: functional testing
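As an example of one technique from the table, the following C sketch applies boundary value analysis to a hypothetical range check (the [0, 125] range is invented for illustration), placing test cases just below, on, and just above each boundary.

    /* Boundary value analysis sketch: test cases sit just below, on, and
     * just above the limits of a hypothetical valid temperature range of
     * [0, 125] degrees C. */
    #include <assert.h>

    static int temperature_valid(int deg_c)
    {
        return deg_c >= 0 && deg_c <= 125;
    }

    int main(void)
    {
        /* Lower boundary. */
        assert(!temperature_valid(-1));
        assert( temperature_valid(0));
        assert( temperature_valid(1));

        /* Upper boundary. */
        assert( temperature_valid(124));
        assert( temperature_valid(125));
        assert(!temperature_valid(126));
        return 0;
    }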
There are many organizations and companies that perform independent
verification and validation. For example, NASA has a Software Independent
Verification & Validation Facility which provides IV & V technical
analyses for NASA programs, industries, and other government agencies. [NASA99]
With all the work dealing with the Y2K problem, many Y2K companies have
embraced a new role as IV & V companies. One such example is SEEC. SEEC has
an IV & V Workbench, which is a comprehensive, integrated solution that includes year 2000 remediation. The workbench brings together tools and processes such as the SEEC COBOL Analyst 2000 and SEEC Smart Change 2000 for verification and the SEEC COBOL Slicer and SEEC/TestDirector for validation.
[SEEC99]
Conclusions
In conclusion, verification and validation is a crucial part of the development life cycle of an embedded system. It begins at the requirements analysis stage, where design reviews and checklists are used, and continues through to the validation stage, where functional testing and environmental modelling are done. The results of the V & V process are an important component of the safety case, which is heavily used to support the certification process. A recurring issue is how much verification is enough. Obviously, more testing is better, but at some point the cost and time of further testing outweigh its benefits. This varies from project to project, and only the developer can determine the right balance. In addition, V & V cannot be used to prove that a system is safe or dependable.
There are also several issues concerning the certification process. The first is whether the artifact or the methodology should be certified. The advantage of certifying the methodology is that it is applicable to different products: if the same methodology is applied to different products, each product does not need to be re-certified. The advantage of certifying the artifact is that if the methodology used to develop the artifact changes, the product may not have to be re-certified.
In addition, certification does not prove correctness. If a product receives
certification, it simply means that it has met all the requirements needed to
be met for certification. It does not mean that the product is error free.
Therefore, the manufacturer cannot use certification to avoid assuming its legal or moral obligations.
While verification, validation, and certification are important in the
development of any system, they are even more important in the development of
safety-critical embedded systems. Tests such as those for electromagnetic
compatibility keep electronic systems from harmfully interfering with their surroundings, which may include humans. Certification by the FCC also indicates that products meet certain safety limits.
Future work in this area includes the standardization of certification
methods used in different industries. Currently, these methods vary
considerably. Therefore, not only does this situation limit the exchange of
information between different industries, but it also limits the full use of
the available human resources. The design of IEC 1508 has helped industries not only to maintain a common approach to safety but also to retain the ability to produce their own standards. The use of formal methods in software certification is also a relatively new area, and debate is still occurring as to whether formal methods can accurately verify and validate safety-critical embedded systems.
Annotated Reference List
- [Andriole86] Andriole, Stephen J., editor, Software Validation,
Verification, Testing, and Documentation, Princeton, NJ: Petrocelli Books,
1986.
This book presents an overview of the software verification and validation
process including the planning stage, testing stage, and documentation stage.
- [ISO99] The ISO Information Exchange,
http://www.iso-9000.co.uk/index.html,
accessed May 7, 1999.
This website contains information about ISO 9000 certification.
- [Kopetz97] Kopetz, Hermann, Real-Time Systems: Design Principles for Distributed Embedded Applications, Boston, MA: Kluwer Academic Publishers, 1997.
This book details the design of embedded real-time applications. There is a
chapter on validation techniques for real-time systems.
- [NASA99] NASA Software Independent Verification & Validation Facility,
http://www.ivv.nasa.gov, accessed May 5,
1999.
An IV & V facility provided by NASA.
- [RTCA92] RTCA/DO-178B, Software Considerations in Airborne Systems and Equipment Certification, December 1992.
The standard used by the FAA for software certification, it includes details on
testing procedures and necessary documentation.
- [Rushby93] Rushby, John, "Formal Methods and the Certification of
Critical Systems," SRI-CSL Technical Report, November 1993.
This technical report includes 1.) the technical basis for formal methods, 2.)
the use of formal methods in the specification and verification of software and
hardware requirements, design, and implementation, 3.) the benefits,
weaknesses, and difficulties of applying formal methods to digital systems used
in safety critical applications, and 4.) factors to consider when using formal
methods in support of certification.
- [SEEC99] SEEC, http://www.seec.com,
accessed May 5, 1999.
SEEC is a Y2K company with an independent verification and validation product,
the IV & V Workbench.
- [Storey96] Storey, Neil, Safety Critical Computer Systems, Harlow,
England: Addison-Wesley, 1996.
This book gives a good overview of safety critical computer systems without
assuming previous knowledge of critical systems. There is a good chapter
detailing verification and validation during a product life cycle, and also
another chapter on certification. In addition, various other parts of the book describe particular verification and validation techniques.
Further Reading
- Andriole, Stephen J., editor, Software Validation, Verification,
Testing, and Documentation, Princeton, NJ: Petrocelli Books, 1986.
This book presents an overview of the software verification and validation
process including the planning stage, testing stage, and documentation stage.
- Neumann, B. de, editor, Software Certification, London, New York:
Elsevier Applied Science, 1986.
This book contains a collection of papers dealing with the certification of
computer software.