
High Software Quality Required: What Now?


While the quality of software used to be achieved "by chance", a minimum quality level is increasingly required today, for example by functional safety or software security standards.
Established quality standards as well as safety and security standards are therefore a good source of measures for achieving the demanded software quality.
In principle, software quality measures can be divided into the following categories:

  • Structured specification and design
  • Guidelines for design and coding
  • Verification and validation methods, i.e. reviews, analyses, and tests

These categories show that software quality cannot be achieved by verification or testing alone. Nevertheless, verification is a fundamental part of the measures to be implemented. In this article, we will therefore take a closer look at verification and validation methods.

How do I ensure that the resulting quality of my software is not left to chance? Is it enough for software developers to informally check that the software behaves as hoped? In this blog, I answer exactly these questions.

Which Strategies are Used to Check the Software? Validation and Verification

What does Validation Mean?

During validation, the software is used as intended in the intended environment. Validation shows whether the software meets the user's expectations with respect to its function.

Validation should be "structured", i.e. planned according to previously defined criteria, e.g.:

  • Tests by developers based on assumed user expectations (e.g., structured execution of all "user stories")
  • Monitored field tests: specifically selected users (beta testers) use the software or the system. Structured collection of feedback is important here.

What does Verification Mean?

Verification checks whether the software fulfills the defined requirements correctly and completely.

Verification should be "structured", i.e. planned with previously defined criteria, e.g.:

  • Tests by developers based on test cases derived from requirements (see the sketch after this list)
  • Analyses by developers; for example, data flow or control flow analysis
  • Reviews by developers using predefined criteria (guidelines, checklists)
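
To make the first point above concrete, here is a minimal sketch of a test case derived directly from a requirement. The requirement ID, function name, and limits are invented for illustration.

#include <assert.h>

/* Hypothetical requirement, invented for illustration:
 * REQ-012: set_level() shall accept levels 0..100 and shall
 * return -1 for any other value, leaving the level unchanged. */
static int current_level = 0;

int set_level(int level)
{
    if (level < 0 || level > 100) {
        return -1;                 /* reject out-of-range values */
    }
    current_level = level;
    return 0;
}

/* Test cases derived directly from REQ-012 */
int main(void)
{
    assert(set_level(50) == 0 && current_level == 50);   /* valid value accepted */
    assert(set_level(101) == -1 && current_level == 50); /* rejected, level unchanged */
    assert(set_level(-1) == -1 && current_level == 50);  /* rejected, level unchanged */
    return 0;
}

Each assert can be traced back to a clause of the requirement, which is what distinguishes requirements-based testing from ad-hoc testing.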

Why do we Perform Reviews and Analyses?

Tests alone are not sufficient to ensure good software quality. Software is generally too complex for all errors to be found by testing. Therefore, the structured, correct creation of the software must be ensured as well. This is done, among other things, through reviews and analyses during the design and implementation of the software.
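
As an invented illustration of what such an analysis finds before a single test runs: a data flow analysis (or a review checklist item such as "every variable is initialized on every path") flags the defect marked in the comments below.

/* Invented example of a data flow finding: without the initialization,
 * 'gain' would be read uninitialized for any mode other than 0 or 1.
 * A test suite may easily miss this path; an analysis finds it statically. */
int compute_output(int mode, int input)
{
    int gain = 1;           /* fix: initialize on all paths (was: int gain;) */

    if (mode == 0) {        /* normal mode */
        gain = 2;
    } else if (mode == 1) { /* boost mode */
        gain = 4;
    }
    return gain * input;    /* this read was undefined for other modes */
}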

Note: Guidelines for design and implementation are also important. However, these are not part of verification or validation and are therefore outside the scope of this article.

There are different structured and formalized approaches for reviews:

  • Walkthrough: A mostly informal presentation of a work product by its author, with the goal of getting feedback regarding errors. A walkthrough requires little effort, but its effectiveness depends heavily on the author's presentation.
  • Peer Review: Review of a work product by a peer of the author.
  • Inspection (e.g., Fagan Inspection): Structured approach to reviewing a work product as a team.

No less important than the procedure itself is structured control over the aspects to be inspected. This is done, for example, through:

  • Review guidelines
  • Checklists with checks on various aspects to be reviewed

At which Levels is Testing Performed?

The fact that software tests are necessary to ensure quality is probably obvious to everyone. But the question of at which levels of software development such tests are useful, or even necessary, often leads to lengthy discussions.

Unit Testing

Unit tests check the smallest parts of the software individually. This typically makes it possible to execute all parts of the code at least once during the tests (see also below: Code Coverage). Typically, unit testing is understood to be the testing of individual functions.
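
A minimal sketch of such a unit test, with an invented function under test; the test executes the function in isolation and checks its results against expected values:

#include <assert.h>

/* Unit under test (invented): median of three integers */
int median3(int a, int b, int c)
{
    if ((a <= b && b <= c) || (c <= b && b <= a)) return b;
    if ((b <= a && a <= c) || (c <= a && a <= b)) return a;
    return c;
}

/* Unit test: the function is exercised individually, without
 * any other part of the software being involved */
int main(void)
{
    assert(median3(1, 2, 3) == 2);
    assert(median3(3, 1, 2) == 2);
    assert(median3(2, 3, 1) == 2);  /* all orderings of the inputs */
    assert(median3(5, 5, 1) == 5);  /* duplicate values */
    assert(median3(-1, 0, 1) == 0); /* negative values */
    return 0;
}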

Depending on the number of hierarchical levels on which tests are performed, a different approach can be useful: instead of a single function, a test case covers several functions including their interaction. In this way, initial integration problems can already be detected at the lowest test level.

Unit tests should usually be performed with tools specially developed for this purpose. The reasons for this are:

  • Tools allow adequate management of a very high number of test cases
  • Tools can prevent production code from being degraded by test-support code (keyword: code instrumentation by tools for the execution of white-box tests)
  • Tools support further process requirements, e.g. the determination of code coverage

For embedded software, it is generally useful to perform software tests on the target hardware. This is particularly important for integration and software tests (see the following sections). Depending on the required quality, unit testing on the development system (PC) may be sufficient.

Integration Testing

In integration testing, a part of the software, i.e. several integrated units, is tested. The correct cooperation of the units at their interfaces is verified.
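
A minimal sketch with invented units: two units that have already been tested individually are integrated, and the test verifies their cooperation at the common interface:

#include <assert.h>

/* Unit 1 (invented): convert a raw 10-bit ADC count to millivolts */
int scale_raw(int raw) { return (raw * 3300) / 1023; }

/* Unit 2 (invented): classify a battery voltage in millivolts */
typedef enum { BATT_LOW, BATT_OK } batt_state_t;
batt_state_t battery_state(int mv) { return (mv < 3000) ? BATT_LOW : BATT_OK; }

/* Integrated behavior: unit 2 consumes the output of unit 1 */
batt_state_t battery_status(int raw) { return battery_state(scale_raw(raw)); }

/* Integration test: both units together, checked at their common interface */
int main(void)
{
    assert(battery_status(1023) == BATT_OK);  /* full scale: 3300 mV */
    assert(battery_status(0)    == BATT_LOW); /* 0 mV */
    assert(battery_status(930)  == BATT_OK);  /* 930 counts = 3000 mV, the threshold */
    return 0;
}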

Test of the Whole Software

The software is tested as a whole, i.e. we check whether the software shows the correct behavior for defined inputs, e.g. whether it produces the correct outputs.

What should be Taken into Account? Important Considerations for Software Testing

Black or White Box Testing

  • Black Box Testing: Testing is limited to creating input values and evaluating the output values of the software.
  • White Box Testing: Black box testing with reasonable effort is sometimes not possible. White box testing (modifying or checking internal information of the software) can help in this case (see the sketch below).
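
The following sketch contrasts the two approaches on an invented module: the black box part stimulates only the public interface, while the white box part additionally checks internal state that the interface does not expose (in a real project this typically requires instrumentation or dedicated test hooks):

#include <assert.h>

/* Module under test (invented): counts rising edges of a button input */
static int press_count;   /* result, published via get_press_count() */
static int last_input;    /* internal state, not part of the interface */

void button_event(int pressed)
{
    if (pressed && !last_input) press_count++;
    last_input = pressed;
}

int get_press_count(void) { return press_count; }

int main(void)
{
    /* Black box: stimulate the interface, check only the visible output */
    button_event(1); button_event(1); button_event(0); button_event(1);
    assert(get_press_count() == 2);

    /* White box: additionally check internal state the interface hides */
    assert(last_input == 1);
    return 0;
}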

Quality of the Test Cases

Defining the test cases for software, integration, or unit tests is not trivial. In the end, the quality of a test depends heavily on the "engineering judgment" of its author.

However, there are methods that allow a structured, repeatable creation and assessment of the quality of tests:

Structured Definition of Test Cases Based on Guidelines/Checklists

The tests are defined based on guidelines. These specify, for example, the aspects to be tested. Examples are:

  • Define test cases based on the analysis of the functional requirements
  • Define test cases to ensure function under all expected conditions (keywords: Boundary Values, Equivalence Classes; see the sketch after this list)
  • Define test cases to ensure robustness (e.g. against faults in the software, user errors, or unexpected environmental conditions)
  • Define test cases based on expected failures (test critical aspects based on experience)
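
As a sketch of the boundary value and equivalence class techniques, assume a hypothetical function that clamps a requested speed to the range 0..120 km/h (function and values invented for illustration):

#include <assert.h>

/* Invented example: the requested speed is clamped to 0..120 km/h */
int clamp_speed(int kmh)
{
    if (kmh < 0)   return 0;
    if (kmh > 120) return 120;
    return kmh;
}

int main(void)
{
    /* Equivalence classes: one representative per class of inputs
     * that the software is expected to treat identically */
    assert(clamp_speed(-50) == 0);    /* class: below the range */
    assert(clamp_speed(60)  == 60);   /* class: within the range */
    assert(clamp_speed(500) == 120);  /* class: above the range */

    /* Boundary values: test directly at and next to the limits,
     * where off-by-one errors typically hide */
    assert(clamp_speed(-1)  == 0);
    assert(clamp_speed(0)   == 0);
    assert(clamp_speed(120) == 120);
    assert(clamp_speed(121) == 120);
    return 0;
}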

Analysis of Requirements Coverage

Establish traceability of test cases to requirements to analyze the coverage of requirements by tests in a structured way.
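
One possible way to make this traceability machine-readable is sketched below; the requirement IDs, test names, and the registry structure are all invented for illustration. A script can then match the printed requirement IDs against the requirements list to find requirements that no test covers.

#include <stdio.h>

/* Each test case carries the IDs of the requirements it verifies */
typedef struct {
    const char *test_name;
    const char *covered_reqs;  /* requirement IDs verified by this test */
    int (*run)(void);          /* returns 1 on pass, 0 on fail */
} test_case_t;

static int test_level_range(void)   { return 1; /* ... real checks ... */ }
static int test_level_default(void) { return 1; /* ... real checks ... */ }

static const test_case_t tests[] = {
    { "test_level_range",   "REQ-012, REQ-013", test_level_range   },
    { "test_level_default", "REQ-014",          test_level_default },
};

int main(void)
{
    /* Run all tests and emit the traceability information */
    for (unsigned i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
        printf("%-20s covers [%s]: %s\n", tests[i].test_name,
               tests[i].covered_reqs, tests[i].run() ? "PASS" : "FAIL");
    }
    return 0;
}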

Analysis of Code Coverage

Modern test tools can capture code coverage during test execution. This analysis is used to check whether the defined test cases exercise the entire implemented functionality or not.

Depending on the type of tests and the desired quality, different metrics can be used, e.g.:

  • Call and caller coverage: Proportion of the functions called, respectively of the function call sites exercised, during test execution.
  • Statement coverage: Percentage of statements executed by the software during test execution.
  • Decision coverage: Percentage of decision outcomes exercised during test execution.
  • Modified condition/decision coverage (MC/DC): Proportion of conditions shown to independently affect the outcome of their decision during test execution (see the sketch after this list).
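
The following sketch illustrates the difference between statement, decision, and MC/DC coverage on a single invented decision with two conditions; the comments list which test vectors each metric requires:

#include <assert.h>

/* Invented decision with two conditions, to compare the metrics */
int heater_on(int too_cold, int power_ok)
{
    if (too_cold && power_ok) {  /* one decision, two conditions */
        return 1;
    }
    return 0;
}

int main(void)
{
    /* Statement coverage: every statement executed at least once;
     * (1,1) reaches 'return 1', (0,0) reaches 'return 0'. */
    assert(heater_on(1, 1) == 1);
    assert(heater_on(0, 0) == 0);

    /* Decision coverage: the decision must evaluate to both true and
     * false; the two tests above already achieve this. */

    /* MC/DC: each condition must be shown to independently change the
     * decision outcome. For 'a && b' this requires (1,1), (0,1), (1,0):
     * (1,1) vs (0,1) isolates 'too_cold', (1,1) vs (1,0) isolates
     * 'power_ok'. */
    assert(heater_on(0, 1) == 0);
    assert(heater_on(1, 0) == 0);
    return 0;
}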

The Safety Net - the Combination Makes the Difference!

Basically, it must be assumed that errors will happen when creating software, and that errors will also be overlooked by any single verification method.

Therefore, it makes sense to apply multiple verification methods at different levels (e.g. from unit level up to the entire software) in order to have a "safety net" that can detect potential errors. Depending on the required quality, this "safety net" can be coarse- or fine-meshed.

How do I Select the Appropriate Verification Methods for My System or Project?

Quality standards (e.g. CMMI) or safety and security standards provide information about which methods are useful for which quality requirements.

Depending on the required software quality, a suitable set of verification methods is selected based on their cost-benefit ratio.

The following list shows established principles sorted by typical cost-benefit ratio:

  1. Testing through monitored use of the software (validation testing, field testing)
  2. Testing of the entire software
    1. Requirements-based testing of the function
    2. Boundary condition, equivalence classes and performance tests
    3. Robustness tests
  3. Design analyses and reviews
  4. Software integration tests (classes of tests analogous to tests of the entire software above)
  5. Code reviews
  6. Unit tests (classes of tests analogous to tests of the entire software above)

Have fun with testing, verifying and validating!

Matthias Eggenberger

Do you have additional questions? Do you have a different opinion? If so, email me or share your thoughts in the comments below!
