Software Defect Phase Containment

The software element of products continues to grow, and likewise the number of field failures due to software issues. Writing code is relatively straightforward, and some may even say it is fun. Debugging, the process of finding and fixing software defects, is not.

Software defects occur. A team's ability to find and remove them takes discipline and investment, so being able to measure the effectiveness of defect-removal efforts is necessary. Assuming that once a defect is detected it will be resolved, Fagan (1999) defines error detection effectiveness as:

\text{Error Detection Effectiveness} = \frac{\text{Errors Found by Inspection}}{\text{Total Errors in the Product before Inspection}} \times 100\%

Jones (1997) later expanded the approach to include defects found in the field (during use). The idea is to determine the ratio of defects found at a given stage of the inspection/review process to the total number of defects that were available to be found at that stage (i.e., the defect was not created after the inspection), including those found later.
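This extended metric can be sketched as a small function. The function name and the defect counts below are hypothetical, chosen only to illustrate the ratio:

```python
def detection_effectiveness(found_here: int, found_later: int) -> float:
    """Percent of available defects removed at this inspection/review step.

    found_here:  defects caught by this inspection or review
    found_later: defects that existed at this step but escaped and were
                 found in later phases or in the field
    """
    total_available = found_here + found_later
    if total_available == 0:
        return 0.0
    return 100.0 * found_here / total_available

# Example: a design review finds 18 defects; 6 more design-phase defects
# surface later in testing and in the field.
print(detection_effectiveness(18, 6))  # 75.0
```

A step that catches 18 of the 24 defects available to it is thus 75% effective, regardless of defects injected after the review.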

Software Process Phases: Defect Injection and Removal

According to the CMMI maturity model and others [Chrissis 2011], the key to creating great software lies in establishing the software requirements correctly. If the requirements are not clear or complete, the software is likely to have significantly more defects than otherwise. Of course, defects are also injected during design, coding, and later while working to remove bugs.

Each phase of the software development process should include inspections and reviews. Two well-known methods are the Fagan inspection methodology and the Gilb methodology. There are other less formal or strict approaches, and each organization must determine the appropriate approach given its product, liability, and customer expectations.

Catching (or, better, avoiding) defects early has an economic benefit, as "the cost of correction of a requirements defect identified in the field is over 40 times more expensive than if it were detected at the requirements phase" [Boehm, 1981].
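The economic argument is simple arithmetic. A quick sketch, using Boehm's roughly 40x field-vs-requirements cost factor with an invented base cost per fix:

```python
COST_AT_REQUIREMENTS = 100.0  # assumed cost ($) to fix a defect during requirements
FIELD_MULTIPLIER = 40         # Boehm's relative cost factor for field fixes

def escape_cost(escaped_defects: int) -> float:
    """Extra cost of fixing requirements defects in the field vs. catching them early."""
    early = escaped_defects * COST_AT_REQUIREMENTS
    late = escaped_defects * COST_AT_REQUIREMENTS * FIELD_MULTIPLIER
    return late - early

# Five requirements defects that escape to the field cost $19,500 more
# than fixing the same five defects during the requirements phase.
print(escape_cost(5))  # 19500.0
```

Even with a modest base cost, the multiplier makes early containment the cheaper strategy by a wide margin.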

Development Phase | Defect Injection | Defect Removal
Requirements | Faulty, unclear, or missing software requirements or specifications | Requirements review and inspection
High-Level Design | Faulty design, errors, or omissions | Design review and inspection
Low-Level Design | Faulty design, errors, or omissions | Design review and inspection
Coding | Faulty code | Code review and inspection
Unit Test | Poor defect removal | Unit testing
Integration | Poor defect removal | Integration testing
System Test | Poor defect removal | System testing
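Tracking where each defect was injected versus where it was detected yields a phase containment matrix, from which a per-phase containment percentage follows. A sketch with invented counts (the data and structure are illustrative only):

```python
# detected[injected_phase][detected_phase] -> defect count (hypothetical data)
detected = {
    "Requirements": {"Requirements": 12, "Design": 4, "Coding": 2, "Test": 3, "Field": 1},
    "Design":       {"Design": 15, "Coding": 5, "Test": 4, "Field": 2},
    "Coding":       {"Coding": 30, "Test": 8, "Field": 3},
}

for phase, row in detected.items():
    injected_total = sum(row.values())        # all defects injected in this phase
    caught_in_phase = row.get(phase, 0)       # those caught before escaping the phase
    containment = 100.0 * caught_in_phase / injected_total
    print(f"{phase}: {containment:.1f}% contained")
```

Rows with low containment point to the review or inspection step most in need of improvement.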

Keep in mind that the development phases may have different names in your organization, or the basic approach may differ. Yet the basic idea of setting requirements followed by design, coding, and testing is fairly universal. Three common software development processes are prototyping, waterfall, and spiral.

Prototyping includes multiple cycles of:

  • Determine Objectives
  • Cycles of Develop, Refine, Demonstrate
  • Test
  • Implement

Waterfall includes:

  • Requirements
  • Design
  • Implementation
  • Testing
  • Integration
  • Deployment/Installation
  • Maintenance

The spiral model includes a requirements phase and multiple prototype phases, each consisting of:

  • Determine Objectives
  • Identify and resolve risks
  • Develop and Test
  • Plan the next iteration

There are other approaches, both more and less formal. The idea is to find and resolve software issues early. The ability to measure the effectiveness of a review, inspection, or testing step allows the team to improve its software development process.

Fagan, Michael E. 1999. "Design and Code Inspections to Reduce Errors in Program Development." IBM Systems Journal 38 (2/3): 258.

Jones, Capers. 1997. Applied Software Measurement: Assuring Productivity and Quality. New York: McGraw-Hill. (A third edition was published in 2008.)

Chrissis, Mary Beth, Mike Konrad, and Sandy Shrum. 2011. CMMI for Development: Guidelines for Process Integration and Product Improvement (CMMI-DEV, Version 1.3).

Boehm, Barry W. 1981. Software Engineering Economics. Englewood Cliffs, NJ: Prentice-Hall.

This entry was posted in I. Reliability Management by Fred Schenkelberg.

About Fred Schenkelberg

I am an experienced reliability engineering and management consultant with FMS Reliability, a consulting firm I founded in 2004. I left Hewlett Packard (HP)’s Reliability Team, where I helped create a culture of reliability across the organization, to assist other organizations. Given the scope of my work, I am considered an international authority on reliability engineering. My passion is working with teams to improve product reliability, customer satisfaction, and efficiencies in product development; and to reduce product risk and warranty costs. I have a Bachelor of Science in Physics from the United States Military Academy and a Master of Science in Statistics from Stanford University.