5 Test Management

5.6 Defect Management

Since one of the objectives of testing is to find defects, defects found during testing should be logged. The way in which defects are logged may vary, depending on the context of the component or system being tested, the test level, and the software development lifecycle model. Any defects identified should be investigated and should be tracked from discovery and classification to their resolution (e.g., correction of the defect and successful confirmation testing of the solution, deferral to a subsequent release, or acceptance as a permanent product limitation).

In order to manage all defects to resolution, an organization should establish a defect management process which includes a workflow and rules for classification. This process must be agreed with all those participating in defect management, including architects, designers, developers, testers, and product owners. In some organizations, defect logging and tracking may be very informal.
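
Such a workflow can be made explicit. Below is a minimal sketch in Python of a defect workflow as a small state machine, using the report states listed later in this section; the transition rules themselves are illustrative assumptions, since each organization agrees its own workflow with the participants.

```python
from enum import Enum

class DefectState(Enum):
    """Report states taken from the defect report list later in this section."""
    OPEN = "open"
    DEFERRED = "deferred"
    DUPLICATE = "duplicate"
    WAITING_TO_BE_FIXED = "waiting to be fixed"
    AWAITING_CONFIRMATION = "awaiting confirmation testing"
    REOPENED = "re-opened"
    CLOSED = "closed"

# Illustrative transition rules; every organization defines its own.
ALLOWED_TRANSITIONS = {
    DefectState.OPEN: {DefectState.WAITING_TO_BE_FIXED, DefectState.DEFERRED,
                       DefectState.DUPLICATE},
    DefectState.WAITING_TO_BE_FIXED: {DefectState.AWAITING_CONFIRMATION},
    DefectState.AWAITING_CONFIRMATION: {DefectState.CLOSED, DefectState.REOPENED},
    DefectState.REOPENED: {DefectState.WAITING_TO_BE_FIXED},
    DefectState.DEFERRED: {DefectState.OPEN},
    DefectState.DUPLICATE: set(),
    DefectState.CLOSED: {DefectState.REOPENED},
}

def transition(current: DefectState, target: DefectState) -> DefectState:
    """Move a defect report to a new state, enforcing the agreed workflow."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target

state = DefectState.OPEN
state = transition(state, DefectState.WAITING_TO_BE_FIXED)  # ok
# transition(state, DefectState.CLOSED) would raise ValueError
```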

During the defect management process, some of the reports may turn out to describe false positives, not actual failures due to defects. For example, a test may fail when a network connection is broken or times out. This behavior does not result from a defect in the test object, but is an anomaly that needs to be investigated. Testers should attempt to minimize the number of false positives reported as defects.
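
One way to keep false positives down is to have the test harness separate environment errors from genuine verification failures before anything is logged. The sketch below illustrates the idea for the network example above; the classify_test_failure helper and its outcome labels are hypothetical, not part of the syllabus.

```python
from typing import Callable

def classify_test_failure(run_test: Callable[[], None]) -> str:
    """Run a test callable and classify the outcome before logging anything."""
    try:
        run_test()
    except (ConnectionError, TimeoutError):
        # Broken or timed-out network connection: investigate the test
        # environment first instead of reporting a product defect.
        return "environment anomaly - investigate before logging a defect"
    except AssertionError:
        # Expected vs. actual mismatch in the test object itself.
        return "candidate defect - log a defect report"
    return "pass"
```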

Defects may be reported during coding, static analysis, reviews, dynamic testing, or the use of a software product. Defects may be reported for issues in code or in working systems, or in any type of documentation, including requirements, user stories and acceptance criteria, development documents, test documents, user manuals, and installation guides. In order to have an effective and efficient defect management process, organizations may define standards for the attributes, classification, and workflow of defects.
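
For example, such a standard might classify every defect on two independent scales: severity (impact on stakeholders) and priority (urgency to fix), both of which appear as attributes in the report contents below. A minimal sketch with hypothetical levels, since the syllabus leaves the concrete scheme to each organization:

```python
from enum import Enum

class Severity(Enum):
    """Illustrative classification of a defect's impact on stakeholders."""
    CRITICAL = 1   # blocks further testing or harms users/business
    MAJOR = 2      # significant function wrong, workaround exists
    MINOR = 3      # cosmetic or low-impact deviation

class Priority(Enum):
    """Illustrative classification of the urgency to fix."""
    URGENT = 1     # fix in the current iteration/release
    HIGH = 2
    NORMAL = 3
    LOW = 4        # may be deferred
```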

Typical defect reports have the following objectives:

  • Provide developers and other parties with information about any adverse event that occurred, to enable them to identify specific effects, to isolate the problem with a minimal reproducing test, and to correct the potential defect(s) as needed or to otherwise resolve the problem

  • Provide test managers a means of tracking the quality of the work product and the impact on the testing (e.g., if many defects are reported, testers will have spent much of their time reporting them instead of running tests, and more confirmation testing will be needed)

  • Provide ideas for development and test process improvement

A defect report filed during dynamic testing typically includes the following (a data-structure sketch follows the list):

  • An identifier

  • A title and a short summary of the defect being reported

  • Date of the defect report, issuing organization, and author

  • Identification of the test item (configuration item being tested) and environment

  • The development lifecycle phase(s) in which the defect was observed

  • A description of the defect to enable reproduction and resolution, including logs, database dumps, screenshots, or recordings (if found during test execution)

  • Expected and actual results

  • Scope or degree of impact (severity) of the defect on the interests of stakeholder(s)

  • Urgency/priority to fix

  • State of the defect report (e.g., open, deferred, duplicate, waiting to be fixed, awaiting confirmation testing, re-opened, closed)

  • Conclusions, recommendations and approvals

  • Global issues, such as other areas that may be affected by a change resulting from the defect

  • Change history, such as the sequence of actions taken by project team members with respect to the defect to isolate, repair, and confirm it as fixed

  • References, including the test case that revealed the problem
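
To make the list concrete, the sketch below captures those items as a Python data structure. The field names and types are illustrative choices, not prescribed by the syllabus.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class DefectReport:
    """Fields mirror the report contents listed above; names are illustrative."""
    identifier: str
    title: str
    summary: str
    reported_on: date
    organization: str
    author: str
    test_item: str                 # configuration item being tested
    environment: str
    lifecycle_phase: str           # phase(s) in which the defect was observed
    description: str               # should enable reproduction and resolution
    expected_result: str
    actual_result: str
    severity: str                  # impact on stakeholder interests
    priority: str                  # urgency to fix
    state: str = "open"            # e.g., open, deferred, duplicate, closed, ...
    conclusions: str = ""          # conclusions, recommendations, approvals
    global_issues: str = ""        # other areas a fix might affect
    change_history: List[str] = field(default_factory=list)
    references: List[str] = field(default_factory=list)  # e.g., the revealing test case
```

In practice, a defect management tool would auto-assign the identifier and drive the state field through the agreed workflow, as the next paragraph notes.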

Some of these details may be automatically included and/or managed when using defect management tools, e.g., automatic assignment of an identifier, assignment and update of the defect report state during the workflow, etc. Defects found during static testing, particularly reviews, will normally be documented in a different way, e.g., in review meeting notes.

An example of the contents of a defect report can be found in the ISO/IEC/IEEE 29119-3 standard, which refers to defect reports as incident reports.
