ISTQB Foundation Level
  • ISTQB CTFL Syllabus 2018 V3.1
  • Author - Magdalena Olak
  • 1 Fundamentals of Testing
    • 1.1 What is Testing?
      • 1.1.1 Typical Objectives of Testing
      • 1.1.2 Testing and Debugging
    • 1.2 Why is Testing Necessary?
      • 1.2.1 Testing’s Contributions to Success
      • 1.2.2 Quality Assurance and Testing
      • 1.2.3 Errors, Defects, and Failures
      • 1.2.4 Defects, Root Causes and Effects
    • 1.3 Seven Testing Principles
    • 1.4 Test Process
      • 1.4.1 Test Process in Context
      • 1.4.2 Test Activities and Tasks
      • 1.4.3 Test Work Products
      • 1.4.4 Traceability between the Test Basis and Test Work Products
    • 1.5 The Psychology of Testing
      • 1.5.1 Human Psychology and Testing
      • 1.5.2 Tester’s and Developer’s Mindsets
  • 2 Testing Throughout the Software Development Lifecycle
    • 2.1 Software Development Lifecycle Models
      • 2.1.1 Software Development and Software Testing
      • 2.1.2 Software Development Lifecycle Models in Context
    • 2.2 Test Levels
      • 2.2.1 Component Testing
      • 2.2.2 Integration Testing
      • 2.2.3 System Testing
      • 2.2.4 Acceptance Testing
    • 2.3 Test Types
      • 2.3.1 Functional Testing
      • 2.3.2 Non-functional Testing
      • 2.3.3 White-box Testing
      • 2.3.4 Change-related Testing
      • 2.3.5 Test Types and Test Levels
    • 2.4 Maintenance Testing
      • 2.4.1 Triggers for Maintenance
      • 2.4.2 Impact Analysis for Maintenance
  • 3 Static Testing
    • 3.1 Static Testing Basics
      • 3.1.1 Work Products that Can Be Examined by Static Testing
      • 3.1.2 Benefits of Static Testing
      • 3.1.3 Differences between Static and Dynamic Testing
    • 3.2 Review Process
      • 3.2.1 Work Product Review Process
      • 3.2.2 Roles and Responsibilities in a Formal Review
      • 3.2.3 Review Types
      • 3.2.4 Applying Review Techniques
      • 3.2.5 Success Factors for Reviews
  • 4 Test Techniques
    • 4.1 Categories of Test Techniques
      • 4.1.1 Categories of Test Techniques and Their Characteristics
    • 4.2 Black-box Test Techniques
      • 4.2.1 Equivalence Partitioning
      • 4.2.2 Boundary Value Analysis
      • 4.2.3 Decision Table Testing
      • 4.2.4 State Transition Testing
      • 4.2.5 Use Case Testing
    • 4.3 White-box Test Techniques
      • 4.3.1 Statement Testing and Coverage
      • 4.3.2 Decision Testing and Coverage
      • 4.3.3 The Value of Statement and Decision Testing
    • 4.4 Experience-based Test Techniques
      • 4.4.1 Error Guessing
      • 4.4.2 Exploratory Testing
      • 4.4.3 Checklist-based Testing
  • 5 Test Management
    • 5.1 Test Organization
      • 5.1.1 Independent Testing
      • 5.1.2 Tasks of a Test Manager and Tester
    • 5.2 Test Planning and Estimation
      • 5.2.1 Purpose and Content of a Test Plan
      • 5.2.2 Test Strategy and Test Approach
      • 5.2.3 Entry Criteria and Exit Criteria (Definition of Ready and Definition of Done)
      • 5.2.4 Test Execution Schedule
      • 5.2.5 Factors Influencing the Test Effort
      • 5.2.6 Test Estimation Techniques
    • 5.3 Test Monitoring and Control
      • 5.3.1 Metrics Used in Testing
      • 5.3.2 Purposes, Contents, and Audiences for Test Reports
    • 5.4 Configuration Management
    • 5.5 Risks and Testing
      • 5.5.1 Definition of Risk
      • 5.5.2 Product and Project Risks
      • 5.5.3 Risk-based Testing and Product Quality
    • 5.6 Defect Management
  • 6 Tool Support for Testing
    • 6.1 Test Tool Considerations
      • 6.1.1 Test Tool Classification
      • 6.1.2 Benefits and Risks of Test Automation
      • 6.1.3 Special Considerations for Test Execution and Test Management Tools
    • 6.2 Effective Use of Tools
      • 6.2.1 Main Principles for Tool Selection
      • 6.2.2 Pilot Projects for Introducing a Tool into an Organization
      • 6.2.3 Success Factors for Tools

1.2.1 Testing’s Contributions to Success

Throughout the history of computing, it is quite common for software and systems to be delivered into operation and, due to the presence of defects, to subsequently cause failures or otherwise not meet the stakeholders’ needs. However, using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, in the appropriate test levels, and at the appropriate points in the software development lifecycle. Examples include:

  • Having testers involved in requirements reviews or user story refinement could detect defects in these work products. The identification and removal of requirements defects reduces the risk of incorrect or untestable features being developed.

  • Having testers work closely with system designers while the system is being designed can increase each party’s understanding of the design and how to test it. This increased understanding can reduce the risk of fundamental design defects and enable tests to be identified at an early stage.

  • Having testers work closely with developers while the code is under development can increase each party’s understanding of the code and how to test it. This increased understanding can reduce the risk of defects within the code and the tests (a short illustrative sketch appears after this list).

  • Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging). This increases the likelihood that the software meets stakeholder needs and satisfies requirements.
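
The syllabus keeps this discussion language-agnostic. Purely as an illustration (the function and tests below are hypothetical and not taken from the syllabus), the following Python sketch shows the kind of small test a tester and developer might write together while the code is still under development: the boundary check in `is_adult` contains a deliberate defect (`>` instead of `>=`), and the test for the boundary value 18 exposes it well before release.

```python
# Illustrative only: hypothetical function and tests, not part of the syllabus.

def is_adult(age: int) -> bool:
    """Return True if a person of the given age should be treated as an adult (18 or older)."""
    # Defect: '>' should be '>=', so the boundary value 18 is misclassified.
    return age > 18


def test_is_adult_at_boundary() -> None:
    # A tester pairing with the developer would insist on checking the boundary value.
    assert is_adult(18), "18-year-olds must be classified as adults"


def test_is_adult_below_boundary() -> None:
    assert not is_adult(17)


if __name__ == "__main__":
    test_is_adult_below_boundary()   # passes
    test_is_adult_at_boundary()      # fails here, exposing the defect before release
```

Changing the comparison to `>=` makes both checks pass. Finding such a boundary defect during development is typically far cheaper than debugging the corresponding failure after delivery; boundary value analysis itself is covered in section 4.2.2.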

In addition to these examples, the achievement of defined test objectives (see section 1.1.1) contributes to overall software development and maintenance success.
