ISTQB Foundation Level
  • ISTQB CTFL Syllabus 2018 V3.1
  • Author - Magdalena Olak
2.3.5. Test Types and Test Levels

It is possible to perform any of the test types mentioned above at any test level. To illustrate, examples of functional, non-functional, white-box, and change-related tests are given below across all test levels for a banking application, starting with functional tests:

  • For component testing, tests are designed based on how a component should calculate compound interest (a test sketch follows this list).

  • For component integration testing, tests are designed based on how account information captured at the user interface is passed to the business logic.

  • For system testing, tests are designed based on how account holders can apply for a line of credit on their checking accounts.

  • For system integration testing, tests are designed based on how the system uses an external microservice to check an account holder’s credit score.

  • For acceptance testing, tests are designed based on how the banker handles approving or declining a credit application.
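
To make the first bullet concrete, here is a minimal sketch of what a functional component test for an interest calculation might look like. The function `calculate_compound_interest` and its signature are hypothetical stand-ins invented for this sketch; a real banking component would be tested through its actual interface.

```python
import unittest

def calculate_compound_interest(principal: float, annual_rate: float,
                                periods_per_year: int, years: int) -> float:
    """Hypothetical component under test: A = P * (1 + r/n) ** (n * t)."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

class CompoundInterestTest(unittest.TestCase):
    def test_monthly_compounding_for_one_year(self):
        # 1000.00 at a 12% nominal annual rate, compounded monthly for 1 year
        result = calculate_compound_interest(1000.0, 0.12, 12, 1)
        self.assertAlmostEqual(result, 1126.83, places=2)

    def test_zero_rate_returns_principal(self):
        # A zero rate must leave the principal unchanged
        self.assertEqual(calculate_compound_interest(1000.0, 0.0, 12, 5), 1000.0)

if __name__ == "__main__":
    unittest.main()
```

Each test checks the component's behavior against the specification (the test basis), which is what makes these tests functional rather than structural.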

The following are examples of non-functional tests:

  • For component testing, performance tests are designed to evaluate the number of CPU cycles required to perform a complex total interest calculation.

  • For component integration testing, security tests are designed for buffer overflow vulnerabilities due to data passed from the user interface to the business logic.

  • For system testing, portability tests are designed to check whether the presentation layer works on all supported browsers and mobile devices.

  • For system integration testing, reliability tests are designed to evaluate system robustness if the credit score microservice fails to respond (see the sketch after this list).

  • For acceptance testing, usability tests are designed to evaluate the accessibility of the banker’s credit processing interface for people with disabilities.
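
As an illustration of the reliability bullet above, the following sketch simulates a non-responding credit score service and checks that the system degrades gracefully instead of crashing. `CreditScoreClient`, `CreditScoreTimeout`, and `assess_credit_application` are hypothetical names invented for this sketch, not part of the syllabus.

```python
import unittest
from unittest.mock import patch

class CreditScoreTimeout(Exception):
    pass

class CreditScoreClient:
    def get_score(self, account_id: str) -> int:
        raise NotImplementedError  # the real client would call the microservice

def assess_credit_application(client: CreditScoreClient, account_id: str) -> str:
    # Robustness requirement: a non-responding scoring service must lead to a
    # deferred decision, never a crash or an unchecked approval.
    try:
        score = client.get_score(account_id)
    except CreditScoreTimeout:
        return "DEFERRED"
    return "APPROVED" if score >= 700 else "DECLINED"

class ReliabilityTest(unittest.TestCase):
    def test_application_deferred_when_service_does_not_respond(self):
        # Simulate the microservice timing out on every call
        with patch.object(CreditScoreClient, "get_score", side_effect=CreditScoreTimeout):
            result = assess_credit_application(CreditScoreClient(), "ACC-001")
        self.assertEqual(result, "DEFERRED")

if __name__ == "__main__":
    unittest.main()
```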

The following are examples of white-box tests:

  • For component testing, tests are designed to achieve complete statement and decision coverage (see section 4.3) for all components that perform financial calculations (a coverage sketch follows this list).

  • For component integration testing, tests are designed to exercise how each screen in the browser interface passes data to the next screen and to the business logic.

  • For system testing, tests are designed to cover sequences of web pages that can occur during a credit line application.

  • For system integration testing, tests are designed to exercise all possible inquiry types sent to the credit score microservice.

  • For acceptance testing, tests are designed to cover all supported financial data file structures and value ranges for bank-to-bank transfers.
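
The coverage idea in the first bullet can be shown on a toy component. In the sketch below, the two tests together reach every statement and both outcomes of the single decision in `apply_overdraft_fee`, giving 100% statement and decision coverage; the fee rule is a made-up example, and a tool such as coverage.py (`coverage run --branch`) could be used to measure this.

```python
import unittest

def apply_overdraft_fee(balance: float, fee: float = 35.0) -> float:
    if balance < 0:            # the single decision in this component
        return balance - fee   # reached only when the decision is true
    return balance             # reached only when the decision is false

class OverdraftFeeCoverageTest(unittest.TestCase):
    def test_negative_balance_incurs_fee(self):      # decision outcome: true
        self.assertEqual(apply_overdraft_fee(-10.0), -45.0)

    def test_non_negative_balance_unchanged(self):   # decision outcome: false
        self.assertEqual(apply_overdraft_fee(100.0), 100.0)

if __name__ == "__main__":
    unittest.main()
```

Note that white-box test design starts from the code's structure, as here, rather than from the specification.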

Finally, the following are examples of change-related tests:

  • For component testing, automated regression tests are built for each component and included within the continuous integration framework (a sketch follows this list).

  • For component integration testing, tests are designed to confirm fixes to interface-related defects as the fixes are checked into the code repository.

  • For system testing, all tests for a given workflow are re-executed if any screen on that workflow changes.

  • For system integration testing, tests of the application interacting with the credit scoring microservice are re-executed daily as part of continuous deployment of that microservice.

  • For acceptance testing, all previously failed tests are re-executed after a defect found in acceptance testing is fixed.
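
To ground the regression bullets, the sketch below marks a test so that a continuous integration job can select the regression suite on every check-in. It uses pytest; the marker name `regression` is a project convention (it would be registered in pytest.ini in a real project), not a pytest built-in, and the rounding rule it guards is a hypothetical example of a previously fixed defect.

```python
import pytest

@pytest.mark.regression
def test_interest_component_still_rounds_to_cents():
    # Guards against reintroducing a (hypothetical) previously fixed
    # rounding defect in the interest calculation component.
    assert round(1000.0 * (1 + 0.12 / 12) ** 12, 2) == 1126.83
```

A CI job could run the suite with `pytest -m regression` on each commit, and after a defect fix, `pytest --last-failed` re-executes only the tests that failed on the previous run, matching the acceptance testing bullet above.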

While this section provides examples of every test type across every level, not all software needs to have every test type represented across every level. However, it is important to run applicable test types at each level, especially at the earliest level where the test type occurs.
