ISTQB Foundation Level
  • ISTQB CTFL Syllabus 2018 V3.1
  • Author - Magdalena Olak

2.2.2 Integration Testing

Objectives of integration testing

Integration testing focuses on interactions between components or systems. Objectives of integration testing include:

  • Reducing risk

  • Verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified

  • Building confidence in the quality of the interfaces

  • Finding defects (which may be in the interfaces themselves or within the components or systems)

  • Preventing defects from escaping to higher test levels

As with component testing, in some cases automated integration regression tests provide confidence that changes have not broken existing interfaces, components, or systems.
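
A minimal sketch of what such an automated integration regression test can look like, written here in Python with pytest (the `InvoiceService` and `TaxCalculator` components are invented for illustration, not taken from the syllabus). The test wires the real components together and checks their combined behavior, so a change that breaks the interface contract between them makes the test fail:

```python
# Invented components used only to illustrate an integration regression test.

class TaxCalculator:
    """Component B: computes the tax for a net amount."""

    def tax_for(self, net_amount: float) -> float:
        return round(net_amount * 0.23, 2)


class InvoiceService:
    """Component A: builds invoice totals by calling TaxCalculator."""

    def __init__(self, calculator: TaxCalculator):
        self.calculator = calculator

    def total(self, net_amount: float) -> float:
        return net_amount + self.calculator.tax_for(net_amount)


def test_invoice_total_across_the_component_interface():
    # Exercises the pair of real components together, not each part in
    # isolation; a breaking change to the TaxCalculator interface or to
    # the data passed across it would make this regression test fail.
    assert InvoiceService(TaxCalculator()).total(100.00) == 123.00
```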

There are two different levels of integration testing described in this syllabus, which may be carried out on test objects of varying size as follows:

  • Component integration testing focuses on the interactions and interfaces between integrated components. Component integration testing is performed after component testing, and is generally automated. In iterative and incremental development, component integration tests are usually part of the continuous integration process.

  • System integration testing focuses on the interactions and interfaces between systems, packages, and microservices. System integration testing can also cover interactions with, and interfaces provided by, external organizations (e.g., web services). In this case, the developing organization does not control the external interfaces, which can create various challenges for testing (e.g., ensuring that test-blocking defects in the external organization’s code are resolved, arranging for test environments, etc.). System integration testing may be done after system testing or in parallel with ongoing system test activities (in both sequential development and iterative and incremental development).
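
One common way to cope with an external interface that the developing organization does not control is to run some of the system integration tests against a local stub that imitates the external service. The sketch below is a hypothetical Python example (the `CurrencyClient`, the `/rate` endpoint, and its fixed response are all invented), not a technique prescribed by the syllabus:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class StubRateHandler(BaseHTTPRequestHandler):
    """Stands in for the external organization's web service."""

    def do_GET(self):
        body = json.dumps({"rate": 4.32}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


class CurrencyClient:
    """Our side of the integration: calls the (here stubbed) service."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def rate(self) -> float:
        with urlopen(f"{self.base_url}/rate") as response:
            return json.load(response)["rate"]


def test_currency_client_against_stubbed_external_service():
    server = HTTPServer(("127.0.0.1", 0), StubRateHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        client = CurrencyClient(f"http://127.0.0.1:{server.server_port}")
        # Checks the interface contract, not the external system's logic.
        assert client.rate() == 4.32
    finally:
        server.shutdown()
```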

Test basis

Examples of work products that can be used as a test basis for integration testing include:

  • Software and system design

  • Sequence diagrams

  • Interface and communication protocol specifications

  • Use cases

  • Architecture at component or system level

  • Workflows

  • External interface definitions

Test objects

Typical test objects for integration testing include:

  • Subsystems

  • Databases

  • Infrastructure

  • Interfaces

  • APIs

  • Microservices

Typical defects and failures

Examples of typical defects and failures for component integration testing include:

  • Incorrect data, missing data, or incorrect data encoding

  • Incorrect sequencing or timing of interface calls

  • Interface mismatch

  • Failures in communication between components

  • Unhandled or improperly handled communication failures between components

  • Incorrect assumptions about the meaning, units, or boundaries of the data being passed between components
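
To make the last point concrete, the invented Python sketch below pins down a units contract across a component interface; the test would fail if the producing component silently switched from milliseconds to seconds while the consumer kept converting:

```python
# Invented components illustrating a units contract across an interface.

class SensorReader:
    """Producer: reports its reading interval, by contract in milliseconds."""

    def interval_ms(self) -> int:
        return 1500  # 1.5 seconds, expressed in ms as the contract requires


class Scheduler:
    """Consumer: converts the interval it receives from ms to seconds."""

    def __init__(self, reader: SensorReader):
        self.reader = reader

    def delay_seconds(self) -> float:
        return self.reader.interval_ms() / 1000.0


def test_interval_units_agree_across_the_interface():
    # Would fail if SensorReader started returning seconds (1.5) while
    # Scheduler kept dividing by 1000: the classic units mismatch.
    assert Scheduler(SensorReader()).delay_seconds() == 1.5
```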

Examples of typical defects and failures for system integration testing include:

  • Inconsistent message structures between systems (a check for this is sketched after this list)

  • Incorrect data, missing data, or incorrect data encoding

  • Interface mismatch

  • Failures in communication between systems

  • Unhandled or improperly handled communication failures between systems

  • Incorrect assumptions about the meaning, units, or boundaries of the data being passed between systems

  • Failure to comply with mandatory security regulations
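
The first of these, inconsistent message structures, can be checked mechanically. A minimal sketch (the message builder and the field names are invented for illustration):

```python
import json

# The structure the receiving system expects (assumed for this example).
REQUIRED_FIELDS = {"order_id", "currency", "amount"}


def build_outgoing_message(order_id: str, amount: float) -> str:
    """Sender side: serializes an order event for the downstream system."""
    return json.dumps({"order_id": order_id, "currency": "EUR", "amount": amount})


def test_outgoing_message_matches_receiver_structure():
    message = json.loads(build_outgoing_message("A-17", 12.50))
    # Fails if the sender renames or drops a field the receiver relies on.
    assert REQUIRED_FIELDS.issubset(message.keys())
```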

Specific approaches and responsibilities

Component integration tests and system integration tests should concentrate on the integration itself. For example, if integrating module A with module B, tests should focus on the communication between the modules, not the functionality of the individual modules, as that should have been covered during component testing. If integrating system X with system Y, tests should focus on the communication between the systems, not the functionality of the individual systems, as that should have been covered during system testing. Functional, non-functional, and structural test types are applicable.
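
As an illustration of testing the hand-over rather than the modules themselves, the invented Python sketch below checks only that what module A produces is structurally acceptable to module B:

```python
import csv
import io

# Invented modules: the test targets the interface between them.

class ReportBuilder:
    """Module A: produces report rows to hand over to the writer."""

    def rows(self):
        return [{"name": "Zoë", "score": 91}]


class CsvWriter:
    """Module B: serializes rows; expects exactly the declared fields."""

    FIELDS = ["name", "score"]

    def write(self, rows) -> str:
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=self.FIELDS)
        writer.writeheader()
        writer.writerows(rows)
        return out.getvalue()


def test_builder_output_matches_writer_interface():
    # Focuses on the communication: every key ReportBuilder emits must be
    # a field CsvWriter declares, otherwise DictWriter raises ValueError.
    text = CsvWriter().write(ReportBuilder().rows())
    assert text.splitlines() == ["name,score", "Zoë,91"]
```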

Component integration testing is often the responsibility of developers. System integration testing is generally the responsibility of testers. Ideally, testers performing system integration testing should understand the system architecture, and should have influenced integration planning.

If integration tests and the integration strategy are planned before components or systems are built, those components or systems can be built in the order required for most efficient testing. Systematic integration strategies may be based on the system architecture (e.g., top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to simplify defect isolation and detect defects early, integration should normally be incremental (i.e., a small number of additional components or systems at a time) rather than “big bang” (i.e., integrating all components or systems in one single step). A risk analysis of the most complex interfaces can help to focus the integration testing.
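
For example, in a top-down strategy a higher-level component can be integrated and tested before a lower-level one exists by temporarily substituting a stub. The Python sketch below is illustrative only (the names are invented); the same test runs unchanged once the real component replaces the stub:

```python
class PaymentGatewayStub:
    """Temporary stand-in for the not-yet-integrated payment component."""

    def charge(self, amount: float) -> bool:
        return True  # always succeeds; the real gateway is wired in later


class Checkout:
    """Higher-level component, integrated first in a top-down strategy."""

    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount: float) -> str:
        return "confirmed" if self.gateway.charge(amount) else "rejected"


def test_checkout_talks_to_the_gateway_interface():
    # Exercises Checkout against the gateway interface via the stub.
    assert Checkout(PaymentGatewayStub()).place_order(49.99) == "confirmed"
```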

The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting. This is one reason that continuous integration, where software is integrated on a component-by-component basis (i.e., functional integration), has become common practice. Such continuous integration often includes automated regression testing, ideally at multiple test levels.
