ISTQB Foundation Level
  • ISTQB CTFL Syllabus 2018 V3.1
  • Author - Magdalena Olak

1.4.2 Test Activities and Tasks

A test process consists of the following main groups of activities:

  • Test planning

  • Test monitoring and control

  • Test analysis

  • Test design

  • Test implementation

  • Test execution

  • Test completion

Each main group of activities is composed of constituent activities, which are described in the subsections below. Each constituent activity consists of multiple individual tasks, which vary from one project or release to another.

Further, although many of these main activity groups may appear logically sequential, they are often implemented iteratively. For example, Agile development involves small iterations of software design, build, and test that happen on a continuous basis, supported by ongoing planning, so test activities also happen on an iterative, continuous basis within this software development approach. Even in sequential software development, the stepped logical sequence of main activity groups will involve overlap, combination, concurrency, or omission of some activities, so tailoring these main groups of activities within the context of the system and the project is usually required.

Test planning

Test planning involves activities that define the objectives of testing and the approach for meeting test objectives within constraints imposed by the context (e.g., specifying suitable test techniques and tasks, and formulating a test schedule for meeting a deadline). Test plans may be revisited based on feedback from monitoring and control activities. Test planning is further explained in section 5.2.

Test monitoring and control

Test monitoring involves the on-going comparison of actual progress against planned progress using any test monitoring metrics defined in the test plan. Test control involves taking actions necessary to meet the objectives of the test plan (which may be updated over time). Test monitoring and control are supported by the evaluation of exit criteria, which are referred to as the definition of done in some software development lifecycle models (see ISTQB-CTFL-AT). For example, the evaluation of exit criteria for test execution as part of a given test level may include:

  • Checking test results and logs against specified coverage criteria

  • Assessing the level of component or system quality based on test results and logs

  • Determining if more tests are needed (e.g., if tests originally intended to achieve a certain level of product risk coverage failed to do so, requiring additional tests to be written and executed)
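
As a concrete illustration of evaluating exit criteria, the sketch below (in Python) checks a hypothetical set of test results against two example criteria: a coverage target and a cap on unresolved critical failures. The thresholds, result format, and the criteria themselves are assumptions made for the example, not part of the syllabus.

```python
# Minimal sketch: evaluating two illustrative exit criteria for a test level.
# The criteria, thresholds, and result data are hypothetical examples.

def exit_criteria_met(results, coverage, min_coverage=0.80, max_open_critical=0):
    """Return (met, reasons) for two example exit criteria:
    achieved coverage and number of unresolved critical failures."""
    reasons = []
    if coverage < min_coverage:
        reasons.append(f"coverage {coverage:.0%} is below the {min_coverage:.0%} target")
    open_critical = sum(
        1 for r in results if r["status"] == "fail" and r["severity"] == "critical"
    )
    if open_critical > max_open_critical:
        reasons.append(f"{open_critical} critical failure(s) still unresolved")
    return not reasons, reasons

results = [
    {"id": "TC-01", "status": "pass", "severity": None},
    {"id": "TC-02", "status": "fail", "severity": "critical"},
]
met, reasons = exit_criteria_met(results, coverage=0.75)
print("Exit criteria met:", met, reasons)   # -> False, with both reasons listed
```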

Test progress against the plan is communicated to stakeholders in test progress reports, including deviations from the plan and information to support any decision to stop testing.

Test monitoring and control are further explained in section 5.3.

Test analysis

During test analysis, the test basis is analyzed to identify testable features and define associated test conditions. In other words, test analysis determines “what to test” in terms of measurable coverage criteria.

Test analysis includes the following major activities:

  • Analyzing the test basis appropriate to the test level being considered, for example:

    • Requirement specifications, such as business requirements, functional requirements, system requirements, user stories, epics, use cases, or similar work products that specify desired functional and non-functional component or system behavior

    • Design and implementation information, such as system or software architecture diagrams or documents, design specifications, call flow graphs, modelling diagrams (e.g., UML or entity-relationship diagrams), interface specifications, or similar work products that specify component or system structure

    • The implementation of the component or system itself, including code, database metadata and queries, and interfaces

    • Risk analysis reports, which may consider functional, non-functional, and structural aspects of the component or system

  • Evaluating the test basis and test items to identify defects of various types, such as:

    • Ambiguities

    • Omissions

    • Inconsistencies

    • Inaccuracies

    • Contradictions

    • Superfluous statements

  • Identifying features and sets of features to be tested

  • Defining and prioritizing test conditions for each feature based on analysis of the test basis, and considering functional, non-functional, and structural characteristics, other business and technical factors, and levels of risks

  • Capturing bi-directional traceability between each element of the test basis and the associated test conditions (see sections 1.4.3 and 1.4.4)
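
As a rough sketch of what capturing bi-directional traceability can look like in practice, the example below links hypothetical test basis elements (user story IDs) to test conditions so that each side can be queried from the other; all identifiers are invented for illustration.

```python
# Minimal sketch: bi-directional traceability between test basis elements
# (here, hypothetical user story IDs) and derived test conditions.
from collections import defaultdict

basis_to_conditions = defaultdict(set)   # user story -> test conditions
conditions_to_basis = defaultdict(set)   # test condition -> user stories

def trace(story_id, condition_id):
    """Record the link in both directions so either side can be queried."""
    basis_to_conditions[story_id].add(condition_id)
    conditions_to_basis[condition_id].add(story_id)

trace("US-101", "COND-1")   # e.g., "login succeeds with valid credentials"
trace("US-101", "COND-2")   # e.g., "login rejected after 3 failed attempts"

# Forward: which conditions cover US-101? Backward: where did COND-2 come from?
print(sorted(basis_to_conditions["US-101"]))    # ['COND-1', 'COND-2']
print(sorted(conditions_to_basis["COND-2"]))    # ['US-101']

# Gaps in coverage can also be detected: stories with no test conditions.
all_stories = {"US-101", "US-102"}
print(all_stories - set(basis_to_conditions))   # {'US-102'} is not covered yet
```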

The application of black-box, white-box, and experience-based test techniques can be useful in the process of test analysis (see chapter 4) to reduce the likelihood of omitting important test conditions and to define more precise and accurate test conditions.

In some cases, test analysis produces test conditions which are to be used as test objectives in test charters. Test charters are typical work products in some types of experience-based testing (see section 4.4.2). When these test objectives are traceable to the test basis, coverage achieved during such experience-based testing can be measured.

The identification of defects during test analysis is an important potential benefit, especially where no other review process is being used and/or the test process is closely connected with the review process. Such test analysis activities not only verify whether the requirements are consistent, properly expressed, and complete, but also validate whether the requirements properly capture customer, user, and other stakeholder needs. For example, techniques such as behavior driven development (BDD) and acceptance test driven development (ATDD) involve generating test conditions and test cases from user stories and acceptance criteria prior to coding. These techniques also verify, validate, and detect defects in the user stories and acceptance criteria (see ISTQB-CTFL-AT).

Test design

During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other testware. So, test analysis answers the question “what to test?” while test design answers the question “how to test?”

Test design includes the following major activities:

  • Designing and prioritizing test cases and sets of test cases

  • Identifying necessary test data to support test conditions and test cases

  • Designing the test environment and identifying any required infrastructure and tools

  • Capturing bi-directional traceability between the test basis, test conditions, and test cases (see section 1.4.4)

The elaboration of test conditions into test cases and sets of test cases during test design often involves using test techniques (see chapter 4).
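
To show how a test technique elaborates a test condition into concrete test cases, here is a minimal sketch applying boundary value analysis (see section 4.2.2) to an invented condition, "accept age only if it is between 18 and 65"; the field, bounds, and expected results are assumptions for the example.

```python
# Minimal sketch: deriving high-level test cases from one test condition
# ("accept age only if 18 <= age <= 65") using boundary value analysis.

def boundary_values(low, high):
    """Two-value BVA: each boundary plus its closest invalid neighbor."""
    return [low - 1, low, high, high + 1]

test_cases = [
    {"input": age, "expected": "accepted" if 18 <= age <= 65 else "rejected"}
    for age in boundary_values(18, 65)
]
for tc in test_cases:
    print(tc)   # inputs 17, 18, 65, 66 with the corresponding expected result
```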

As with test analysis, test design may also result in the identification of similar types of defects in the test basis. Also, as with test analysis, the identification of defects during test design is an important potential benefit.

Test implementation

During test implementation, the testware necessary for test execution is created and/or completed, including sequencing the test cases into test procedures. So, test design answers the question “how to test?” while test implementation answers the question “do we now have everything in place to run the tests?”

Test implementation includes the following major activities:

  • Developing and prioritizing test procedures, and, potentially, creating automated test scripts

  • Creating test suites from the test procedures and (if any) automated test scripts

  • Arranging the test suites within a test execution schedule in a way that results in efficient test execution (see section 5.2.4)

  • Building the test environment (including, potentially, test harnesses, service virtualization, simulators, and other infrastructure items) and verifying that everything needed has been set up correctly

  • Preparing test data and ensuring it is properly loaded in the test environment

  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites (see section 1.4.4)
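
As a minimal sketch of test implementation, the example below turns two test procedures into an automated suite using pytest, with a fixture standing in for preparing and loading test data; the test object (a toy login function) and all names are assumptions for illustration.

```python
# Minimal sketch: an automated test suite implementing two test procedures.
# The test object (login) and the test data are invented examples.
import pytest

def login(username, password):          # stand-in for the real test object
    return username == "alice" and password == "s3cret"

@pytest.fixture
def test_data():
    """Stands in for preparing and loading test data into the environment."""
    return {"valid": ("alice", "s3cret"), "invalid": ("alice", "wrong")}

def test_login_accepts_valid_credentials(test_data):   # procedure step 1
    assert login(*test_data["valid"])

def test_login_rejects_invalid_password(test_data):    # procedure step 2
    assert not login(*test_data["invalid"])
```

Running `pytest -v` would execute the suite; arranging tests into a test execution schedule would then be a matter of the runner's collection, selection, and ordering options.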

Test design and test implementation tasks are often combined.

In exploratory testing and other types of experience-based testing, test design and implementation may occur, and may be documented, as part of test execution. Exploratory testing may be based on test charters (produced as part of test analysis), and exploratory tests are executed immediately as they are designed and implemented (see section 4.4.2).

Test execution

During test execution, test suites are run in accordance with the test execution schedule.

Test execution includes the following major activities:

  • Recording the IDs and versions of the test item(s) or test object, test tool(s), and testware

  • Executing tests either manually or by using test execution tools

  • Comparing actual results with expected results

  • Analyzing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but false positives may also occur; see section 1.2.3)

  • Reporting defects based on the failures observed (see section 5.6)

  • Logging the outcome of test execution (e.g., pass, fail, blocked)

  • Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing (e.g., execution of a corrected test, confirmation testing, and/or regression testing)

  • Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test results
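
The sketch below illustrates the core of test execution: run each test case, compare actual results with expected results, and log the outcome (pass, fail, or blocked). The test object (a toy add function), the test case format, and the mapping of exceptions to "blocked" are simplifying assumptions for the example.

```python
# Minimal sketch: executing test cases, comparing actual vs. expected
# results, and logging the outcome of each execution.

def add(a, b):                          # stand-in test object
    return a + b

test_cases = [
    {"id": "TC-1", "inputs": (2, 3), "expected": 5},
    {"id": "TC-2", "inputs": (-1, 1), "expected": 0},
]

log = []
for tc in test_cases:
    try:
        actual = add(*tc["inputs"])
        outcome = "pass" if actual == tc["expected"] else "fail"
    except Exception:                   # e.g., environment problems -> blocked
        actual, outcome = None, "blocked"
    log.append({"id": tc["id"], "actual": actual, "outcome": outcome})

for entry in log:
    print(entry)   # e.g., {'id': 'TC-1', 'actual': 5, 'outcome': 'pass'}
```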

Test completion

Test completion activities collect data from completed test activities to consolidate experience, testware, and any other relevant information. Test completion activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), an Agile project iteration is finished, a test level is completed, or a maintenance release has been completed.

Test completion includes the following major activities:

  • Checking whether all defect reports are closed, entering change requests or product backlog items for any defects that remain unresolved at the end of test execution

  • Creating a test summary report to be communicated to stakeholders

  • Finalizing and archiving the test environment, the test data, the test infrastructure, and other testware for later reuse

  • Handing over the testware to the maintenance teams, other project teams, and/or other stakeholders who could benefit from its use

  • Analyzing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects

  • Using the information gathered to improve test process maturity
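
As a rough sketch of consolidating execution results for a test summary report, the example below aggregates an execution log (in the same invented format as the execution sketch above) into counts and percentages per outcome.

```python
# Minimal sketch: aggregating an execution log into a test summary report.
from collections import Counter

log = [
    {"id": "TC-1", "outcome": "pass"},
    {"id": "TC-2", "outcome": "fail"},
    {"id": "TC-3", "outcome": "blocked"},
]

summary = Counter(entry["outcome"] for entry in log)
total = sum(summary.values())
print(f"Executed: {total}")
for outcome in ("pass", "fail", "blocked"):
    count = summary.get(outcome, 0)
    print(f"  {outcome}: {count} ({count / total:.0%})")
```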
