PREPARATION ISTQB FL
by christoph ramel, 11 November 2015


Transcript of PREPARATION ISTQB FL

preparation for
ISTQB foundation level
certification exam

static testing
test design techniques

tool support for testing

test management

test organization
testing throughout the sw lifecycle

software development models
fundamentals of testing

why is testing necessary?
static techniques and the test process

test development process
types of test tools
what is software testing?
7 testing principles
fundamental test process
psychology of testing
introduction
IT that makes your life easier
sources
Certified Tester Foundation Level Syllabus - ISTQB Version 2011
Certified Tester Advanced Level Syllabus - ISTQB Version 2011
ISTQB Glossary of Terms used in Software Testing V2.0
IEEE 829 – Standard for Software Test Documentation
ISTQB founded in 2002 – www.istqb.org
240,000 certifications in more than 70 countries
3 levels of certification
Foundation Level
Advanced Level
Expert Level
what is ISTQB?
who is it for?
Aimed at anyone involved in Software Testing
Testers, Test Analysts, Test Engineers, Test Consultants, Test Managers, ...
Also for other roles who would like a basic understanding of software testing
Holders of this certification will be able to go on to a higher-level software testing qualification
exam
Based on syllabus
Answers may require using more than one section of the syllabus
Multiple choice questions
Exam taken as part of an accredited training course or taken independently
40 questions
40 points available - each correctly answered question is worth one point
Time allowed: 60 (+15) minutes
A score of at least 65% (26 points or more) is required to pass
No minus points for incorrect answers
what is it?
who is it for?
the exam
during exam
No material except English dictionary
Provider (tutor) not present
All rules are explained in detail (in local language) by invigilator before examination
Answer sheets gathered by invigilator
Questions cannot be kept by student
question type A
What does a tester do during "Static testing"?


A. Reviews requirements and compares them with the design.
B. Runs the tests on the exact same setup each time.
C. Executes tests to check that the hardware has been set up correctly.
D. Runs the same tests multiple times, and checks that the results are statistically meaningful.
correct answer: A
question type ROMAN
When should regression testing normally be performed?

i. Every week
ii. After the software has changed
iii. On the same day each year
iv. When the environment has changed
v. Before the code has been written

A. ii & iv are true, i, iii & v are false
B. i & ii are true, iii, iv & v are false
C. ii, iii & iv are true, i & v are false
D. ii is true, i, iii, iv & v are false.
correct answer: A
question type PICK-N
Given the following list of test design techniques, which TWO would be categorized as white box?

A Boundary value analysis
B Decision table testing
C Decision testing
D State transition testing
E Statement testing
F Equivalence partitioning
correct answer: C & E
based on ISTQB syllabus 2011
test levels
test types
maintenance testing
review process
static analysis by tools
categories of test design techniques
black-box techniques
white-box techniques
experience-based techniques
choosing test techniques
test planning & estimation
test progress monitoring & control
configuration management
risk & testing
incident management
potential benefits
& risks

introducing a tool
into an organization

software system context
Software systems seem deceptively easy to build and change:
no production errors
no physical limitations
easy re-design
high functionality increase
Software systems are prone to human error
Most people have experienced software that did not work properly
cost of defects
Can be very high

Ariane 5, 1996: arithmetic overflow
Pentium, 1994: incorrect division algorithm
Patriot-Scud, 1991: rounding error
Mars Climate Orbiter, 1999: pounds vs. newtons
In Poland: elections in 2002
causes of software defects
human errors
human nature
time pressure
high complexity of source code
high complexity of infrastructure
changing technologies
requirements changes
interactions with many other systems
Environmental conditions (radiation, magnetism, electronic fields, pollution)
error/defect/failure
 A human being can make an error (mistake)

 ... which produces a defect (fault, bug) in the program code or in a document

 ... which can lead to a failure if the defect in the code is executed

Defects in software, systems or documents may result in failures, but not all defects do so!
why is it called a bug?
defect
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition.
A defect, if encountered during execution, may cause a failure of the component or system.
bug
see defect
TERMS
role of software testing
Decreases the risk of failure during software operation
Increases software quality
Mandatory to fulfill contractual or legal requirements

Testing what has been changed +

Regression testing to the parts of the system which have not been changed

Scope of maintenance testing depends on:

Risk of change
Size of the system
Size of change

Impact analysis – determining how the existing system may be affected by changes

Used to assess how much regression testing to do
Maintenance testing

Software, once deployed, is in service for years or decades

System, its configuration data, environment – corrected, changed or extended

Planning of releases in advance is crucial

Planned releases and hot fixes

Done on an existing, operational system

Longest and most costly phase in system life-cycle

Triggered by modifications, migration or retirement
Maintenance testing

Testing of software structure/architecture

Structural knowledge used to design test cases

With access to the code (white-box testing, glass-box testing, grey-box testing)

To measure thoroughness of testing through assessment of coverage of a type of structure

Coverage:
Extent that a structure has been exercised by a test suite
Expressed by percentage of the items being covered

If coverage < 100% -> more tests can be designed
Structural testing
Often the responsibility of the customers or users of the system (other stakeholders might be involved)
The goal is to establish confidence in the system, its parts, or its non-functional characteristics
Finding defects is not the main focus
To assess system’s readiness for deployment and use
Not necessarily final level of testing (e.g. Large-scale system integration tests)
Acceptance Testing (2/4)
Testing behavior of a whole system/product, end-to-end functionality

Testing scope clearly addressed in the Master and in Level Test plan for that test level

Test environment corresponds to the final target or production environment as much as possible

Functional and non-functional requirements of the system

Often carried out by independent test team

Often incomplete or undocumented requirements

First specification-based techniques, then structure-based techniques
System Testing (2/3)
Test basis:
System and software requirement specification
Use cases
Functional specification
Business processes
Risk analysis reports
Models of system behavior
Other high-level text description of system behavior, interactions with operating system and system resources

Test object:
System, user and operation manuals
System configuration and configuration data
System Testing (1/3)
Integration strategies:
Top-down
Bottom-up
Big bang
Incremental
Mixed – top-down incremental
Strategy based on system architecture, functional tasks, transaction processing sequences, and other aspects of the system or components
Incremental rather than big-bang
Integration strategies
Test basis:
Software and system design
Architecture
Workflows
Use cases

Typical test objects
Subsystems
Database implementation
Infrastructure
Interfaces
System configuration and configuration data
Integration testing (1/2)
Component testing = unit testing = module testing

Searches for defects in software modules, programs, classes that are separately testable

Can be done in isolation from the rest of the system

Stubs, drivers, simulators may be used (a sketch follows below)

Functional as well as non-functional tests (e.g. searching for memory leaks, robustness testing)

Structural testing (e.g. Decision coverage)
Component testing (2/3)
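
To make the stub-and-driver idea from the slide above concrete, here is a minimal sketch in Python; the OrderService component, GatewayStub and test names are illustrative, not from the syllabus:

# Component under test: depends on a payment gateway that may not exist yet.
class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway            # dependency injected, so it can be stubbed

    def checkout(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

# Stub: replaces the real gateway with a fixed, predictable answer.
class GatewayStub:
    def charge(self, amount):
        return "OK"

# Driver: exercises the component in isolation from the rest of the system.
def test_checkout_charges_gateway():
    service = OrderService(GatewayStub())
    assert service.checkout(100) == "OK"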
Testing does not exist in isolation
Test activities related to other software development process activities:
Quality assurance (e.g. Quality planning)
Project schedule
Requirements, design, implementation
Integration and its strategy
Configuration management, change management
Different software development models require different approaches to testing
Software development models

At any or all test levels

For any or all test types

Depending on changes


Challenges:

Specification out of date or missing

No testers with domain knowledge
Maintenance testing

Testing of data migration into new system

Archiving (if long data-retention required)
Retirement

E.g. from one platform to another

E.g. data migration from other system to maintained system

Operational tests of a new environment + tests of the changed software

Migration testing = conversion testing
Migration


Planned enhancements

Corrective and emergency changes

Changes of environment (e.g. operating system or database upgrades, upgrades of Commercial-Off-The-Shelf software)
Modifications

Tests must be repeatable to be used in confirmation and regression testing

Regression tests:

Performed on all test levels
Functional, non-functional and structural testing
Test suites run many times and evolve slowly
Strong candidate for automation (a sketch follows below)
Testing related to changes
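
Because regression suites are run many times and evolve slowly, they pay back automation quickly. A minimal sketch of an automated regression check in Python with pytest; the discount function is invented for illustration:

import pytest

def discount(units):
    # Function under regression test (illustrative).
    return 0.02 if units >= 50 else 0.0

# The parametrized cases are re-run unchanged after every modification.
@pytest.mark.parametrize("units, expected", [
    (49, 0.0),
    (50, 0.02),
    (100, 0.02),
])
def test_discount_regression(units, expected):
    assert discount(units) == expected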

Confirmation testing

After defect is detected and fixed
Software is re-tested to confirm that the original defect has been removed

Regression testing

Repeated testing of an already tested program
After modification, to discover any defects introduced or uncovered as a result of the change
Defects in software being tested or another software component
Performed when software or its environment is changed
The extent is based on risk of not finding defects
Testing related to changes

Best used after specification based techniques

Performed at all test levels

Most often in component testing and component integration testing (supported by tools)

May be based on architecture of the system (calling hierarchy)

Also applied to system integration testing and acceptance testing (e.g. to business models, menu structures) – black-box coverage
Structural testing
Includes:
Performance testing – checks response times and transaction times, also under load; requires load generation and time measurement
Load testing – check if system works correctly with required load
Stress testing – load higher than maximum required
Usability testing – ease of learning, pleasure, ergonomics, ease of adaptation
Maintainability testing – ease of changing
Reliability testing – availability
Portability testing – ease of porting to different platforms
Non-functional testing (2/2)

Testing of non-functional software characteristics

Testing "how" the system works

May be performed on all test levels

Tests required to measure characteristics of a system and software that can be quantified on a varying scale (e.g. response times in performance testing)

External behavior of the software (uses black-box test design techniques)

May be based on quality model e.g. ISO 9126
Non-functional testing (1/2)

Specification based techniques to derive test conditions and test cases
Natural languages in specifications
Coverage measurement imprecise

Model based techniques also used
Coverage measurements possible

Special types of functional testing:

Security testing e.g. detection of threats

Interoperability testing – evaluates the capability of the software product to interact with one or more specified components or systems
Testing of function (Functional Testing) (2/2)

Testing of "what" the system/component does
Based on functions and features which the system must provide and their interoperability with specific systems

External behavior of the system (black-box testing)

May be performed on all test levels

Test basis:
Requirements specification
Use cases
Functional specification
Experience
Testing of function (Functional Testing) (1/2)
Group of test activities based on a specific reason or target for testing

A test type is focused on a particular test objective:

A function of a system
Non-functional quality characteristic
Structure or architecture of a system
Change related


For each test type, a model of the software may be developed and used
Test types
Contract and regulation acceptance
Criteria should be defined in contract or in regulations (legal or safety)

For market software (COTS):
Alpha testing: at the developing organization's site
Beta testing: by people at their own locations
Done by potential customers (not development team)
Acceptance Testing (4/4)
User acceptance testing
Verifies fitness for use
Typically by end-users
According to business rules and use cases
Various user classes and other stakeholders
Operational acceptance testing
Performed by system administrators
Backup and restore
User administration
Acceptance Testing (3/4)
Test basis:
User requirements
System requirements
Use cases
Business processes
Risk analysis reports

Typical test objects:
Business processes on fully integrated system
Operational and maintenance processes
User procedures
Forms
Reports
Configuration data
Acceptance Testing (1/4)

Only tests that could not be performed on lower levels

Real-life data

Requirements-based and model-based techniques

Simulation of integration with other systems

Good candidate for test automation
System Testing (3/3)
Tests interfaces between components, interactions with other parts of the system (OS, filesystem, hardware) and interfaces between systems

There may be more than one level of integration testing:

Component integration testing – interactions between software components – done after component testing

System integration testing – interactions between systems or between hardware and software – done after system testing

Functional tests as well as non-functional tests (e.g. Performance)

Testers concentrate solely on integration itself

Testers understand the architecture and influence integration planning
Integration testing (2/2)
Access to the code being tested

Support of development environment (unit test framework, debugging tools)

Usually involves the programmer who wrote the code

Defects fixed as soon as they are found (no records in defect tracking tool)

One of approaches – test driven development TDD (test-first approach):

Prepare and automate test cases before coding

Highly iterative

Development of test cases -> coding -> execute component tests -> fix defects until all tests pass (sketched below)
Component testing (3/3)
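
A minimal sketch of the test-first cycle in Python; the leap_year example is illustrative. The test is written and run first, fails because the function does not exist yet, and then just enough code is written to make it pass.

# Step 1: write the test before the code - running it now fails.
def test_leap_year():
    assert leap_year(2000) is True     # divisible by 400
    assert leap_year(1900) is False    # divisible by 100 but not by 400
    assert leap_year(2024) is True     # divisible by 4
    assert leap_year(2023) is False

# Step 2: write just enough code to make the test pass.
def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: re-run the tests, refactor, and repeat the cycle.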
Test Basis:
Component requirements
Detailed design
Code
Data model


Test objects:
Components
Programs
Data conversion programs
Database modules

Two standards on component testing: IEEE 1008, BS 7925-2
Component testing (1/3)
Component testing

Integration testing (Integration in the small)

System testing

(optionally) System integration testing (Integration in the large)

Acceptance testing
Test Levels
For each test level the following can be identified:

Generic objectives

Work products for deriving test cases (test basis)

Test object (i.e. what is being tested)

Typical defects and failures to be found

Test harness requirements and tool support

Specific approaches and requirements
Test Levels
For every development activity there is a corresponding testing activity
Each test level has own objectives
Test analysis and design begin during corresponding development activity
Testers should be involved early in reviewing documents
For every model
Spiral model
Iterative-incremental development model

Preparation for ISTQB FL Certification – Part 2
RUP model
Iterative-incremental development model
Establishing requirements, designing, building and testing in a series of short development cycles

Examples:

Spiral
Prototyping
Rapid Application Development
Rational Unified Process
Agile development models

Regression testing increasingly important, less risky integration, more complex organization
Iterative-incremental development model
2.4 Maintenance testing
2.3 Test types
2.2 Test levels
2.1 Software development models
Part 2 – Testing throughout SW lifecycle
6. Tool support for testing
5. Test management
4. Test design techniques
3. Static techniques
2. Testing throughout the SW lifecycle
1. Fundamentals of testing
Introduction
Course Plan
[V-model diagram: Reqs. Specification <-> Acceptance Test, System Design <-> System Test, Architecture Design <-> Integration Test, Modules Design <-> Module Test, Coding at the point of the V]
Every development level has a corresponding test level
Still sequential, not parallel
V-model
[Waterfall diagram: Planning -> Analysis -> Design -> Implementation -> Testing -> Maintenance]
Sequential, not parallel
Tests occur very late in the software life-cycle after development is finished
Waterfall Model
Referencing a variable with an undefined value
Inconsistent interfaces between modules and components
Variables that are not used or improperly defined
Unreachable (dead) code
Missing and erroneous logic
Overly complicated constructs
Programming standards violations
Security vulnerabilities
Syntax violations
Typical defects found
Objective: find defects in software code or models
Without actually executing the code
Performed by tools
Can locate defects which are hard to find by dynamic testing
Finds defects rather than failures
Tool analyzes program code (control flow, data flow) as well as generated output such as HTML or XML
Static analysis
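
A few of the defect types listed above in one short Python fragment; a static analysis tool (for example pylint or flake8) reports all three findings without executing the code. The snippet is illustrative:

def total(prices):
    result = 0
    unused = 42                   # variable that is never used
    for p in prices:
        result += p
    return result
    print("done")                 # unreachable (dead) code after return

def average(prices):
    return total(prices) / count  # 'count' is referenced but never defined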

Technical review:
Documented, defined
Includes peers, technical experts with optional management participation
May be performed as peer review (w/o management)
Ideally led by trained moderator (not the author)
Pre-meeting preparation by reviewers
Types of reviews (4/6)
Follow-up
Checking that defects have been addressed
Gathering metrics
Checking on exit criteria



Using checklists for different perspectives can make reviews more efficient:
User
Maintainer
Tester
Typical Requirements checklist
Formal review (3/5)
Planning
Defining review criteria
Selecting the personnel
Allocating resources
Defining entry and exit criteria
Selecting which parts of the document to review
Checking entry criteria

Kick-off
Distributing documents
Explaining the objectives, process and documents to the participants
Formal review (1/5)
Can vary from informal to very formal

The formality depends on:
Agreed objective of the review
Maturity of the development process
Legal or regulatory requirements
Need for an audit trail
The way review is led depends on its objectives:
Find defects
Gain understanding
Educate team members
Discussion and decision by consensus
Review process
Any work product:
Requirements specifications
Design specifications
Code
Test plans
Test specifications
Test cases
Test scripts
User guides
Web pages
What to review?
Testing without executing the code
Manual examination (review) or automated analysis (static analysis)
Code or other project documentation
Static techniques sometimes not considered testing
Often omitted
Static testing
Tools typically used by developers
During component and integration testing
When checking code into configuration management tools
By designers during software modeling


Static analysis tools may produce a large number of warning messages – they need to be well managed


Compilers offer support for static testing (including calculation of metrics)
Static analysis usage
Early detection of defects prior to test execution
Early warning about suspicious aspects
Identification of defects not easily found by dynamic testing
Detecting inconsistencies and dependencies in software models (e.g. Links)
Improved maintainability of code or design
Prevention of defects if lessons are learned in development
Benefits of static analysis
Clear predefined objectives
The right people (for the objectives) are involved
Defects found are welcome and expressed objectively
People issues and psychological aspects are dealt with
Atmosphere of trust
Proper review techniques applied
Checklists or roles if appropriate
Training in review techniques
Management support for good review process
Emphasis on learning and process improvement
Reviews success factors
Inspection:
Led by trained moderator
Usually conducted as a peer examination
Defined roles
Includes metrics gathering
Formal process based on rules and checklists
Entry and exit criteria
Pre-meeting preparation
Inspection report incl. list of findings
Formal follow-up process
Optional reader
Main purpose: finding defects
Types of reviews (6/6)

Technical review continued:
Optional use of checklists
Preparation of review report
May vary from quite formal to very formal
Main purpose: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems, checking conformance with specifications, plans, regulations and standards
Types of reviews (5/6)
Walkthrough:
Meeting led by author
Scenarios, dry runs, peer group
Open-ended sessions:
optional pre-meeting preparation
optional preparation of review report
Optional scribe
May vary from quite informal to very formal
Main purposes: learning, gaining understanding, finding defects
Types of reviews (3/6)
Informal review:
No formal process
E.g. pair programming or a technical lead reviewing designs and code
Results may be documented
Varies in usefulness depending on reviewers
Main purpose: an inexpensive way to get some benefit
Types of reviews (2/6)
Informal review
Walkthrough
Technical review
Inspection
Types of reviews (1/6)
Roles and responsibilities:
Author:
Writer or person responsible for the document

Reviewers:
Individuals with specific technical or business knowledge
Chosen to represent different perspectives and roles

Scribe (recorder):
Documents the issues
Documents problems and open points
Formal review (5/5)
Roles and responsibilities:
Manager
Decides on execution of the review
Allocates time in project schedules
Determines if review objectives have been met

Moderator
Leads the review
Plans and runs the meeting
Does follow-up activities
May mediate between various points of view
Success factor for the review
Formal review (4/5)
Individual preparation
Preparing for the review meeting by reviewing the documents
Noting potential defects, questions and comments

Examination/evaluation/recording of results (review meeting)
Discussing or logging, with documented results or minutes
Noting defects, making recommendations, making decisions about defects
Examining/evaluating and recording issues

Rework
Fixing defects found
Recording updated status of defects
Formal review (2/5)
Have the same objectives as other test types – finding defects


Complementary to other test techniques

Find causes of failures (defects) rather than failures themselves

Typical defects easier to find:
Deviations from standards
Requirements defects
Design defects
Insufficient maintainability
Incorrect interface specifications
Review objectives
Early defect detection and correction
Can find defects unlikely to be found in dynamic testing (e.g. omissions)
Development productivity improvement
Reduced development timescales
Reduced testing cost and time
Lifetime cost reductions
Fewer defects
Improved communication
Benefits of reviews
Way of testing work products (also code)
Well before dynamic test execution
Defects detected early in the life-cycle are cheaper to remove
In most cases manually but tool support exists
The main activity – review work product and make comments about it
Review

Preparation for ISTQB FL Certification – Part 3
3.3 Static analysis
3.2 Reviews – process
3.1 Reviews – general information
Part 3 – Static testing
6. Tool support for testing
5. Test management
4. Test design techniques
3. Static techniques
2. Testing throughout the SW lifecycle
1. Fundamentals of testing
Introduction
Course Plan
Visible areas
Mostly used features
Features with a high number of defects
Areas with complex code
Areas frequently changed
Domain knowledge missing
New technology
Areas of interest
LCSAJ: Linear Code Sequence and Jump
Strongest coverage

Data flow testing
Test cases designed based on variable usage within the code
Other Structure-based Techniques
Modified Condition Decision Testing
Requires test cases to show that each Boolean operand (A, B and C) can independently affect the outcome of the decision.

If A or (B and C) then
do_something;
else
do_something_else;
end if;
Other Structure-based Techniques
If there is at least one statement in every branch:
# of test cases needed for Statement Coverage = # of test cases needed for branch coverage

If any branch has no statement:
# of test cases needed for Statement Coverage < # of test cases needed for branch coverage
Decision Testing vs. Statement testing
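
A two-line illustration of the second case, as a Python sketch; the function and values are invented for illustration:

def f(x):
    if x > 0:                    # decision with no statement on the False side
        print("positive")

# f(1) alone executes every statement -> 100% statement coverage,
# but covers only the True outcome -> 50% branch coverage.
# A second test case, e.g. f(-1), is needed for 100% branch coverage.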
Decision coverage (Branch coverage) – Number of all decision outcomes (e.g. True and False options of an IF statement) covered by test cases divided by the number of all possible decision outcomes in the code under test

Decision testing technique – derives test cases to execute specific decision outcomes

Decision coverage (Branch coverage) stronger than statement coverage (100% decision coverage guarantees 100% statement coverage, but not vice versa)
Decision Testing / Branch Testing
Assessment of the percentage of executable statements that have been exercised by a test case suite
Statement testing technique derives test cases to execute specific statements and increase statement coverage

Statement coverage – number of executable statements covered by test cases divided by the number of all executable statements in the code under test
Statement Testing
Based on identified structure of the software or the system
Applicable to all test levels:
Component level: the structure of a software component
Integration level: a call tree (diagram in which modules call other modules)
System level: menu structure, business process, web page structure

Test cases are derived using design, code or structure information
Structure-based (White-box) techniques (1/2)
Test cases derived from Use Cases
May be combined with other specification-based test techniques
Use cases describe "process flows" through a system based on its actual likely use
Test cases most useful in uncovering defects in real-life process flows

Very useful for designing acceptance tests with customer/user participation
May uncover integration defects caused by interaction between different components
Use Case Testing
State Transition Testing Example – Trainer state diagram
Coverage = at least one test per column = covering all combinations of triggering conditions
May be applied to all situations when the action of the software depends on several logical decisions
Strength: creates combinations of conditions that otherwise might not have been exercised during testing
Decision Table Testing (2/2)
Very good way to capture system requirements that contain logical conditions, and to document internal system design
May be used to record complex business rules that a system is to implement
Specification is analyzed, conditions and actions of the system are identified
Input conditions and actions often stated in Boolean values (true or false)
Decision table – triggering conditions + resulting actions for each combination of conditions
Each column – a unique combination of conditions and the resulting actions for that combination
Decision Table Testing (1/2)
Equivalence Partitioning
Boundary Value Analysis
Decision Table Testing
State Transition Testing
Use Case Testing
Black-box techniques
Knowledge and experience of people used to derive test cases
Testers, developers, users, etc.
Knowledge about:
The software, its usage and its environment
The likely defects and their distribution
Experience-based test design techniques
Also called specification-based techniques (or model-based techniques in case models are available)
Way to derive test conditions or test cases based on analysis of the test basis documentation
Functional and non-functional tests
For all test levels
Does not use any information regarding the internal structure of the component or system
Black-box test design techniques (1/2)
Test cases are:
Developed
Implemented
Prioritized
Organized in test procedure specification (IEEE 829-1998)

Test procedure:
Specifies the sequence of actions for the execution of a test
If run using a test execution tool, the sequence of actions is specified in a test script (automated test procedure)
Test implementation (1/2)
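
A test procedure turned into an automated test script might look like the following Python sketch; the app object and its methods are invented for illustration (real tools such as Selenium expose similar calls):

# Automated test procedure: the sequence of actions is fixed in the script.
def test_login_procedure(app):
    app.open("/login")                     # step 1: precondition - login page shown
    app.fill("username", "alice")          # step 2: enter credentials
    app.fill("password", "secret")
    app.click("Log in")                    # step 3: execute the action
    assert app.current_page() == "/home"   # step 4: compare actual with expected result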
Test cases and test data created and specified
Test case:
Set of input values
Execution preconditions
Expected results
Execution postconditions

Standard for Software Test Documentation (IEEE 829-1998) – content of test design specifications (containing test conditions) and test case specifications
Test design
From very informal (little or no documentation) to very formal
Level of formality depends on:
The context of testing
Maturity of testing and development process
Time constraints
Safety or regulatory requirements
People involved
Test development process

Preparation for ISTQB FL Certification – Part 4
Decision which test technique to choose depends on:
Time and budget
Development life-cycle
Use Case models
Previous experience with types of defects found
Some techniques more applicable to certain situations and test levels, others applicable to all test levels
Combination of test techniques used during test case creation to ensure adequate coverage of the object under test
Choosing test technique (2/2)
Decision which test technique to choose depends on:
Type of the system
Regulatory standards
Customer or contractual requirements
Level of risk, type of risk
Test objective
Documentation available
Knowledge of the testers
Choosing test technique (1/2)
Concurrent test design, test execution, test logging and learning
Based on test charter containing test objectives
Carried out within time-boxes
Very useful in case of:
Few or inadequate specifications
Severe time pressure
Very useful for:
Complementing more formal testing approaches
As a check on the test process, to ensure that the most critical defects are found
Exploratory testing
Commonly used experience-based technique
Testers anticipate defects based on experience

Structured approach: enumerate a list of possible defects and design tests that attack these defects – fault attack
Failure lists built based on experience, available defect and failure data, and common knowledge about why software fails
Error guessing
Tests derived from the tester's skill, intuition and experience with similar applications or technologies
Useful in identifying special tests not easily captured by formal techniques (esp. when applied after more formal approaches)
Varying degree of effectiveness depending on testers' experience
Experience-based Techniques
Mainly at component level but not only
Can also be applied to other test levels

E.g. Integration level: percentage of modules, components or classes that have been exercised by a test case suite expressed as module, component or class coverage.
Test coverage
Branch Condition Combination Testing
Every combination of Boolean operands must be covered

if A or (B and C) then
do_something;
else
do_something_else;
end if;

2^n – number of test cases to achieve coverage (n = number of operands)
100% of Branch Condition Combination Coverage ensures 100% of Branch Coverage
Other Structure-based Techniques
Branch Condition Testing
Every Boolean operand must have both TRUE and FALSE covered

if A or (B and C) then
do_something;
else
do_something_else;
end if;

TC1: A = FALSE, B=FALSE, C=FALSE
TC2: A = TRUE, B = TRUE, C = TRUE
TC1 + TC2 - 100% Branch Condition Coverage
Other Structure-based Techniques
The minimum number of test cases needed to achieve branch coverage is less than or equal to the McCabe index

Cyclomatic Complexity: CC = E – N + 2 (for a single connected control-flow graph)
E – number of edges
N – number of nodes
McCabe Cyclomatic Complexity Index
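
A quick sanity check, derived here rather than taken from the syllabus: for a single-entry, single-exit program whose decisions are all binary, CC also equals the number of decision points plus one. The coverage example later in this deck has two if statements, so

CC = 2 + 1 = 3

and the two test cases that reach 100% branch coverage there respect the bound (2 <= 3).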
Tests designed to ensure certain code coverage
Code coverage can be understood in different ways: statement coverage, decision coverage, etc.
Systematic approach exists for increasing coverage
If the coverage achieved is too low – expand the test suite
Increase until satisfactory coverage is achieved
Structure-based (White-box) techniques (2/2)
Use case describes interactions between actors which produce a result of value to a system user or the customer
System level (system functionality level) and abstract level (business process level) use cases
Use case contains:
Preconditions which need to be met
Postconditions which are observable result and final state of the system after use case has been completed
Mainstream scenario and alternative scenario
Use Case Testing

State coverage
0-switch coverage
1-switch coverage
2-switch coverage
Typical paths
Critical paths
State Transition Coverage
http://www.ertin.com/pr_state_diagrams.html
State Transition Testing Example – Trainer state diagram

Technique used in:
Embedded software industry
Technical automation
Modeling business object having specific states
Testing screen-dialogue flows
State Transition Testing (3/3)

Software may have different response depending on current state

Tests may be designed to:
Cover typical sequence of states
Cover every state
Exercise every transition
Exercise specific sequences of transitions
Test invalid transitions
State Transition Testing (2/3)
Model based test design technique
Software behavior modeled as state transition diagram:
States
Transitions between states
Input or events that trigger state changes
Actions which may result from those transitions

States are separate, identifiable and finite in number
Test cases are derived from the model – a state transition table is created
State Transition Testing (1/3)
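
A minimal sketch in Python of a state transition table and tests derived from it; the vending-machine states and events are invented for illustration:

# State transition table: (state, event) -> new state
TRANSITIONS = {
    ("idle", "insert_coin"): "paid",
    ("paid", "select_item"): "dispensing",
    ("paid", "refund"): "idle",
    ("dispensing", "item_taken"): "idle",
}

def next_state(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError("invalid transition: %s + %s" % (state, event))
    return TRANSITIONS[(state, event)]

# 0-switch coverage: exercise every single valid transition once.
def test_every_valid_transition():
    for (state, event), expected in TRANSITIONS.items():
        assert next_state(state, event) == expected

# Testing an invalid transition.
def test_invalid_transition_is_rejected():
    try:
        next_state("idle", "select_item")
        assert False, "invalid transition was accepted"
    except ValueError:
        pass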
http://it.toolbox.com/blogs/enterprise-solutions/building-decision-tables-15903
Decision Table Testing Example (2/2)
Company X sells goods to wholesale and retail outlets. Wholesale customers receive a two percent discount on all orders. The company also encourages both wholesale and retail customers to pay cash on delivery by offering a two percent discount for this method of payment.  Another two percent discount is given on orders of 50 or more units.

http://it.toolbox.com/blogs/enterprise-solutions/building-decision-tables-15903 
Decision Table Testing Example (1/2)
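
The decision table for Company X can be reconstructed from the rule text above (the original slide image is not in the transcript). Three Boolean conditions give 2^3 = 8 columns; each 2% discount is cumulative:

Conditions               R1  R2  R3  R4  R5  R6  R7  R8
Wholesale customer?      T   T   T   T   F   F   F   F
Pays cash on delivery?   T   T   F   F   T   T   F   F
Order of 50+ units?      T   F   T   F   T   F   T   F
Action: total discount   6%  4%  4%  2%  4%  2%  2%  0%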
Can be applied at any test level
Relatively easy to apply with high defect-finding capability
Detailed specifications helpful in determining boundaries

Treated as an extension of equivalence partitioning

Can be used on any equivalence classes – user input on screen, time ranges, table ranges etc.
Boundary Value Analysis (2/2)
Behavior at the edge of each equivalence partition is more likely to be incorrect than behavior within the partition
Boundaries are areas where testing is likely to yield defects
Maximum and minimum values of a partition are its boundary values
Boundary of a valid partition = valid boundary value
Boundary of an invalid partition = invalid boundary value

Tests designed to cover both valid and invalid boundary values
Test for each boundary value is chosen
Boundary Value Analysis (1/2)
Testing of a function which adds 2 integer numbers

Should I test 11 + 22 or 13 + 34?
How many additions should I perform?

In order to correctly define partitions we need knowledge about the code (white-box)
Equivalence partitions continued
Testing an entry field for age (10-99)
Attaching file (1kB – 10MB)
Input field for Departure Time (0000 – 2359)

Output:
Printout of document title (1 – 48 characters)
Display of text in the field (1-3000 characters, 20 per line)
Equivalence partitions
Inputs divided into groups that are expected to exhibit similar behavior (they are likely to be processed in the same way)

Equivalence partitions both for valid (values that should be accepted) and invalid data (values that should be rejected)

Partitions also for: outputs, internal values, time-related values, interface parameters
Equivalence partitioning (1/2)
Also called structural or structure-based techniques
Based on an analysis of the structure of the component or system
Information about "how" the software is constructed is used to derive the test cases
Coverage for existing test cases can be measured
Further test cases can be derived systematically to increase coverage
White-box test design techniques
Model-based techniques:
Models (formal or informal) used to specify the problem to be solved, the software or its components
Test cases can be derived systematically from these models
Black-box test design techniques (2/2)
Purpose: identify test conditions, test cases and test data

Black-box techniques
White-box techniques
Experience-based techniques

Some techniques fall into one category; others have elements of more than one category
Test design techniques
Various test procedures (and automated test scripts) are formed into a test execution schedule
Test execution schedule – defines the order in which test procedures or test scripts are executed, when and by whom

It takes into account:
Regression tests
Prioritization
Technical and logical dependencies
Test implementation (2/2)
Part of the specification of a test case
Include: outputs, changes to data and states, any other consequences of a test
If not defined: a plausible but erroneous result may be interpreted as the correct one
Ideally defined prior to test execution
Expected results
Test basis analyzed to determine 'what to test' – to identify test conditions
Test condition – item or event that could be verified by one or more test cases (e.g. function, transaction, quality characteristic, structural element)
Establishing traceability from test conditions back to requirements and specifications. Useful for:
Impact analysis when requirements change
Requirements coverage for a set of tests
Detailed test approach is chosen based on identified risks – test design techniques selected
Test analysis
TC1: x=4, y=2
TC2: x=5, y=2
Statement coverage:
8/8=100%
Branch coverage:
3/4=75%

TC1: x=4, y=3
TC2: x=5, y=2
Branch coverage:
4/4 = 100%
int x
int y

if (x < 5)
    print "x is smaller than 5"
else
    print "x is bigger or equal to 5"
end
if (y == 2)
    print "y is 2"
end
Decision Coverage / Branch Coverage
http://etutorials.org/Programming/UML/Chapter+3.+Use+Cases/
Use Case Example
Modified Condition Decision Testing Continued
Independence pairs derived from the expression A or (B and C):

For the A operand: (A=T, B=F, C=T) -> TRUE vs. (A=F, B=F, C=T) -> FALSE

For the B operand: (A=F, B=T, C=T) -> TRUE vs. (A=F, B=F, C=T) -> FALSE

For the C operand: (A=F, B=T, C=T) -> TRUE vs. (A=F, B=T, C=F) -> FALSE

Set of test cases: (F,T,T), (F,F,T), (F,T,F), (T,F,T) – four test cases give 100% MC/DC coverage
Other Structure-based Techniques
[Diagram: testing an entry field for age (10-99) – valid partition between boundary values 10 and 99]

TCs for valid Equivalence Partitions: e.g. 80
TCs for valid Boundary Values: 10 and 99
TCs for valid and invalid EP: e.g. 5, 80, 102
TCs for valid and invalid BV: 9, 10, 99, 100
TCs for EP+BV (also called full BV): 9, 10, 11, 98, 99, 100
Boundary Value Analysis - examples
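
The age-field values above, turned into a parametrized Python test; a sketch, where validate_age is an assumed implementation of the 10-99 rule:

import pytest

def validate_age(age):
    # Assumed implementation of the entry-field rule: valid range is 10-99.
    return 10 <= age <= 99

# Full BVA values (9, 10, 11, 98, 99, 100) plus one representative
# per equivalence partition (5, 80, 102).
@pytest.mark.parametrize("age, expected", [
    (5, False), (9, False),        # invalid partition below, invalid boundary
    (10, True), (11, True),        # valid boundary and its neighbour
    (80, True),                    # representative of the valid partition
    (98, True), (99, True),        # neighbour and valid boundary
    (100, False), (102, False),    # invalid boundary, invalid partition above
])
def test_age_field(age, expected):
    assert validate_age(age) == expected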
4.6 Choosing test techniques
4.5 Experience-based techniques
4.4 White-box techniques
4.3 Black-box techniques
4.2 Test design techniques
4.1 Test development process
Part 4 – Test design techniques
TC1: x=4, y=3
TC2: x=5, y=2
Statement coverage:
8/8=100%
int x
int y

if (x < 5)
    print "x is smaller than 5"
else
    print "x is bigger or equal to 5"
end
if (y == 2)
    print "y is 2"
end
Statement Coverage Continued
[Diagram: equivalence partitions for age (10-99) – valid partition between boundary values 10 and 99, invalid partitions below and above]
Tests designed to cover all valid and invalid partitions
Applicable at all levels of testing
To achieve input and output coverage goals
Equivalence partitioning (2/2)
6. Tool support for testing
5. Test management
4. Test design techniques
3. Static techniques
2. Testing throughout the SW lifecycle
1. Fundamentals of testing
Introduction
Course Plan
TC1: x=4, y=3
Statement coverage: 5/8 = 62.5%

TC2: x=5, y=2
Statement coverage:
6/8=75%
int x
int y

if (x < 5)
    print "x is smaller than 5"
else
    print "x is bigger or equal to 5"
end
if (y == 2)
    print "y is 2"
end
Statement Coverage
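
The same example translated into runnable Python, so the figures can be reproduced with a coverage tool such as coverage.py (the translation is mine; a tool counts Python statements, so the exact 8-statement tally of the pseudocode differs slightly):

def describe(x, y):
    if x < 5:
        print("x is smaller than 5")
    else:
        print("x is bigger or equal to 5")
    if y == 2:
        print("y is 2")

describe(4, 3)   # TC1: True branch of x < 5, False outcome of y == 2
describe(5, 2)   # TC2: False branch of x < 5, True outcome of y == 2

# Run with:  coverage run --branch example.py  and then:  coverage report
# TC1 and TC2 together give 100% statement and 100% branch coverage.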

Preparation for ISTQB FL Certification – Part 5
Example incident life-cycle
May be raised during:
Development
Review
Testing
Use of a software product

May be raised for:
Issues in the code or working system
Any type of documentation (requirements, development documents, test documents, Help, installation guides)
Incidents
Discrepancies between actual and expected outcomes need to be logged as incidents
Incident must be investigated and may turn out to be a defect
Appropriate actions to dispose of incidents and defects must be defined
Incidents and defects must be tracked from discovery and classification to correction and confirmation of the solution
To manage all incidents to completion, an incident management process and rules for classification must be established
Incident Management
In Risk-based approach identified risks used to:
Determine test techniques to be employed
Determine extent of testing to be carried out
Prioritize testing in an attempt to find the critical defects as early as possible
Determine whether non-testing activities could be employed to reduce risk (e.g. training for inexperienced designers)
Risk-based testing (2/2)
Failure-prone software delivered
The potential that software could cause harm to an individual or company
Poor software characteristics (functionality, reliability, usability and performance)
Poor data integrity and quality (e.g. data migration issues, data conversion problems, data transport problems, violation of data standards)
Software that does not perform its intended functions
Product Risks Examples
Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product
Risks are used to decide:
Where to start testing
Where to test more

Product Risk - special type of risk to the success of the project
Product Risks

Risks that surround the project's capability to deliver its objectives

When analyzing, managing and mitigating these risks, the test manager follows well-established project management principles

The 'Standard for Software Test Documentation' (IEEE 829-1998) outline for Test Plans requires risks and contingencies to be stated
Project Risks
For tester:
CM helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harnesses

During test planning:
CM procedures and infrastructure (tools) should be chosen, documented and implemented
Configuration management (2/2)
Often gathered in the form of Summary Report (outline in IEEE 829-1998)
Metrics collected during and at the end of a test level to assess:
Adequacy of test objectives for that test level
Adequacy of test approaches taken
Effectiveness of testing with respect to test objectives
Test Reporting (2/2)
Summarizing information about testing endeavor
Includes:
What happened during a period of testing
Analyzed information and metrics to support recommendations, future decisions and actions:
assessment of defects remaining
economic benefits of continued testing
outstanding risks
level of confidence in tested software
Test Reporting (1/2)
Purpose: provide feedback and visibility about test activities
Metrics collected manually or automatically
Used to measure exit criteria (e.g. coverage)
Used to assess progress against planned schedule and budget
Test Progress Monitoring
Depends on the context
May consider:
risks, hazards and safety
available resources and skills
technology
nature of the system
test objectives
regulations
Test Approach (2/2)
Implementation of Test Strategy for a specific project
Defined and refined in test plans and test designs
Includes decisions made based on project's goal and risk assessment

Starting point for:
planning the test process
selecting the test design techniques
selecting the test types to be applied
defining entry and exit criteria
Test Approach (1/2)
Characteristics of the development process:
stability of the organization
tools used
test process
skills of the people involved
time pressure
Testing effort may depend on (2/3)
Characteristics of the product:
quality of the specification and other information used for test models
size of the product
complexity of the problem domain
requirements for reliability and security
requirements for documentation
Testing effort may depend on (1/3)
Test Plan
1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be Tested
7. Features not to be Tested
8. Approach
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
IEEE 829-1998 Standard for Software Test Documentation
Running The Tests
Test Log: Record the details of tests in time order.
Test Incident Report: Record details of events that need to be investigated.

Completion of Testing
Test Summary Report: Summarize and evaluate tests.
IEEE 829-1998 Standard for Software Test Documentation
Review and contribute to test plans
Analyze, review and assess user requirements, specifications and models for testability
Create test specifications
Set up test environment (often coordinating with system administration and network management)
Prepare and acquire test data
Tester typical tasks (1/2)
Adapt planning based on test results and progress, and take any actions necessary to compensate for problems
Set up adequate configuration management of testware for traceability
Introduce suitable metrics for measuring test progress and evaluating the quality of testing and product
Decide what should be automated, to what degree and how
Select tools to support testing and organize any training in the tool use for testers
Decide about the implementation of test environment
Write test summary reports based on information gathered during testing
Test Leader typical tasks (2/2)
Activities and tasks performed depend on:
project and product context
people in the roles
organization
Test leader and tester
Benefits:
Independent testers see other and different defects
Independent testers are unbiased
Can verify assumptions people made during specification and implementation of the system
Test Independence

Independent testers may define test processes and rules when given clear management mandate
Testing tasks may be done by people in a specific testing role or by people in another role:
project manager, quality manager, developer, business and domain expert, infrastructure or IT operations
Test Organization and Independence (3/3)
Options for independence:
No independence, developers test their own code
Independent testers within development team
Independent test team or group within the organization (reporting to project management)
Independent testers from business organization or user community
Independent test specialists for specific test types: usability testers, security testers, certification testers (certify software against standards and regulations)
Independent testers outsourced or external to the organization
Test Organization and Independence (1/3)
Severity of the impact on the system
Urgency/priority to fix
Status of the incident
Conclusions, recommendations and approvals
Global issues e.g. other areas that might be affected by the change
Change history
References, including the identity of the test case specification that revealed the problem
Incident report content (2/2)
Date of issue, issuing organization and author
Expected and actual results
Identification of the test item and test environment
Software life-cycle process in which the incident was observed
Description of the incident to enable reproduction and resolution (incl. logs, database dumps, screenshots)
Scope or degree of impact on stakeholders' interests
Incident report content (1/2)
Objectives:
Provide information for developers and other parties about the problem to enable identification, isolation and correction if necessary
Provide Test Leaders a means of tracking the quality of the system under test and the progress of testing
Provide ideas for test process improvement

Structure of an incident report also covered in the 'Standard for Software Test Documentation' IEEE 829-1998
Incident reports
To ensure that the risk of product failure is minimized
Assess (and reassess on a regular basis) what can go wrong (risks)
Determine what risks are important to deal with
Implement actions to deal with those risks
Risk management activities
Provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of the project
It involves:
Identification of product risks
Their use in guiding test planning and control, specification, preparation and execution of tests
Draws on the collective knowledge and insight of the project stakeholders
Goal: Determine the risk and the levels of testing required to address those risks
Risk-based testing (1/2)
Testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect
Testing as a risk-control activity
Provides feedback about residual risks by measuring the effectiveness of critical defect removal and of contingency plans
Testing supports identification of new risks
Helps to determine what risks should be reduced
May lower uncertainty about risks
Product Risks – Testing significance
Supplier issues:
Failure of a third-party
Contractual issues
Project Risks Examples (3/3)
Technical issues:
Problems in defining the right requirements
The extent to which requirements cannot be met given existing constraints
Test environment not ready on time
Late data conversion and migration planning; late development and testing of data conversion/migration tools
Low quality of the design, code, configuration data, test data and tests
Project Risks Examples (2/3)
Organizational factors:
Skill, training and staff shortages
Personnel issues
Political issues such as:
Problems with testers communicating their needs and test results
Failure by the team to follow up on information found in testing and reviews
Improper attitude toward or expectations of testing (e.g. not appreciating the value of finding defects during testing)
Project Risks Examples (1/3)
Risk - a chance of an event, hazard, threat, or situation occurring and resulting in undesirable consequences or a potential problem

The level of risk is determined by:
Likelihood of an adverse event happening
Impact - harm resulting from that event
Risk and Testing
Purpose: establish and maintain the integrity of the products (components, data, documentation) of the software or system through the project and product life cycle

For testing may involve ensuring:
All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability may be maintained throughout the test process
All identified documents and software items are referenced unambiguously in test documentation
Configuration management (1/2)
Test control activities examples:
Making decisions based on information from test monitoring
Re-prioritizing tests when an identified risk occurs
Changing the test schedule due to availability or unavailability of test environment
Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build
Test Control (2/2)

Describes any guiding or corrective actions taken as a result of information and metrics gathered and reported
Actions may:
cover any test activity
affect any other software life-cycle activity or task
Test Control (1/2)

Test coverage of requirements, risks or code
Subjective confidence of testers in the product
Dates of test milestones
Testing costs, incl. cost compared to the benefit of finding the next defect or running the next test case
Test Metrics (2/2)

Percentage of work done in test cases preparation
Percentage of work done in test environment preparation
Test case execution (e.g. number of TCs run/not run, test cases passed/failed)
Defect information (e.g. defect density, defects found and fixed, failure rate, re-test results)
Test Metrics (1/2)
Dynamic and heuristic - e.g. exploratory testing (testing more reactive to events than pre-planned, concurrent execution and evaluation)

Consultative - e.g. test coverage primarily driven by the advice and guidance of technology or business domain experts outside the test team

Regression-averse - e.g. reuse of existing test material, extensive test automation of functional regression tests and standard test suites
Test Approaches (2/2)
Analytical - e.g. risk-based testing (testing applied to areas of greatest risk)

Model-based - e.g. stochastic testing using statistical information about failure rates or usage

Methodical - e.g. failure-based (incl. error guessing and fault attacks), experience-based, checklist based, quality characteristic based

Process- or Standard-compliant - e.g. industry specific standards, agile methodologies
Test Approaches (1/2)
A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).
Test Strategy
Outcome of testing:
number of defects
amount of rework required
Testing effort may depend on (3/3)
Two approaches:
Metrics-based approach - estimating testing effort based on metrics of former or similar projects or based on typical values
Expert-based approach - estimating the tasks based on estimates made by the owner of the tasks or expert

After estimation, resources can be identified and a schedule can be drawn up
Test Estimation
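A minimal sketch of the metrics-based approach, assuming effort figures taken from a former, similar project (all numbers are illustrative):

    # Metrics-based estimate: scale historical effort to the new project.
    past_effort_hours = 400       # total test effort on the similar project
    past_test_cases = 200         # test cases executed there
    planned_test_cases = 260      # test cases planned for the new project

    effort_per_tc = past_effort_hours / past_test_cases    # 2.0 h per test case
    estimate = effort_per_tc * planned_test_cases          # 520 h
    print(f"estimated test effort: {estimate:.0f} hours")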
Define when to start testing (e.g. at the beginning of a test level) or when a set of tests is ready for test execution

Typically cover the following:
Test environment availability and readiness
Test tool readiness in the test environment
Testable code availability
Test data availability
Entry Criteria

Define when to stop testing (e.g. at the end of a test level) or when a set of tests has achieved a specific goal
Typically cover the following:
Thoroughness measures, such as coverage of code, functionality or risk
Estimates of defect density or reliability measures
Cost
Residual risks, such as defects not fixed or lack of test coverage in certain areas
Schedules, such as those based on time to market
Exit Criteria
Determining the scope and risks and identifying the objectives of testing
Defining the overall approach for testing (incl. definition of test levels and entry and exit criteria)
Integrating and coordinating testing activities into the software life-cycle activities
Making decisions about what to test, which roles will perform the test activities, how the test activities should be done and how the test results will be evaluated
Scheduling test analysis and design activities
Scheduling test implementation, execution and evaluation
Test Planning Activities (1/2)

Assigning resources for the different activities defined
Defining the amount, level of detail, structure and templates for the test documentation
Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues
Setting the level of detail for test procedures so as to provide enough information to support reproducible test preparation and execution
Test Planning Activities (2/2)
Preparation of Tests
Test Plan: Plan how the testing will proceed.
Test Design Specification: Decide what needs to be tested.
Test Case Specification: Create the tests to be run.
Test Procedure: Describe how the tests are run.
Test Item Transmittal Report: Specify the items released for testing.
IEEE 829-1998 Standard for Software Test Documentation

Test Plan
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary
IEEE 829-1998 Standard for Software Test Documentation
Continuous activity performed in all life-cycle processes and activities
Influenced by:
test policy of the organization
scope of testing
objectives
risks
constraints
criticality
testability
availability of resources
Test Planning (1/2)

As project and test planning progresses, more information becomes available and more detail can be included in the plan
Feedback from test activities is used to recognize changing risks and to adjust planning

May be documented in:
Master test plan
Test plans for test levels (e.g. System Test Plan, Acceptance Test Plan)
Test Planning (2/2)
The tester role may be taken over by other roles, depending on the test levels used and on the risks related to the product and project

Typically:
Component and integration test level - developers
System test level - independent testers
Acceptance test level - business experts and users
Operational acceptance testing - operators
Test responsibility on test levels
Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results
Use test administration or management tools and test monitoring tools as required
Automate tests (may be supported by developer or a test automation expert)
Measure performance of components and systems (if applicable)
Review tests developed by others
Tester typical tasks (2/2)
Coordinate test strategy and plan with project managers and others
Write or review the test strategy for the project and the test policy of the organization
Contribute testing perspective to other project activities (e.g. integration planning)
Plan the tests: selecting test approaches, estimating time, effort and cost of testing, acquiring resources, defining test levels, cycles and planning incident management
Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria
Test Leader typical tasks (1/2)
Also called test manager or test coordinator
Role often performed by project manager, development manager, quality assurance manager, manager of a test group
In larger projects: test leader and test manager
Test Leader
Drawbacks:
Isolation from the development team
Developers may lose their sense of responsibility for quality
Independent testers may be seen as bottleneck or blamed for delays in release
Test Independence
Independent testers increase effectiveness of finding defects by testing
For large, complex or safety critical projects:
multiple levels of testing
some of them done by independent testers
Development staff may participate in testing - lower levels
Lack of objectivity limits effectiveness
Test Organization and Independence (2/3)
Part 5 – Test management
5.1 Test organization
5.2 Test planning and estimation
5.3 Test monitoring and control
5.4 Configuration management
5.5 Risk and testing
5.6 Incident management
Course Plan
Introduction
1. Fundamentals of testing
2. Testing throughout the SW lifecycle
3. Static techniques
4. Test design techniques
5. Test management
6. Tool support for testing
Repetitive work reduced:
  Running regression tests
  Re-entering the same test data
  Checking against coding standards
Greater consistency and repeatability:
  Tests executed by a tool in the same order and frequency
  Tests derived from requirements
Objective assessment:
  Static measures
  Coverage
Ease of access to information about tests or testing:
  Statistics and graphs about test progress
  Incident rates and performance
Potential Benefits
Data Quality Assessment
Data conversion/migration projects, applications like data warehouses
Attributes of data can vary in terms of criticality and volume
Tools employed for data quality assessment
Review and verify data conversion and migration rules
Ensure that processed data is correct, complete and complies with a pre-defined context-specific standard
Usability Testing
Tool Support for Specific Testing Needs
Monitor and report on how a system behaves under a variety of simulated usage conditions:
number of concurrent users
users' ramp-up pattern
frequency and relative percentage of transactions

Simulation of load is achieved by creating virtual users that carry out a selected set of transactions, spread across various test machines (known as load generators)
Performance/Load/Stress Testing
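A toy sketch of the virtual-user idea using threads on a single machine; a real load generator distributes virtual users across machines and drives the actual system under test rather than the stand-in function used here:

    import threading, time

    def transaction(user_id: int) -> None:
        """Stand-in for one business transaction against the system under test."""
        time.sleep(0.01)  # simulated server round-trip

    def virtual_user(user_id: int, transactions: int) -> None:
        for _ in range(transactions):
            transaction(user_id)

    # Ramp up 50 concurrent virtual users, 20 transactions each.
    users = [threading.Thread(target=virtual_user, args=(i, 20))
             for i in range(50)]
    start = time.time()
    for u in users:
        u.start()
    for u in users:
        u.join()
    print(f"1000 transactions completed in {time.time() - start:.2f}s")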
Find defects which are evident only when software is executing e.g. time dependencies, memory leaks
Typically used in component and component integration testing and when testing middleware
Dynamic Analysis Tools (D)
Used to evaluate security characteristics of software
It includes evaluating the ability of software to protect:
Data confidentiality
Integrity
Authentication
Authorization
Availability
Non-repudiation
Mostly focused on a particular technology, platform and purpose
Security Testing Tools
Facilitates testing of components or parts of the system
Simulate the environment in which the test object will run
Provision of mock objects as stubs or drivers
Test Harness/Unit Test Framework Tools (D)
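A minimal example of the stub idea, using Python's built-in unittest framework with unittest.mock; the payment-gateway scenario is hypothetical:

    import unittest
    from unittest.mock import Mock

    def checkout(cart_total: float, gateway) -> str:
        """Component under test: depends on an external payment gateway."""
        return "paid" if gateway.charge(cart_total) else "declined"

    class CheckoutTest(unittest.TestCase):
        def test_successful_payment(self):
            gateway = Mock()                    # stub replaces the real gateway
            gateway.charge.return_value = True
            self.assertEqual(checkout(9.99, gateway), "paid")
            gateway.charge.assert_called_once_with(9.99)

    if __name__ == "__main__":
        unittest.main()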
Enable tests to be executed automatically or semi-automatically
Stored inputs and expected outcomes
Use of scripting language
Provide test log for each test run
Can be used to record tests
Support scripting language or GUI-based configuration for parametrization of data and other customization in the tests
Test Execution Tools
Used to validate software models (e.g. a physical data model for a relational database)
Enumerate inconsistencies and find defects
Can aid in generating test cases based on the model
Modeling Tools (D)
Help Developers and Testers to find defects prior to dynamic testing
Support for enforcing coding standards (including secure coding), analysis of structures and dependencies
Can help in planning or risk analysis by providing metrics for the code (e.g. complexity)
Static Analysis Tools (D)
Cost effective way of finding more defects at an earlier stage in the development process

Review Tools

Static Analysis Tools (D)

Modeling Tools (D)
Tool support for Static Testing
Store and manage incident reports
Defects/failures
Change requests
Perceived problems
Anomalies

Help managing the life-cycle of incidents
Optionally with support for statistical analysis
Incident Management Tools (Defect Tracking Tools)
Interfaces for executing tests
Interfaces for tracking defects
Interfaces for managing requirements
Support for quantitative analysis and reporting of the test objects
Support for tracing test objects to requirements specifications
Independent version control capability or an interface to an external one
Test Management Tools
The term has different meanings:

Reusable and extensible testing libraries that can be used to build testing tools (called test harnesses)

Type of design of test automation (e.g. data-driven, keyword-driven)

Overall process of execution of testing
Test framework

Tools used in reconnaissance or, in simple terms, exploration (e.g. tools that monitor file activity for an application)

Any tool that aids testing (e.g. a spreadsheet)
Tool support for testing (2/2)
Evaluation of the vendor (incl. training, support and commercial aspects) or service support suppliers (non-commercial tools)
Identification of internal requirements for coaching and mentoring in the use of the tool
Evaluation of training needs considering the current test team's test automation skills
Estimation of a cost-benefit ratio based on a concrete business case
Main considerations in selecting a tool
Interface with other tools or spreadsheets to produce useful information in the format that fits the needs of the organization
Test Management Tools
Applied to source code, they can enforce coding standards
Can also generate a large quantity of messages
Warning messages do not stop the code from compiling
They should still be addressed to ease future maintenance of the code
Best introduced gradually, with initial filters to exclude some messages
Static Analysis Tools
Execute test objects using automated test scripts
Significant effort is needed to achieve significant benefits
Capture-replay:
seems attractive
does not scale to large numbers of automated test scripts
captured script - a linear representation with specific data and actions as part of each script
may be unstable when unexpected events occur
Test Execution Tools
Neglecting version control of test assets within the tool
Neglecting relationships and interoperability issues between critical tools, such as requirements management tools, defect tracking tools and tools from multiple vendors
Risk of tool vendor going out of business, retiring the tool or selling the tool to a different vendor
Poor response from vendor for support, upgrades and defect fixes
Risk of suspension of open-source/free tool project
Unforeseen risks, such as the inability to support a new platform
Potential risks (2/2)
Continuously analyze, verify and report on the usage of specific system resources
Give warnings of possible service problems
Monitoring Tools
Dynamic Analysis Tools (D)
Performance Testing/Load Testing/Stress Testing Tools
Monitoring Tools
Tool support for Performance and Monitoring
Intrusive or non-intrusive
Measurement of percentage of specific types of code structures that have been exercised by a set of tests (e.g. statements, branches, decisions, module or function calls)
Coverage Measurement Tools (D)
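The underlying idea can be sketched with the standard library's tracing hook; real coverage tools (e.g. coverage.py) instrument far more efficiently and report percentages per structure:

    import sys

    executed = set()

    def tracer(frame, event, arg):
        # Record each source line executed inside grade().
        if event == "line" and frame.f_code.co_name == "grade":
            executed.add(frame.f_lineno)
        return tracer

    def grade(score):             # function whose coverage we measure
        if score >= 50:
            return "pass"
        return "fail"

    sys.settrace(tracer)
    grade(80)                     # exercises only the "pass" branch
    sys.settrace(None)
    print("lines executed in grade():", sorted(executed))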
Determine differences between files, databases or test results
Test Execution Tools include dynamic test comparators
Post-execution comparison may be done by a separate comparison tool
May use a test oracle (esp. if it is automated)
Test Comparators
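A minimal post-execution comparator; a real comparator would typically also mask volatile fields such as timestamps before comparing:

    from difflib import unified_diff

    def results_match(expected_path: str, actual_path: str) -> bool:
        """Compare an actual result file against the expected one."""
        with open(expected_path) as exp, open(actual_path) as act:
            diff = list(unified_diff(exp.readlines(), act.readlines(),
                                     fromfile="expected", tofile="actual"))
        for line in diff:
            print(line, end="")   # log the deviations, if any
        return not diff           # True when the files are identical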
Test Execution Tools
Test Harness/Unit Test Framework Tools (D)
Test Comparators
Coverage Measurement Tools (D)
Security Testing Tools (D)
Tool support for Test Execution and Logging
Test Design Tools
Used to generate test inputs, executable tests, test oracles from requirements, graphical user interfaces, design models or code

Test Data Preparation Tools
Manipulate databases, files or data transmissions to set up test data to be used during the execution of tests
To ensure security through data anonymity (e.g. masking of personal data)
Tool support for Test Specification
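A sketch of the anonymity aspect of test data preparation (field names and the pseudonym scheme are hypothetical):

    import hashlib

    def anonymize(record: dict) -> dict:
        """Replace direct identifiers with stable pseudonyms."""
        masked = dict(record)
        masked["name"] = "user-" + hashlib.sha256(
            record["name"].encode()).hexdigest()[:8]
        masked["email"] = masked["name"] + "@example.test"
        return masked               # non-identifying fields kept for testing

    print(anonymize({"name": "Ada Lovelace",
                     "email": "ada@real-domain.com",
                     "balance": 120.50}))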
Assist with review processes, checklists, guidelines
Used to store and communicate review comments and report on defects and effort
Can aid online reviews for large or geographically dispersed teams
Review Tools
Not strictly test tools
Necessary for storage and version management of testware and related software

Mandatory when configuring more than one hardware/software environment in terms of operating system versions, compilers, browsers
Configuration Management Tools
Store requirement statements
Store attributes for the requirements (incl. priority)
Provide unique identifiers
Support tracing requirements to individual tests
May help to identify inconsistent or missing requirements
Requirements Management Tools
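A minimal sketch of the tracing idea: a traceability matrix from requirement identifiers to covering tests makes missing coverage easy to spot (all identifiers are hypothetical):

    # Hypothetical traceability matrix: requirement ID -> covering test cases.
    trace = {
        "REQ-1": ["TC-01", "TC-02"],
        "REQ-2": ["TC-03"],
        "REQ-3": [],               # no coverage yet
    }
    untested = [req for req, tcs in trace.items() if not tcs]
    print("requirements without tests:", untested)   # ['REQ-3']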
Tools applying to all test activities over the entire software life-cycle

Test Management Tools
Requirements Management Tools
Incident Management Tools (Defect Tracking Tools)
Configuration Management Tools
Tool support for management of testing and tests
Some tools can be intrusive – they affect the actual outcome of the test, e.g.:
Actual timing differs due to additional instructions executed by the tool
Different measure of the code coverage

The consequence of intrusive tools is called the probe effect
Probe effect
Different criteria for classification
Purpose
Commercial, free, open-source, shareware
Technology used
Etc.

Our classification -> the testing activities a tool supports
Some tools support one testing activity, others support more than one
Tools from one provider that are designed to work together are often bundled into one package
Test Tool Classification
Improve the efficiency of test activities by automating repetitive tasks or supporting manual test activities like test planning, test design, test reporting and monitoring

Automate activities that require significant resources when done manually (e.g. static testing)

Automate activities that cannot be done manually (e.g. large-scale performance testing of client-server applications)

Increase reliability of testing (e.g. by automating large data comparisons or simulating behaviour)
Purpose for tool support depends on the context

Tools directly used in testing:
Test execution tools
Test data generation tools
Result comparison tools
Tools supporting managing the testing process:
Tools used to manage tests, test results, test data, requirements, incidents and defects
Tools used for reporting and monitoring test execution
Tool support for testing (1/2)
Rolling out the tool to the rest of the organization incrementally
Adapting and improving processes to fit with the use of the tool
Providing training and coaching/mentoring for new users
Defining usage guidelines
Implementing a way to gather usage information from actual use
Monitoring tool use and benefits
Providing support for the test team for a given tool
Gathering lessons learned from all teams
Success factors (1/2)
Introduction of the tool into the organization starts with a pilot project
Objectives:
Learn more detail about the tool
Evaluate how the tool fits with existing processes and practices, and determine what would need to change
Decide on standard ways of using, managing, storing and maintaining the tool and the test assets
Assess whether the benefits will be achieved at reasonable cost
Pilot project
Assessment of organizational maturity, strengths and weaknesses
Identification of opportunities for an improved test process supported by the tools
Evaluation against clear requirements and objective criteria
A proof-of-concept, by using a test tool during the evaluation phase to:
establish if test tool performs effectively with the software under test and current infrastructure
identify changes needed to infrastructure to effectively use the tool
Main considerations in selecting a tool
Technical expertise in the scripting language is needed for all approaches (testers or test automation specialists)
Expected results for each test need to be stored for later comparison
General rules
Spreadsheet contains keywords describing actions to be taken (action words) and test data
Testers can define tests using the keywords
Keywords are tailored to the application being tested
Keyword-driven testing
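A minimal sketch of the keyword-driven idea just described: action words map to implementing functions, and testers write spreadsheet-style rows of keywords plus data (the webshop keywords are hypothetical):

    # Action words mapped to the functions that implement them.
    def open_app(name): print(f"opening {name}")
    def enter(field, value): print(f"typing {value!r} into {field}")
    def check(field, expected): print(f"asserting {field} == {expected!r}")

    KEYWORDS = {"open": open_app, "enter": enter, "check": check}

    # Rows as a tester would write them in a spreadsheet.
    table = [
        ("open", "webshop"),
        ("enter", "quantity", "3"),
        ("check", "total", "29.97"),
    ]
    for keyword, *data in table:
        KEYWORDS[keyword](*data)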
Separates test inputs (data)
Uses more generic script
Read input data and execute the same script with different data
Testers not familiar with scripting language can only create test data

Data generated using algorithms based on configurable parameters at run time
E.g. script generating random user ID
Data-driven testing
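A minimal sketch of the data-driven idea: one generic script executed against many rows of inputs and expected outcomes (the discount function stands in for the system under test):

    def apply_discount(total: float, code: str) -> float:
        """Stand-in for the system under test."""
        return round(total * (0.9 if code == "SAVE10" else 1.0), 2)

    rows = [                 # test data; could equally be read from a CSV file
        (100.00, "SAVE10", 90.00),
        (100.00, "NONE", 100.00),
        (19.99, "SAVE10", 17.99),
    ]
    for total, code, expected in rows:
        actual = apply_discount(total, code)
        assert actual == expected, (total, code, actual)
    print("all data rows passed")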
Unrealistic expectations (incl. functionality and ease of use)
Underestimating the time, cost and effort for the initial introduction of a tool (e.g. training, external expertise)
Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (e.g. need for changes in the testing process, continuous improvement of the way the tool is used)
Underestimating the effort required to maintain test assets generated by the tool
Over-reliance on the tool (replacement of test design or usage of automated testing where manual testing would be better)
Potential risks (1/2)
Part 6 – Tool support for testing
6.1 Types of test tools
6.2 Potential benefits and risks
6.3 Special considerations
6.4 Introducing a tool into an organization
Course Plan
Introduction
1. Fundamentals of testing
2. Testing throughout the SW lifecycle
3. Static techniques
4. Test design techniques
5. Test management
6. Tool support for testing