Testing Mythology

Ashwin Dalvi

on 8 July 2014

Transcript of Testing Mythology

Peer Review
Test Report
A document that summarizes the outcome of testing in terms of items tested, summary of results, effectiveness of testing and lessons learned.
The report is sent to the management for analysis.
Business Requirement Specifications
• What must be delivered to provide value
• Has both functional and non-functional requirements
Verification and Validation

Requirement Specifications
• What must be delivered as a working application
• Has mainly functional, but some non-functional requirements
High Level
• Overview of entire system
• Abstract view of the code
Low Level
• Detailed description of elements in the system
• Low level of abstraction
Coding
• Translation of LLD to Code
• Little to no abstraction
• Isolated view of a module
• Detailed testing by WB methods
• Slightly larger groups than in unit testing
• Test one entire functionality in a component
• Checks whether the data flow between the modules / components is smooth and correct
• Modules here are considered mostly as BB
• System is considered as a whole for testing
• Checks whether system as a BB matches functional and non-functional requirements
• Verifies whether the product is viable for market entry
• Generally does not focus much on the system, but rather on how the system is accepted in the market
• The evaluation of whether or not a product, service, or system complies with a regulation, requirement, specification, or imposed condition.
• Internal process
• The assurance that a product, service, or system meets the needs of the customer and other identified stakeholders.
• Generally an external process.
Test Levels
Unit Level
Component Level
Integration Level
System Level
User Acceptance Level
The goal of unit testing is to isolate each part of the program and show that individual parts are correct in terms of requirements and functionality.
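The idea can be sketched with Python's unittest; the `is_even` function below is a hypothetical unit, used only for illustration:

```python
import unittest

def is_even(n: int) -> bool:
    # Unit under test: a hypothetical helper, not from the presentation.
    return n % 2 == 0

class IsEvenTest(unittest.TestCase):
    # Each test isolates one behaviour of the unit and checks it
    # against its requirement.
    def test_even_number(self):
        self.assertTrue(is_even(4))

    def test_odd_number(self):
        self.assertFalse(is_even(7))

    def test_zero(self):
        self.assertTrue(is_even(0))

if __name__ == "__main__":
    unittest.main()
```

Because the unit is exercised in isolation, a failure points directly at this function rather than at its callers.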
This is a broader form of unit-level testing which gauges whether the individual functions work correctly.
The testing of combined parts of an application to determine if they function correctly together.
The application is tested thoroughly to verify that it meets the functional and technical specifications.
By performing acceptance tests on an application the testing team will deduce how the application will perform in production.
White Box Testing
What is White Box Testing?
It is a method of testing software that tests internal structures or workings of an application, as opposed to its functionality.

Internal perspective of the system, as well as programming skills, are used to design test cases.
Categories of White Box Testing
Levels of White Box Testing?
Unit testing
Integration Testing
Regression Testing
Basic procedure
1. Input: involves different types of requirements, functional specifications, detailed design documents and proper source code (preparation stage).

2. Processing unit: involves performing risk analysis to guide the whole testing process, preparing a proper test plan, executing test cases and communicating results. This is the phase of building the test cases.

3. Output: prepare a final report that encompasses all of the above preparations and results.
Test Design Techniques
Experience Level Testing
Experience Based Technique
People’s knowledge, skills and background are of prime importance to the test conditions and test cases.
Error Guessing
The tester uses experience to guess the potential errors that might have been made and determines the methods to uncover the resulting defects
Exploratory Testing
The tester simultaneously learns about the product and its defects, plans the testing work to be done, designs and executes the tests, and reports the results
Dynamic Techniques
Black Box Testing
White Box Testing
(Structure - Based)
Techniques based on analysis of the structure of the component
Experience Based
The knowledge and experience of people are used to derive the test cases
Dynamic testing refers to testing methods in which the code is compiled and actually run.
Does not use any information regarding the internal structure of the component or system to be tested
Requires knowledge of how component is implemented and test cases derived from this knowledge
Black Box Technique
View the software as a black box; the tester is concerned with what the software does rather than how it does it.
Equivalence Partitioning
Inputs to the software or system are divided into groups that are expected to exhibit similar behavior
Boundary Value Analysis
Decision Table
State Transition
Outputs are triggered by changes to the input conditions or changes to 'state' of the system
Use Case Testing
Equivalence partitions (or classes) can be found for both valid data and invalid data.
Partitions can also be identified for outputs, internal values, time-related values and for interface parameters.
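A minimal sketch, assuming a hypothetical `partition` function that classifies an age input: one representative value per partition is enough, since all values in a partition are expected to behave alike.

```python
def partition(age: int) -> str:
    # Hypothetical system under test: classifies an age input.
    if age < 0 or age > 120:
        return "invalid"
    return "minor" if age < 18 else "adult"

# One representative per equivalence partition, for both valid
# and invalid data.
representatives = {
    -5: "invalid",   # invalid partition: age < 0
    10: "minor",     # valid partition: 0..17
    30: "adult",     # valid partition: 18..120
    200: "invalid",  # invalid partition: age > 120
}

for value, expected in representatives.items():
    assert partition(value) == expected
```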
Boundary values - the maximum and minimum values of an equivalence partition
Valid boundary value - a boundary value of a valid equivalence partition
Invalid boundary value - a boundary value of an invalid equivalence partition
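A small sketch of how boundary values fall out of a partition's edges; the range 1..100 and the helper name are illustrative assumptions:

```python
def derive_boundary_values(low: int, high: int) -> list[int]:
    # For a valid partition [low, high], boundary value analysis picks
    # the minimum and maximum, plus the invalid values just outside them.
    return [low - 1, low, high, high + 1]

# Boundaries for a hypothetical valid partition 1..100:
values = derive_boundary_values(1, 100)
# values == [0, 1, 100, 101]
```

The first and last values exercise the invalid partitions on either side; the middle two are the valid boundaries.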
Different combination inputs with their associated outputs
Decision Tables associate conditions with actions to perform
Total number of combinations of rules = 2^(number of conditions)
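A sketch of a decision table as code, using a hypothetical discount rule with two conditions (the rule and action names are invented for illustration):

```python
from itertools import product

# Conditions for a hypothetical discount rule.
conditions = ["is_member", "has_coupon"]

def action(is_member: bool, has_coupon: bool) -> str:
    # The decision table collapsed into code: each rule maps one
    # combination of conditions to exactly one action.
    if is_member and has_coupon:
        return "20% off"
    if is_member or has_coupon:
        return "10% off"
    return "no discount"

# Full decision table: 2^(number of conditions) rules.
rules = list(product([True, False], repeat=len(conditions)))
assert len(rules) == 2 ** len(conditions)  # 4 rules

for is_member, has_coupon in rules:
    print(is_member, has_coupon, "->", action(is_member, has_coupon))
```

Testing every row of the table guarantees that no combination of inputs is left unspecified.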
Test Engineering Mythology and Processes
State Graph (State Transition Diagram) - states, transitions between states, events triggering state changes, and actions resulting from transitions.
State Table: Shows the relationship between the states and inputs which helps in finding invalid transitions
Two points
That's where the fun is!
1. Meets the business and technical requirements that guided its design and development, and
2. Works as expected.
Control Flow

Mutation Testing
Performance Testing
Data Flow
Security Testing
Statement Coverage
Branch Coverage
Testing shows presence of errors
Condition Coverage
Method Coverage
In general, testing proves the presence of errors. Sufficient testing reduces the likelihood of existing, not discovered error conditions within the test object. It does not verify that no more bugs exist, even if no more errors can be found. Testing is not a proof that the system is free of errors.
Exhaustive testing is not possible
An exhaustive test which considers all possible input parameters, their combinations and different pre-conditions cannot be accomplished (except for trivial test objects). Tests are always spot checks. Therefore, the effort must be managed by risk, priorities and thoughtful selection.
Test early and regularly
Testing activities should begin as early as possible within the software life cycle. They should be repeated regularly and have their own agenda. Early testing helps detect errors at an early stage of the development process, which simplifies error correction (and reduces its cost).
Accumulation of errors
Errors are not distributed evenly within a test object. Where one error occurs, it is likely that more will be found. The testing process must be flexible and respond to this behaviour.
Fading effectiveness
The effectiveness of tests fades over time. If test cases are merely repeated, they do not expose new errors; errors remaining in untested functions may go undiscovered. To prevent this effect, test cases must be altered and reworked from time to time.
Testing depends on context
No two systems are the same, and therefore they cannot be tested the same way. Testing intensity, the definition of exit criteria, etc. must be defined individually for each system depending on its testing context.
False conclusion: no errors equals usable system
Detecting and fixing errors does not guarantee a usable system that matches the users' expectations. Early integration of users and rapid prototyping prevent unhappy clients and discussions.
Defect Management
Tests can be derived from use cases
Use Case - Interactions between actors i.e. users and systems
Planning and Control
Define the scope and risks, and identify the objectives of testing and the test approach.
Use case has preconditions which need to be met for the use case to work successfully and also has post conditions for termination
The process of recognizing, investigating, taking action and disposing of defects
Analysis and Design
Review the test basis.
It involves recording defects, classifying them and identifying the impact.
Implementation and Execution
Turn the test conditions into test cases and procedures, and prepare other testware such as scripts for automation, the test environment and any other test infrastructure.
Evaluating exit criteria and Reporting
Based on the risk assessment of the project, we set criteria for each test level against which we measure whether we have done "enough testing".
These criteria vary from project to project and are known as exit criteria.
Test Closure activities
To check which planned deliverables are actually delivered and to ensure that all incident reports have been resolved.
Use Case - widely used in system and acceptance test.
Defect prevention

1. Identify Critical Risks.
2. Estimate Expected Impact
3. Minimize Expected Impact

Deliverable baseline
Knowledge of the software
Knowledge of the Business Area / Target market of the software developed
Knowledge of functionality to be tested - types of testing
Knowledge of the logistics and heuristics of testing
Implement the test policy and/or the test strategy.
Schedule test analysis and design tasks, test implementation, execution and evaluation.
Set coverage criteria.
Identify test conditions.
Design the tests.
Evaluate testability of the requirements and system.
Design the test environment set-up and identify any required infrastructure and tools.
A baseline is a reference point in the software development life cycle marked by the completion and formal approval of a set of predefined work products.
The process involves:
Identify Key Deliverables: Select those deliverables that will be baselined and the point within the development process where each deliverable will be baselined.
Define Standards for Each Deliverable: Set the requirements for each deliverable and the criteria that must be met before the deliverable can be baselined.
Defect Discovery

To finalize and archive testware such as scripts, test environments, etc. for later reuse.
Test Design Techniques
Find Defect
Report Defect
Acknowledge Defect
Software testing also identifies
defects, flaws, or errors in the application code that must be fixed.
Defect Resolution
Once the developers have acknowledged a valid defect:
1. Determine the priority of the defect.
2. Developers schedule when to fix a defect.
3. Then developers should fix defects in order of importance.
4. Developers notify all relevant parties how and when the defect was repaired.

Management Reporting
Static Techniques
Dynamic Techniques
Loss of money
Loss of time
Damage to business reputation
Injury or death
Information is collected during the defect management process:
To report on the status of individual defects.
To provide tactical information and metrics to help project management make more informed decisions, e.g. redesign of error-prone modules, the need for more testing, etc.
To provide insight into areas where the process could be improved to either prevent defects or minimize their impact.
Test Conditions
(Test Objectives)
Anything that could be tested.
Test conditions should have traceability: they can be traced back to the specifications and requirements (the test basis).
Reasons for traceability:
Test Results
Effective Impact analysis as requirements change
Determining requirements coverage for a set of tests
Test Cases and Test Script
Version History
Test Case - Input data, expected output, Preconditions, Scheduling considerations, how to test, priority.
The control flowgraph is annotated with information about how the program variables are defined and used.
Different criteria exercise with varying degrees of precision how a value assigned to a variable is used along different control flow paths.
Test Script- Steps to perform the test.
The control flow of the program is represented in a flow graph
We consider various aspects of this flow graph in order to ensure that we have an adequate set of test cases.
General Relation Between Test Conditions, Test Cases, requirements and Test Script
Statement coverage is a measure of the percentage of statements that have been executed by test cases
Objective is to achieve 100% statement coverage through your testing.
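A sketch, with a hypothetical `grade` function, of why a single test case can leave statement coverage incomplete:

```python
def grade(score: int) -> str:
    # Hypothetical unit under test.
    if score >= 50:
        result = "pass"
    else:
        result = "fail"
    return result

# A single test case (score=60) executes only the "pass" assignment,
# so the "fail" statement never runs: statement coverage stays below 100%.
assert grade(60) == "pass"

# Adding a second case (score=40) executes the remaining statement,
# bringing statement coverage to 100%.
assert grade(40) == "fail"
```

Tools such as coverage.py can report which statements a test run actually reached.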

Branch coverage is a measure of the percentage of the decision points (Boolean expressions) of the program have been evaluated as both true and false in test cases
For decision/branch coverage, we evaluate an entire Boolean expression as one true-or-false predicate even if it contains multiple logical-and or logical-or operators.

Test Conditions

Condition coverage is a measure of the percentage of Boolean sub-expressions of the program that have been evaluated with both true and false outcomes (applies to compound predicates) in test cases.
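A sketch with a hypothetical compound predicate, showing the extra cases condition coverage demands beyond whole-predicate (branch) coverage:

```python
def both_positive(a: int, b: int) -> bool:
    # Compound predicate with two Boolean sub-expressions.
    return a > 0 and b > 0

# Branch coverage only needs the whole predicate evaluated once true and
# once false; condition coverage needs each sub-expression (a > 0, b > 0)
# evaluated both ways. These three cases achieve that:
cases = [
    (1, 1),   # a > 0 true,  b > 0 true
    (-1, 1),  # a > 0 false  (b > 0 not reached due to short-circuit)
    (1, -1),  # a > 0 true,  b > 0 false
]
results = [both_positive(a, b) for a, b in cases]
# results == [True, False, False]
```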
Test Cases
Test Script
Method coverage is a measure of the percentage of methods that have been executed by test cases.
Objective is to achieve a 100% method coverage so that the working of all the methods can be verified.
Mutation testing involves modifying a program's source code or byte code in small ways.
Each mutated version is called a mutant.
Tests detect and reject mutants by causing the behavior of the original version to differ from the mutant. This is called killing the mutant.
Test suites are measured by the percentage of mutants that they kill.
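A minimal sketch of the idea; the functions and the mutation (a flipped comparison operator) are invented for illustration:

```python
def original_max(a: int, b: int) -> int:
    # Original program under test.
    return a if a > b else b

def mutant_max(a: int, b: int) -> int:
    # Mutant: the comparison operator has been flipped from > to <.
    return a if a < b else b

def run_suite(fn) -> str:
    # A test suite "kills" a mutant if at least one test fails on it.
    try:
        assert fn(3, 5) == 5
        assert fn(5, 3) == 5
        return "survives"
    except AssertionError:
        return "killed"

# The original passes the suite; the mutant is killed by it,
# so this suite scores 100% on this single mutant.
print(run_suite(original_max), run_suite(mutant_max))
```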
Static Techniques
Test techniques in which
Code is not executed
Typically used to find errors in documents such as designs, requirements etc.
Static Analysis by Tools
Code is checked for structural defects
Walkthrough
Informal process of walking through the code
The author of the work product explains the product to his team.
Participants can ask questions if any.
Technical Review
A team consisting of your peers reviews the technical specification of the software product and checks whether it is suitable for the project.
Led by moderators.
Inexpensive way to get some benefit
Formal process based on rules and checklists.
Inspection report including list of findings
Primarily checking of the code, manually reviewing the code or document to find errors
Done with the help of specialized tools
Performance testing
is, in general, testing performed to determine how a system performs in terms of responsiveness and stability under a particular workload.
Security testing
is a process intended to reveal flaws in the security mechanisms of an information system that protect data and maintain functionality as intended.
Examples include vulnerability scanning, penetration testing, ethical hacking etc.
Models are used for the specification of the problem and test cases are derived systematically from it.
Includes functional testing
Does not use any information regarding the internal structure of the component or system to be tested
Contents of Test Report
(Semi formal review)
(Formal Review)
Defect Life Cycle
Defect Priority and Defect Severity
Defect Priority indicates the importance or urgency of fixing a defect.
Defect Priority may be classified as Urgent, High, Medium, Low
Defect Severity is a classification of software defect to indicate the degree of negative impact on the quality of software.
The severity can be classified as Critical, Major, Minor or Trivial
Priority vs Severity
Test Report
Classification of Bugs
Testing Statistics
White-box testing's basic procedure requires a deep understanding of the source code you are testing.
A customer is valued if they have made at least $500 in total purchases
Test Conditions:
1. Verify that the customer is not valued when total purchases are less than $500.

2. Verify that the customer is valued when total purchases are greater than or equal to $500.
Test cases:
Test ID   Input Data (Total Purchases)   Expected Output
1         $0                             Not valued customer
2         $100                           Not valued customer
3         $499                           Not valued customer
4         $500                           Valued customer
5         $501                           Valued customer
6         $1000                          Valued customer
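A sketch of an implementation the table above would exercise; the function name is an assumption, while the $500 threshold and expected outputs follow the example:

```python
def customer_status(total_purchases: float) -> str:
    # Hypothetical implementation of the rule in the example:
    # a customer is valued at $500 or more in total purchases.
    if total_purchases >= 500:
        return "Valued customer"
    return "Not valued customer"

# The test cases from the table, chosen by boundary value analysis
# around the $500 threshold:
test_cases = [
    (0, "Not valued customer"),
    (100, "Not valued customer"),
    (499, "Not valued customer"),
    (500, "Valued customer"),
    (501, "Valued customer"),
    (1000, "Valued customer"),
]

for purchases, expected in test_cases:
    assert customer_status(purchases) == expected
```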

Test Script:
Script 1 – Not Valued Customer
From the main menu navigate to the customer search screen
Enter the customer's account number and click on search
Verify if "Not Valued Customer" is displayed

Script 2 – Valued Customer
From the main menu navigate to the customer search screen
Enter the customer's account number and click on search
Verify if "Valued Customer" is displayed
Try Again 1
Try Again 2
Close Application
Wrong Credentials
Wrong Credentials
Wrong Credentials
Correct Credentials
Correct Credentials
Correct Credentials
State       CC   WC
S1 Start    S4   S2
S2 TA1      S4   S3
S3 TA2      S4   S5
S4 Access   ?    ?
S5 CA       -    -
Main Success Scenario
A: Enter Agent name
S: Validate Password
S: Allow Access

Password not valid
S: Display message and ask to retry

Password not valid 3 times
S: Close Application

A: Actor
S: System
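The state table and scenario above can be sketched as code; the state names follow the table (CC = correct credentials, WC = wrong credentials), and the dictionary layout is an assumption:

```python
# Transition table for the login state machine: each non-final state
# maps to (next state on correct credentials, next state on wrong ones).
TRANSITIONS = {
    "Start":       ("Access", "Try Again 1"),
    "Try Again 1": ("Access", "Try Again 2"),
    "Try Again 2": ("Access", "Close Application"),
}

def login(attempts: list[bool]) -> str:
    # Each attempt is True (correct credentials) or False (wrong).
    state = "Start"
    for correct in attempts:
        if state not in TRANSITIONS:  # Access / Close are final states
            break
        on_correct, on_wrong = TRANSITIONS[state]
        state = on_correct if correct else on_wrong
    return state

# Three wrong attempts close the application; a correct one grants access.
assert login([False, False, False]) == "Close Application"
assert login([False, True]) == "Access"
```

State-transition test cases then cover every row and column of the table, including the invalid transitions the state table exposes.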
Checks for discrepancies in the specifications and standards followed.
Follows strict process to find the defects
Login Button



