Transcript of Software Quality Assurance Training Material 1
Introduction to the Software QA Project

b) The importance of software quality assurance
c) What are we going to do about it?

Today we will begin with the software quality assurance training. There will be 2 separate training sessions:
Session 1: Overview of Testing
Session 2: ISTQB Testing Framework
So let's get started!

Introduction
This material is intended to provide our consultants with additional knowledge about software quality assurance through testing. It is intended for both Business Solutions Consultants and Software Solutions Consultants. Its purpose is to up-skill our consultants with the key skills required to enhance their knowledge of the software quality assurance process. After viewing this presentation, our consultants should have an understanding of general software testing methods.

Well... What is it?
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.

"all code is guilty, until proven innocent"

The purpose of software testing is to:
Verify system behaviour
Detect errors
Validate user requirements
Reduce risk
Improve quality by finding defects
Ensure functionality and usability
Increase customer satisfaction
Ensure compliance with regulations
Reduce legal liability

Defects are introduced in the various phases of the software development lifecycle, and can be detected at the stage at which they are introduced or in subsequent stages. The earlier a defect is found, the less it costs to fix; a defect not detected until deployment can damage the business brand and be very costly. Have a look at how the comparative cost of a defect increases the later it is found.

QUIZ TIME
Can you fill in the missing words of these testing principles? ANSWERS

There are some general testing principles to help and guide testers. Now let's have a look at what these and other testing principles mean.

Testing shows the presence of bugs
Testing can identify that one or more problems exist, but not that the artefact is problem free.

Exhaustive testing is impossible
The optimal amount of testing, based on a risk assessment, needs to be decided. Testing is both time-consuming and expensive, so there needs to be a trade-off between costs and the level of defects allowed.

Test early
The earlier testing begins, the more effective and the less costly it is. The cost of finding a defect increases tenfold as time progresses.

The Pesticide Paradox
Running the same set of tests over and over again reduces the effectiveness of detecting new defects. If the tests have not found a defect, the approach can be altered.

Defect clustering
A small number of modules can exhibit the majority of the defects detected. Testing activities should reflect the spread of defects, and target areas of the application where a high proportion of defects can be found, for further analysis of the cause/problem.

Testing is context dependent
Testing approaches and tools should be tailored to the complexity and context of the application. Using a complex tool for a simple application will be unnecessarily expensive, while using a simple tool for a complex application will not necessarily find all the defects.

Absence of errors fallacy
Software with no known errors is not necessarily ready for deployment. It may simply mean that the defects have not been found.

There are different types of software testing. These are:

Functional Tests
Testing after code has changed

Try to match up the test description with the correct test type. ANSWER

What is automated testing? When/why should automated testing be used?
Automated testing should be used when the benefits gained are higher than the costs of acquiring the tool and running the tests, e.g. where the tests:
are too complex or too numerous to run manually
must be repeated multiple times
do not require any 'exploratory' behaviour (which is better performed by humans)
and where the project is long.

Benefits of Automated Testing
Saves time and money
Increases test coverage
Does what manual testing cannot
Helps developers and testers
Provides developers with the confidence to change the code
Improves team morale

Certain kinds of testing are more suited to automation than others, e.g. code-facing tests, smoke tests, regression testing and load tests.

These testing levels can then be broken down further...

What is Software Testing?
Have you had an experience with software that did not work as expected? Did it make you feel like this? Software that does not work as expected can have a large impact on organisations too: disruptions, loss of money and loss of time, to name a few. So it is important that we do our utmost to ensure that our clients receive quality software that works as defined. An undetected bug is not only costly to us, but can lead to project failure and unhappy clients, and can negatively affect our reputation. Comprehensive software testing is therefore fundamental to producing high-quality software that meets user requirements. And it is important not only that we find the defects, but also that we find them early.

Break away....

Q1: Which of the following describes white-box test case design techniques?
a. Test cases are derived systematically from models of the system
b. Test cases are derived systematically from the tester's experience
c. Test cases are derived systematically from the delivered code
d. Test cases are derived from the developer's experience
A1: The correct answer is C. Answer (a) relates to specification-based testing (i.e. black-box testing), answer (b) relates to experience-based testing and answer (d) could relate either to debugging or to experience-based techniques.

Q2: Performance, usability, scenario and security testing are all examples of what level of testing?
A2: System testing

Q3: What is the purpose of the V-Model? What does it show?
A3: The V-model is a framework illustrating the integration of testing activities into each phase of the SDLC, from requirements specification to maintenance.
The V-model shows the following:
•on the left, the waterfall development model
•in the middle, test planning, which should start with each work product
•on the right, the testing activities

1. development - where the software is developed (can include unit testing)
2. testing - where testers test the quality of the system, open bugs and look at bug fixes; this environment must resemble the production environment
3. UAT testing environment - where users (or client testers) test the software
4. pre-production or staging - where new versions of software are assembled, tested and reviewed before going into production
5. production/live - testing conducted in the live environment
6. sandbox - the testing of untested pieces of code in isolation, thereby protecting the live system.

Q4: What are examples of different types of testing environments? Have you encountered any of these at BSG/clients? If so, which ones?
A4: The six environments listed above (development, testing, UAT, pre-production/staging, production/live and sandbox).

Functional Tests
Purpose: Examines the specific functionality of a system
Models: Process flow, Plain language specification

Non-Functional Tests
Purpose: Examines the behavioural aspect of a system
Models: Security, Usability (under load and stress)

Structural Tests
Purpose: Focuses on the structural aspect of the system, and so checks the thoroughness of the testing applied. Applies to both functional and non-functional requirements, e.g. testing the architectural definition of the system.
Models: Control flow, Menu structure
Levels: Any test level

Tests after the code or environment has changed
This is essentially a re-test to confirm that the bug has been removed. It includes regression testing, where the unchanged code is tested to ensure that the change hasn't caused further defects.

Here are a few testing concepts to increase your testing knowledge:
One: Static and Dynamic Tests
Two: White and Black Box Techniques
Three: Negative and Positive Testing

Static Testing is testing conducted prior to the code being executed.
This is primarily focused on reducing human error; for example, errors in a specification document can be reduced through reviews. Dynamic Testing is testing using test data while the system is executed, i.e. once the software/system has been run.

Black Box Techniques: also known as 'Specification-Based Techniques'.
White Box Techniques: also known as 'Structure-Based Techniques' or 'Glass box testing'.

Negative testing ensures that the system can handle unexpected input or user behaviour, e.g. if a number is entered instead of a letter, is an error message displayed? Positive testing tests whether the system works as expected.

Test Types and Test Levels
There are also different types of test levels.

Verification checks that the work-product meets the requirements set out for it.
Validation ensures that the behaviour of the work-product matches the client's needs, i.e. that the requirements captured meet the client's needs.

UNIT TESTING
Developers conduct unit testing to test each unit of the software as implemented in the source code*.
INTERFACE TESTING - checks that information properly enters and exits the unit
DATA STRUCTURE TESTING - tests that local data maintains its integrity during all steps in an algorithm's execution
BOUNDARY TESTING - primarily concerned with the errors that can happen at the limits
INDEPENDENT PATHS TESTING - ensures that all statements in a module have been executed at least once

INTEGRATION TESTING
Demonstrates that all new components/systems developed integrate seamlessly with each other.
COMPONENT INTEGRATION TESTING - focuses on the interactions between software components and is conducted after component (unit) testing. This is usually conducted by the developers in conjunction with unit testing.
SYSTEM INTEGRATION TESTING - testers focus on the interactions between different systems; this may be done after system testing of each individual system.

SYSTEM TESTING
This is typically performed by testers.
System testing focuses on the behaviour of the software system and takes an approach structured around the user perspective.
FUNCTIONAL TESTING - testing the software to ensure that it meets its intended functional requirements. This is typically a black box test.
MAINTAINABILITY TESTING - verifies the required maintainability of the software as detailed in the specification document.
PERFORMANCE TESTING - performed to determine how some aspects of a system perform under a particular workload. This will include:
Volume Testing
SCENARIO TESTING - the use of simple or complex scenarios, or hypothetical stories, enables the tester to practically work through the system.
SECURITY TESTING - ensures that the application's systems control and security features are functional and of an acceptable level.
PORTABILITY TESTING - conducted to ensure that the software can be ported to specified hardware or software platforms.
RELIABILITY TESTING - continuously testing (using) the system for an extended period of time to ensure that the system is still able to function.
USABILITY TESTING - the formal process of testing the user experience through features available from the user interface.

ACCEPTANCE TESTING
USER ACCEPTANCE TESTING (UAT) - conducted to determine whether or not the system satisfies the acceptance criteria and ensures that the requirements' objectives are met.
OPERATIONAL ACCEPTANCE TESTING (OAT) - also known as Operational Readiness Testing. This involves ensuring that processes and procedures exist to enable the system to be used and maintained.
INSTALLATION (INSTALL-ABILITY) TESTING - installation testing will use a model of the installation requirements.
(DISASTER) RECOVERY TESTING - the process of verifying the success of the restoration procedures executed after a critical IT failure or disruption occurs. This set of tests includes:
Failover Testing
CONTRACT ACCEPTANCE TESTING - conducted to ensure that any criteria for acceptance of the system outlined in the contract have been met.
REGULATION ACCEPTANCE TESTING - conducted to ensure that the system complies with any governmental, legal or safety standards.
ALPHA TESTING - testing of the operational system conducted at the developer's site by independent, internal testers prior to release to external customers.
BETA TESTING - also known as field testing. This is testing conducted by a group of customers in their own locations, who provide feedback prior to the system being released.

OTHER TEST TYPES
SMOKE TESTING - preliminary smoke tests are executed to validate the behaviour of a system after it undergoes code changes. The tests are designed to provide some assurance that changes to the code do not destabilise an entire build.
REGRESSION TESTING - carried out to re-test a previously tested program following modification, to ensure that faults have not been introduced or uncovered as a result of the changes made.

Let's have a little....

This brings us to the end of our training. We covered:
1. What is software testing?
2. What is the purpose of software testing?
3. Why is testing so important?
4. Principles of testing
5. Concepts of Testing
6. Test Types and Levels
7. Automated testing

Last, but not least: the cost of fixing a bug increases tenfold as time passes. This is shown by the following example...
Source: http://www.compaid.com/caiinternet/ezine/cost_of_quality_1.pdf
Source: http://expertscolumn.com/content/software-testing-fundamentals

Law of diminishing returns
The number of bugs found decreases as testing progresses.
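The law of diminishing returns can be made concrete with a small sketch. The per-cycle defect counts and the flat cost figure below are invented purely for illustration; the shape of the numbers, not their values, is the point.

```python
# Hypothetical defect counts found by re-running the same test suite in
# successive test cycles: each cycle finds fewer new defects than the last.
defects_per_cycle = [50, 20, 8, 3, 1]  # assumed figures, for illustration only
cost_per_cycle = 1000                  # assumed flat cost of one full test cycle

for cycle, found in enumerate(defects_per_cycle, start=1):
    # The cost attributable to each newly found defect rises every cycle.
    cost_per_defect = cost_per_cycle / found
    print(f"Cycle {cycle}: {found} new defects, "
          f"~{cost_per_defect:.0f} per defect found")
```

Each successive cycle finds fewer defects, so the cost of finding each new defect climbs steeply.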
At some point, the cost of finding the bugs is higher than the cost of letting a customer/client find them.

Please visit the Wiki or join our TIGs for further discussion and information around testing.

Typically, the business analysts and testers would be responsible for this. Typically conducted by system end-users or testers. This focuses more on the 'how' of testing.

Note: This is one of the many variants of the V-Model, and is the one used by ISTQB.

Test levels are the "what" of testing; for example, are you testing the:
integration between the units?
system (performance, functionality etc.)?
acceptance of the system?

There is a traditional testing SDLC and an iterative testing SDLC.

What is automated testing? The execution of test scripts without human intervention, i.e. using a test automation tool.
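As a sketch of such an automated script: the `add_to_cart` function and its expected behaviour are invented here purely for illustration, but the pattern (plain assertions a tool or CI server can run on every change, with no human intervention) is the general one.

```python
# A toy function standing in for production code; invented for illustration.
def add_to_cart(cart, item, qty):
    if qty <= 0:
        raise ValueError("quantity must be positive")
    cart[item] = cart.get(item, 0) + qty
    return cart

# Positive test: the system works as expected.
def test_adds_new_item():
    assert add_to_cart({}, "book", 2) == {"book": 2}

# Negative test: unexpected input is handled with a clear error.
def test_rejects_zero_quantity():
    try:
        add_to_cart({}, "book", 0)
    except ValueError:
        return
    raise AssertionError("expected ValueError for zero quantity")

# A test runner or CI server collects and executes these on every change;
# here we simply call them directly.
test_adds_new_item()
test_rejects_zero_quantity()
```

Because the script is its own judge of pass/fail, it can be repeated on every build at no marginal human cost, which is exactly where automation pays off.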
Such automation can vary from testing the software's features to verifying that the right actions took place.

Testing environments
What is a test environment? An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. (The development, testing, UAT, pre-production/staging, production/live and sandbox environments were described earlier.)

Some key terms:
test case - a set of inputs, expected outcomes and pre-conditions designed to test a particular objective
test script - a set of instructions used to execute a specific test by an automated test tool
test driven development - (associated with Agile methodologies) all code must be unit tested and must pass all the time
regression testing - retesting a previously tested section to ensure that no bugs/defects have been introduced as a result of a change made
error - a mistake shown under test, but not always a mistake by the developer
defect - non-conformance to requirements/specification
bug - a fault in the program which causes the software to operate in an unanticipated/unexpected manner

Black box testing: testing conducted on the software without reference to the code/internal structure of the software. It tests the behaviour of the system against the specification and focuses more on the outputs of the software.
White box testing: testing conducted on the software based on an analysis of the code/internal structure of the component or system.

*Note: this does not refer only to JUnit and NUnit testing
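To tie several of these terms together, here is a sketch of unit-level boundary testing; the `grade` function and its 0-100 specification are invented for illustration. The test cases are black-box in spirit (derived from the specification's limits, not the code), while checking that every branch runs at least once is the white-box, independent-paths view.

```python
# Hypothetical specification: valid scores are 0-100; 50 or more is a pass.
def grade(score):
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Boundary testing: errors cluster at the limits, so test exactly there.
# Each tuple is a test case: an input and its expected outcome.
boundary_cases = [
    (0, "fail"),     # lowest valid value
    (49, "fail"),    # just below the pass mark
    (50, "pass"),    # exactly on the pass mark
    (100, "pass"),   # highest valid value
]
for score, expected in boundary_cases:
    assert grade(score) == expected, (score, expected)

# Negative cases: values just outside the boundaries must be rejected.
for bad in (-1, 101):
    try:
        grade(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"{bad} should have been rejected")
```

Together the eight cases exercise both outcomes and both rejection branches, so every statement in the unit runs at least once.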