SOFTWARE TESTING

by Jacob Chan, 5 September 2013


Transcript of SOFTWARE TESTING

SOFTWARE TESTING
Testing
- Checks whether the product meets the client's requirements and needs
- Validation and verification of the product (validation: does it meet the client's needs? verification: does it work properly?)
- Can be considered the specification of the entire system (all specifications are confirmed in this step, NOT in the coding part of the SDLC)
Characteristics of Software Testing

- Written BEFORE code development
- Automated
- Executable in the shortest time possible
LEVELS OF SOFTWARE TESTING
Unit Testing
Integration Testing
System Testing
Acceptance Testing
Unit Testing
- Testing specific sections of code (every class/method for OOP)
- Executed the most often, and by DEVELOPERS
- Must run 100% successfully
- Must be automated (pass/fail)
- If the test passes, refactor
- If it fails, debug and retest
- Question answered: Are there bugs in the code?
Why do unit tests?
- Safety in the refactoring process
- Documentation
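A minimal sketch of such an automated unit test in modern PHPUnit (the Calculator class and its add() method are hypothetical, only for illustration):

    <?php
    use PHPUnit\Framework\TestCase;

    // Hypothetical unit under test: a small Calculator class with an add() method.
    class Calculator
    {
        public function add(int $a, int $b): int
        {
            return $a + $b;
        }
    }

    // One test per behavior; each test passes or fails automatically.
    class CalculatorTest extends TestCase
    {
        public function testAddReturnsTheSumOfTwoIntegers(): void
        {
            $calc = new Calculator();
            $this->assertSame(5, $calc->add(2, 3));
        }

        public function testAddHandlesNegativeNumbers(): void
        {
            $calc = new Calculator();
            $this->assertSame(-1, $calc->add(2, -3));
        }
    }
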
Integration Testing
- Mainly used for front-end and back-end integration (combining modules/objects that passed unit tests)
- Exposes interface defects
- Shows how back-end modules interact with the user interface (front-end)
- Integration is done either iteratively or all at once (Big Bang)
- Question answered: Do the modules work together?
Four approaches to Integration Testing
- Top-down: front-end to back-end
  - might miss out on other back-end functionalities
- Bottom-up: back-end to front-end
  - easier tracking of progress
- Sandwich: combination of top-down and bottom-up
- Big Bang: combines all modules together at once
  - faster testing time, but can hurt software quality (might miss small but major bugs)
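A small sketch of an iterative integration test in PHPUnit, assuming two hypothetical back-end modules (UserRepository and RegistrationService) that are exercised together rather than in isolation:

    <?php
    use PHPUnit\Framework\TestCase;

    // Hypothetical back-end modules combined for the test.
    class UserRepository
    {
        private array $users = [];
        public function save(string $email): void { $this->users[] = $email; }
        public function exists(string $email): bool { return in_array($email, $this->users, true); }
    }

    class RegistrationService
    {
        private UserRepository $repo;
        public function __construct(UserRepository $repo) { $this->repo = $repo; }

        public function register(string $email): bool
        {
            if ($this->repo->exists($email)) {
                return false;          // duplicate registration is rejected
            }
            $this->repo->save($email);
            return true;
        }
    }

    class RegistrationIntegrationTest extends TestCase
    {
        public function testServiceAndRepositoryWorkTogether(): void
        {
            $service = new RegistrationService(new UserRepository());
            $this->assertTrue($service->register('a@example.com'));   // first signup succeeds
            $this->assertFalse($service->register('a@example.com'));  // duplicate caught at the interface
        }
    }
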
System Testing
- Testing the entire system as a whole
- Takes all of the components that passed integration testing and combines them into one system for testing (consistency check)
- Software-hardware integration
  - minimum system requirements
  - virus-free or free from corruption
  - memory allocation
- Question answered: Does the system satisfy the system requirements?
- Crash-testing the program in a real-time environment
Acceptance Testing
- Checks whether the customer's requirements have been met by the development team
- Requirements are written as user stories and provided before implementation
- Developers write automated tests based on the user stories
- All acceptance testing is executed iteratively
Why do acceptance tests?
- Check whether the system implements the desired features correctly
- Provide non-ambiguous specifications
- The percentage of acceptance tests passing measures how complete the project is
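A hedged sketch of how a user story can become an automated acceptance test in PHPUnit (the story, the Order class, and the amounts are all hypothetical):

    <?php
    use PHPUnit\Framework\TestCase;

    // User story (hypothetical): "As a shopper, I want a 10% discount on orders
    // over 100.00 so that large orders are cheaper." Amounts are in cents.
    class Order
    {
        private int $cents;
        public function __construct(int $cents) { $this->cents = $cents; }

        public function total(): int
        {
            return $this->cents > 10000 ? (int) round($this->cents * 0.9) : $this->cents;
        }
    }

    // The acceptance test restates the story as an executable, unambiguous check.
    class DiscountAcceptanceTest extends TestCase
    {
        public function testOrdersOver100GetTenPercentDiscount(): void
        {
            $this->assertSame(10800, (new Order(12000))->total());  // 120.00 minus 10%
        }

        public function testSmallOrdersAreNotDiscounted(): void
        {
            $this->assertSame(5000, (new Order(5000))->total());
        }
    }
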
Types of Acceptance Testing
User Acceptance Testing
- checks whether the requirements are met by the system
Factory Acceptance Testing
- involves equipment to be used in the industry
Operational Acceptance Testing
- testing for operational readiness
- the system can work under certain conditions
- users can be trained to use the system
- Focus: SUPPORTABILITY
- may include backup facilities, disaster management, and maintenance
Contract and Regulation Acceptance Testing
- Testing based on certain criteria (contract)
- Testing based on legal standards (regulation)
- check for piracy
- check for safety and stability of the system
- check if it follows standards from international governing bodies (IEEE, ISO, etc.)
Alpha/Beta Testing
- testing of the system by internal staff prior to release (Alpha)
- testing of the system by volunteers who are not staff, or by some customers (Beta)
- feedback is provided by the volunteers
Software Testing Methods
White Box Testing
Black Box Testing
Gray Box Testing
White Box Testing
- Testing the internal structure of an application
- Focus on the internal workings of the system
  - control flow
  - data usage
  - statements
  - memory costs
- usually done in unit testing
- errors are usually detected more often in this type of testing
- concerned with "how does it work?"
Why do White-box Testing?
- specific
- hidden errors are shown
- program optimization

Why not do White-box Testing?
- time-consuming
- unrealistic
- makes the testing phase complex
- as a result, skill with the tools and knowledge of the requirements are a MUST!
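A sketch of white-box testing in PHPUnit: the tests are written while looking at the control flow of a hypothetical classifyAge() function, so every branch is exercised at least once:

    <?php
    use PHPUnit\Framework\TestCase;

    // Hypothetical function with three paths through its control flow.
    function classifyAge(int $age): string
    {
        if ($age < 0) {
            throw new InvalidArgumentException('age cannot be negative');
        }
        return ($age < 18) ? 'minor' : 'adult';   // two more branches to cover
    }

    class ClassifyAgeWhiteBoxTest extends TestCase
    {
        public function testMinorBranch(): void { $this->assertSame('minor', classifyAge(10)); }
        public function testAdultBranch(): void { $this->assertSame('adult', classifyAge(30)); }

        public function testNegativeAgeBranchThrows(): void
        {
            $this->expectException(InvalidArgumentException::class);
            classifyAge(-1);
        }
    }
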
Black Box Testing
- focus on the functionality of the system as a whole
- checks whether the system does what it is supposed to do
- the tester decides which inputs and outputs are right or wrong without any knowledge of what is happening inside
- techniques include state transition testing
- can be used for checking the user interface (whether it is functional or not, not necessarily how it works; that is white box testing)
- executables (.exe files) are a common target of black-box tools
- concerned with the "will it work?" of the system
Why use Black-box Testing?
- user-friendly
- generic (can be a good or a bad thing, depending on who is testing)
- quick, so results can be delivered faster
- cheap, and does not demand many resources

Why not use Black-box Testing?
- too generic, which may miss mild to severe bugs
- everything is "black-boxed," so if you are a tester who cares about details, this testing is not for you
Gray Box Testing
- combination of white and black box testing
- detects defects caused by wrong inputs and other improper usage of the application
- an effective way of testing because it combines the functionality focus of black box testing with the internal-design knowledge of white box testing
- high level
- concerned with the "is it valid?" of the system
Software Testing Techniques
Compatibility Test
- Goal: test whether a system is compatible with a given environment
- factors include operating systems, bandwidth, browser compatibility, as well as hardware capacity (e.g., whether it can run within a given amount of memory)
Regression Test
- Goal: find bugs in functional and non-functional parts of the system
- usually done after modifications or bug fixes in the system
- ideally, no new bugs SHOULD appear during this testing
- also used for tracking code quality (e.g., execution time, code length)
Sanity Testing
- Goal: confirm that no faults appear in the program (so all parts of the system should be working correctly)
- checks whether the output of the system is correct
- determines whether further testing is needed; if errors are caught, the test fails
- a quick, but not detailed, form of testing
Smoke Testing
- Goal: confirm that there are no major errors in the system (including internal errors)
- "everything should fall into place" type of testing
- if one part of the system fails, everything else fails
Destructive Testing
- Goal: deliberately make the program fail in order to observe how the system behaves
- makes it easier to elicit and interpret error messages
- used to check how the program handles errors/wrong inputs, and whether it produces wrong output
- used for mass produced systems/programs
- example: stress test, crash test
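A rough destructive-testing sketch in PHPUnit: malformed inputs are thrown at a hypothetical parseQuantity() helper, and the test asserts that every one is rejected in a controlled way rather than crashing or giving a wrong result:

    <?php
    use PHPUnit\Framework\TestCase;

    // Hypothetical helper: returns a positive quantity, or null when the input is unusable.
    function parseQuantity(string $raw): ?int
    {
        if (!preg_match('/^\d{1,6}$/', $raw)) {
            return null;
        }
        $n = (int) $raw;
        return $n > 0 ? $n : null;
    }

    class ParseQuantityDestructiveTest extends TestCase
    {
        public function testMalformedInputsAreRejectedGracefully(): void
        {
            $badInputs = ['', 'abc', '-5', '1e309', "12\0 34", str_repeat('9', 10000)];
            foreach ($badInputs as $input) {
                $this->assertNull(parseQuantity($input), 'input was not rejected');
            }
        }
    }
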
Security Testing
- Goal: check if system is protecting data from potential hacking
- usually found in login/logout pages
- also checks whether the system is readily available to the user at any time
- assures that the system performs its functions correctly
Usability Testing
- Goal: test how interactive the system is
- focuses on testing interfaces
- response times
- user-friendliness
- accuracy
Performance Testing
- Goal: see general performance measurement of the system (responsiveness, stability under certain conditions)
- see how well a system performs given certain conditions (including hardware, outside conditions, or even certain memory loads)
Accessibility Test
- Goal: check if the system complies with local and international standards
- The system should meet not only the client's standards, but also standards from governing bodies (government, software organizations/institutions)
- helps prevent piracy
Software Testing Tools
PHPUnit
- unit testing framework for PHP
- isolates units so that the individual parts can be verified to work correctly
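A sketch of isolating a unit with a PHPUnit test double (the Mailer interface and InvoiceMailer class are hypothetical): the mock stands in for the real mail system so the unit can be tested on its own:

    <?php
    use PHPUnit\Framework\TestCase;

    interface Mailer
    {
        public function send(string $to, string $body): bool;
    }

    class InvoiceMailer
    {
        private Mailer $mailer;
        public function __construct(Mailer $mailer) { $this->mailer = $mailer; }

        public function sendInvoice(string $to): bool
        {
            return $this->mailer->send($to, 'Your invoice is attached.');
        }
    }

    class InvoiceMailerTest extends TestCase
    {
        public function testInvoiceIsHandedToTheMailer(): void
        {
            // The mock replaces the real mail system and records how it is used.
            $mailer = $this->createMock(Mailer::class);
            $mailer->expects($this->once())
                   ->method('send')
                   ->with('client@example.com', $this->anything())
                   ->willReturn(true);

            $this->assertTrue((new InvoiceMailer($mailer))->sendInvoice('client@example.com'));
        }
    }
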
Development Testing
- Goal: avoid errors as much as possible in all aspects (coding, compliance with standards and client demands, security, reliability)
- a fast, cheap, efficient way of testing software
- expose and fix as many defects as possible while complying with the standards
- improve the quality of the system by refining it further based on the client's demands and compliance with regulatory boards
A/B Testing
- aka "random experimentation"
- split testing, wherein there are two versions of the testing phase
- Goal: identify changes between two versions given that one behavior is modified
- see how this difference in behavior can affect the entire system
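A minimal sketch of splitting users between the two versions (the experiment name and function are hypothetical); hashing the user id keeps each user in the same group on every visit while traffic splits roughly 50/50:

    <?php
    // Deterministically assign a user to variant A (control) or B (modified).
    function abVariant(string $userId, string $experiment = 'checkout-button'): string
    {
        $bucket = abs(crc32($experiment . ':' . $userId)) % 2;
        return $bucket === 0 ? 'A' : 'B';
    }

    // Usage: route the user to the matching behavior, then compare a metric
    // (e.g., conversion rate) between the two groups.
    echo abVariant('user-42'), PHP_EOL;   // prints "A" or "B", stable per user
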
Selenium
- open-source integration testing tool
- good for checking browser compatibility
- automates web applications (drives the browser)
- record-and-playback feature
- available as an add-on in some web browsers (like Firefox)
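A hedged sketch of driving a browser with Selenium from PHP via the php-webdriver client (the URL, form fields, and credentials are hypothetical; a Selenium server must be running at the given address):

    <?php
    require 'vendor/autoload.php';

    use Facebook\WebDriver\Remote\DesiredCapabilities;
    use Facebook\WebDriver\Remote\RemoteWebDriver;
    use Facebook\WebDriver\WebDriverBy;

    // Connect to a running Selenium server and start a Firefox session.
    $driver = RemoteWebDriver::create('http://localhost:4444/wd/hub', DesiredCapabilities::firefox());

    $driver->get('https://example.com/login');                            // open the page
    $driver->findElement(WebDriverBy::name('username'))->sendKeys('demo');
    $driver->findElement(WebDriverBy::name('password'))->sendKeys('secret');
    $driver->findElement(WebDriverBy::cssSelector('button[type=submit]'))->click();

    echo $driver->getTitle(), PHP_EOL;                                    // e.g. the post-login page title
    $driver->quit();                                                      // always release the browser
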
Continuous Integration
- merging ALL developer working copies into a shared mainline (Wikipedia)
- modular programming, where each "module" is merged into one big system after it passes all unit tests
- starts when development of the system begins, and continues until the testing phase is complete
- debugging time stays low when unit testing is good
- each committed piece of code is tested immediately; if it passes all unit tests, it is integrated immediately into the mainline
- BUT: requires initial setup time, as well as well-developed test suites
Jenkins
- open-source continuous integration tool written in Java
- Required plugins
- Checkstyle
- Clover PHP
- HTML publisher
- JDepend
- Plot
- PMD
- Violations
- xUnit
- PEAR (the PHP package manager) is required for Jenkins to work with PHP