PhD Presentation - Mandinga - v2
by Federico Toledo
on 3 March 2014

Transcript of PhD Presentation - Mandinga - v2

Test Execution
Models and Metamodels
MANDINGA
MANDINGA: Methodology for Automation Testing Integrating Functional and Non-Functional Aspects

Conclusion and Future Work
Systematic Literature Review on performance testing of Information Systems (with model-driven approaches)
Goals achieved
Framework to test Information Systems
Functional and performance testing
Model-driven approach
Contributions
Goal 1

Automate the test case design of Information Systems
Future Work
Continue the development and experimentation with the Mandinga methodology
Introduction
Motivation
Context
Goals
Goal 1

Automate the design of
functional test cases
for Information Systems
Goal 2

Reduce cost and increase flexibility of
performance testing

Goal 3

Integrated view: functional and non-functional aspects
PhD Student:
Federico Toledo Rodríguez

Supervisors:
Macario Polo Usaola
Beatriz Pérez Lamancha

Thank you!!
Knowledge transferred to the industry
Implementation on a commercial tool
Used in real projects
Standard extension mechanism: UML Profiles
Stereotypes
Tagged values
Restrictions
Extension of metamodels
UML-TP
UDMP
PMM
Only UML-TP considers non-functional aspects

UML-TP's current support for performance testing is insufficient
Information System Models
UML model of the Information System Under Test
Data Model
Graphic User Interface Model
Model Driven Approaches
MARTE
Modeling and Analysis of Real-Time and Embedded Systems
UML Profile
Standard supported by OMG

PMM
Properties metamodel
Proposed by CNR
Functional Testing
Specific for performance testing (it has its own metamodel)
Performance Testing
From
Non-Functional Properties to Performance Test Model
Non-functional requirements (PMM)
considered into the test model (UML-TP)
Generation of Executable
Performance Test Cases
From Selenium to OpenSTA
Integrated approach
One unified vision (model) of the system requirements and its verification, considering functional and non-functional (performance and availability) aspects.
Research methodology
List of publications
Publications per topic
PhD Thesis
March 7th, 2014
Ciudad Real, Spain

Industrially applicable results
Data Model
There is no standard

UDMP: UML Data Modeling Profile
Proposed by IBM
Entity-relationship in UML
Graphic User Interface Model
The standard is more complex than we needed
We designed a simpler version named "GUIMP: Graphic User Interface Model Profile"
Business Rules
OCL
Integrated to UML
Standard by OMG
Functional specification of the requirements of the system under test
State of Art and Practice
Systematic Literature Review on information systems testing
To manage the models:
UML SDK in Eclipse
Information System Model construction
Test Model Derivation
Test Code Generation
Eclipse Plug-in
Final degree project (2012-2014)
Jesús Núñez
Automated testing tools
Functional testing services
Performance testing services
Focus:
Information Systems with databases
Mobile and web
Non-functional aspects
Focus on performance and availability
Standard by OMG

Defines the main concepts related to testing
Model driven approach?
Test Architecture
Test Behavior
Test Data
Timing
UML Testing Profile
Criteria to cover system code considering SQL sentences (white box approach)

Generation of
input test data
database states
from the definition of restrictions on test data
Taken from the database
Provided by the user
None of them
Verifies whether the system under test correctly deals with the database structure
"Negative" testing
Only a few provide a model-driven approach
Benchmark generation: comparing web servers, hardware, etc.

Performance tests generation from requirements

Models to predict performance and Software Performance Engineering (SPE)
Models to represent performance tests?
Framework
Functional Testing
Performance Testing
PMM
Relational Web
Problem to Solve
Case Study
Bancard (bancard.com.py)
Paraguayan financial company
Platform migration
High risk in the release of the new version
4 modules, more than 1500 entities
Eclipse Plug-in integrating M2M and M2T
Chapter 8 (pp. 199-207)
Examples:
Under the expected
workload
Average response time of "Add product" should be less than 10 seconds
95% of the executions of "create invoice" should pass
Academy
Industry
State of the Art. Information Systems Model (pp. 78-86)
State of the Art. Information Systems Model (pp. 44-46)
State of the Art. Functional Testing Model (pp. 46-48)
State of the Art. Functional Testing (pp. 65-67)
State of the Art. Performance Testing (pp. 63-65)
State of the Art. Performance Testing (pp. 50-56)
State of the Art. Performance Testing (pp. 56-60)
What is interesting to test?
Some examples
Enter data through the user interface not respecting database restrictions
duplicated keys
null when not allowed
invalid data types
invalid foreign keys
Relationship boundaries
an invoice without products
an invoice with lots of products
Business rules
trying to enter a negative price
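The negative test situations above can be derived mechanically from the table definition. A minimal sketch, assuming hypothetical `Column`/`Table` structures and invented situation names (not the thesis's actual metamodel):

```python
# Hypothetical sketch: derive negative test situations from a table's
# constraints, in the spirit of the examples above. The Column/Table
# structures and situation names are illustrative, not from the thesis.
from dataclasses import dataclass, field

@dataclass
class Column:
    name: str
    sql_type: str
    nullable: bool = True
    primary_key: bool = False
    foreign_key: bool = False

@dataclass
class Table:
    name: str
    columns: list = field(default_factory=list)

def negative_situations(table):
    """Yield (column, situation) pairs worth exercising through the GUI."""
    for col in table.columns:
        if col.primary_key:
            yield (col.name, "duplicated key")
        if not col.nullable:
            yield (col.name, "null when not allowed")
        if col.foreign_key:
            yield (col.name, "invalid foreign key")
        yield (col.name, "invalid data type for %s" % col.sql_type)

invoice = Table("invoice", [
    Column("id", "INT", nullable=False, primary_key=True),
    Column("client_id", "INT", nullable=False, foreign_key=True),
    Column("price", "DECIMAL", nullable=False),
])
for col, situation in negative_situations(invoice):
    print(col, "->", situation)
```

Each generated pair becomes one GUI-level test: enter the offending value and check that the system rejects it.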
Test case design based on the database structure
Test case execution through the user interface

We are not testing the database!
CRUD: create, read, update, delete

Correspondence between entities and pages for their CRUD
Functional testing automation
Performance Testing
Alarcos
Abstracta
GeneXus team tests GeneXus with GXtest Generator
Analysis of Errors
Inconsistencies between database and logic layer
Non-existent resources
Non-editable fields in the update page
Derived attributes could be edited
Entities invoked in an unexpected way
Some test situations also helped to detect errors in the data model

ATL to generate GUI Model
Structure
Navigation

Correspondence between entities and the pages offering CRUD operations
Reverse engineering with Relational Web
Export to RSA format
UML is not the same standard for everyone
It was necessary to adapt the XMI
Import from RSA
Test Patterns
ATL transformations
Looking for patterns to test
Export to RSA
Import from RSA
Acceleo scripts
Model to text
Test case behavior
Test data
Adaptation layer (test components)
MIM: Model Implementation Matching
GUI structure
GUI Navigation
PMM
UML-TP Extension
Result:
Test model considering workload and non-functional validations
Case Studies
New: extended for JMeter and for Mobile apps
One line of Selenium equivalent to 200 lines in OpenSTA
Test Model
Coverage criteria proposed by Andrews et al.
CRUD (create, read, update, delete) criterion
Testing the life cycle of each entity
C · R · [U · R]* · D · R
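The pattern C · R · [U · R]* · D · R can be expanded into concrete operation sequences by choosing how many update/read rounds to exercise. A minimal illustrative sketch (function and names are assumptions, not the thesis's generator):

```python
# Illustrative expansion of the CRUD criterion's pattern C·R·[U·R]*·D·R
# into a concrete operation sequence for one entity.
def crud_sequence(entity, updates=1):
    """Return the sequence C R (U R)^updates D R for the given entity."""
    seq = [("create", entity), ("read", entity)]
    for _ in range(updates):
        seq += [("update", entity), ("read", entity)]
    seq += [("delete", entity), ("read", entity)]  # final read expects absence
    return seq

print(crud_sequence("invoice", updates=2))
```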
Valid and invalid test data (equivalence class partitioning criterion)
Data Types
Database restrictions
Business rules restrictions
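Equivalence class partitioning splits each column's domain into valid and invalid classes from its type and restrictions. A hedged sketch, with invented class names and a simplified type set:

```python
# Hedged sketch of equivalence class partitioning for one column:
# valid and invalid classes derived from its type and restrictions.
# Class names and the type set are illustrative assumptions.
def equivalence_classes(sql_type, nullable, max_len=None):
    valid, invalid = [], []
    if sql_type == "VARCHAR":
        valid.append("non-empty string within length")
        if max_len is not None:
            valid.append("string of exactly max length")
            invalid.append("string longer than max length")
    elif sql_type in ("INT", "DECIMAL"):
        valid.append("in-range number")
        invalid.append("non-numeric text")
    if nullable:
        valid.append("null")
    else:
        invalid.append("null")
    return valid, invalid

valid, invalid = equivalence_classes("VARCHAR", nullable=False, max_len=30)
print(valid)
print(invalid)
```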
ATL transformations looking for
substructures in the data model
considering GUI elements
business rules
Interesting test situations for CRUD operations
Test criteria for class diagrams
AEM Criterion: Association-end multiplicity.
CA Criterion: Class attribute.
Useful to test relationships with foreign keys
Considering the columns of each table
Test Components
Data-driven testing
Same behavior for different platforms
Oracles
Creation with invalid data
The test should verify that the instance was not created
The user should be notified
Creation with valid data
The test should verify that the instance was created with appropriated data
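Both oracles fit naturally into a data-driven test: each row carries the input plus the expected outcome. A minimal sketch, where `FakeSystem` is a stand-in for the adaptation layer driving the real GUI (everything here is illustrative):

```python
# Data-driven sketch of the two oracles: with valid data the instance
# must exist afterwards; with invalid data it must not, and the user
# must be notified. FakeSystem stands in for the GUI adaptation layer.
class FakeSystem:
    def __init__(self):
        self.rows, self.last_message = {}, None
    def create(self, key, price):
        if price < 0:                      # business rule: no negative prices
            self.last_message = "error: invalid price"
            return
        self.rows[key] = price
        self.last_message = "created"

cases = [  # (key, price, should_exist)
    ("p1", 10.0, True),
    ("p2", -5.0, False),
]
system = FakeSystem()
for key, price, should_exist in cases:
    system.create(key, price)
    assert (key in system.rows) == should_exist   # oracle 1: resulting state
    assert system.last_message is not None        # oracle 2: user feedback
print("all oracles passed")
```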
Test Data
Test Behavior
Although it was designed for a specific purpose within this thesis, it can be used for any combination of model-to-model and model-to-text transformations
Coverage criteria for data generation based on database structures
Coverage criteria for PMM models for performance testing
Goal 2

Reduce cost and increase flexibility of performance testing
Goal 3

Integrated view: functional and non-functional aspects
Applied in the industry with the implementation on GXtest
Smart Monkey Testing: Monkop http://monkop.com/
Integrating Mandinga approach
UML-TP RFP
Model-driven approaches
The model is executable
Also
Initial set of regression test cases at zero cost
UML and extensions
Standard transformation languages
Eclipse platform

Generated Functional Tests
Entities life cycle (CRUD) based on data model
System test, through the user interface
Well-known execution environment
Parametrized test cases (data-driven testing)
Extensible to different platforms

Generated Performance Tests
Easier to understand and maintain
Faster and more flexible methodology
Lessons learned
Annex I
Chapter 3, 4, 5, 7
Chapters 3, 6, 7
Chapters 3, 5, 6, 7
Coverage Criteria
Proof of concepts
Use in real projects
Adaptation
Analysis and refinement
Action-research
Reduce testing costs
Improve time to market
If you develop with models you have to test with models
Model-to-model transformation languages
Standard: QVT (Query/View/Transformation)
De facto standard: ATL
Model-to-text transformation languages
Standard: MOFM2T
Pragmatic implementation: Acceleo
We need a language to specify tests
Events and Operators
It is possible to relate events with operators
After, before, sequence, between, etc.

Create Invoice =
Payment AFTER
Register Client BEFORE
(SEQ (Add Product, 10))
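The operator expression above can be read as composing event streams. A hedged sketch of one possible flattening semantics for SEQ, BEFORE, and AFTER (an assumption for illustration, not the thesis's formal definition):

```python
# Hedged sketch of the event operators: SEQ repeats an event, BEFORE and
# AFTER order two event streams. This flattening semantics is an
# illustrative assumption, not the formal definition from the thesis.
def SEQ(event, n):
    return [event] * n

def BEFORE(first, then):
    return list(first) + list(then)

def AFTER(later, earlier):
    return list(earlier) + list(later)

# Create Invoice = Payment AFTER (Register Client BEFORE SEQ(Add Product, 10))
create_invoice = AFTER(["payment"],
                       BEFORE(["register client"],
                              SEQ("add product", 10)))
print(create_invoice)
```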
State of the Art. Information Systems Model (pp. 39-44)
Chapter 4 (pp. 77-96)
Chapter 5 (pp. 99-126)
Chapter 5 (pp. 126-133)
Chapter 5 (pp. 103-120)
Andrews et al., "Test adequacy criteria for UML design models." Software Testing, Verification and Reliability 13.2 (2003): 95-127.
Chapter 5 (pp. 123-126)
Chapter 5 (pp. 101-103)
Chapter 7 (pp. 188-191)
Chapter 7 (pp. 179-187)
Chapter 6 (pp. 137-157)
Chapter 6 (pp. 157-164)
Chapter 6 (pp. 149-151)
With our framework, the effort per script was reduced by a factor of 2 to 5
Traditional approach: 6 to 10 hours per script
Our approach: 1 to 5 hours per script
Flexibility
Maintenance in traditional approach: rebuild the script from scratch
Our approach: adjust Selenium script, regenerate

Chapter 7 (pp. 191-195)
Chapter 8 (p. 200)
Chapter 8 (pp. 204-206)
adapted
Information to derive test cases
Architecture
Behavior
Test Data
Automation of test execution
Improves test case execution time
It is easy to repeat it:
in different platforms
with different data
at different times
Techniques to find errors at reduced cost
Typically associated with a coverage criterion

Looking for the inputs and situations most likely to reveal bugs
test cases
+
test data
Easy to repeat
Easy to control
More measurable
Cheaper
workload
equivalent workload
Criteria
PKV: Primary Key Violation
FKV: Foreign Key Violation
URV: Unique Restriction Violation
NNV: Not-Null Violation
DTV: Data Types Violation
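Coverage against these criteria can be measured by checking which violation kinds the executed test situations actually exercised. A small illustrative sketch (the tagging scheme is an assumption):

```python
# Illustrative sketch: measure coverage of the violation criteria
# (PKV, FKV, URV, NNV, DTV) over a set of executed test situations.
# The situation tags are invented for the example.
CRITERIA = {"PKV", "FKV", "URV", "NNV", "DTV"}

def coverage(executed_tags):
    """Return (covered ratio, sorted list of criteria still uncovered)."""
    covered = CRITERIA & set(executed_tags)
    return len(covered) / len(CRITERIA), sorted(CRITERIA - covered)

ratio, missing = coverage(["PKV", "NNV", "DTV"])
print("coverage: %.0f%%, missing: %s" % (ratio * 100, missing))
```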
Coverage criteria defined on the PMM operators