Evaluating Symbolic Execution-based Test Tools

Slides for the paper presented at ICST 2015.
by Zoltán Micskei, 16 April 2015

Transcript of Evaluating Symbolic Execution-based Test Tools

Evaluating Symbolic Execution-based Test Tools
Lajos Cseppentő, Zoltán Micskei

Overview: Motivation, Case studies, Approach, Features, Results, Next steps

Motivation
How can the different test input generator tools be compared and evaluated?

Results
Result categories:
  • N/A: no support
  • EX: exception
  • T/M: time/memory limit
  • NC: coverage not maximal
  • C: covered everything
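
If these outcome labels are to be processed programmatically, they map naturally onto a small enum. Below is a minimal sketch; the type name is hypothetical and not part of the SETTE framework.

```java
/** The five result categories used in the evaluation, as listed above. */
public enum ResultCategory {
    NA("no support"),
    EX("exception"),
    TM("time/memory limit"),
    NC("coverage not maximal"),
    C("covered everything");

    private final String description;

    ResultCategory(String description) {
        this.description = description;
    }

    /** Human-readable meaning of the category. */
    public String getDescription() {
        return description;
    }
}
```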

Next steps
  • Other tools (SE, SBST, random...)
  • Extend evaluation (mutation, fault-based)

http://sette-testing.github.io

Features (snippet categories):
  • B: Basic constructs
  • S: Structures
  • O: Objects
  • G: Generics
  • L: Class Library
  • Others
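
As an illustration of the non-basic categories, here is a minimal sketch of what a snippet in the G (Generics) category might look like; the class and method names are hypothetical and not taken from the actual snippet suite.

```java
/**
 * Hypothetical example of a G (Generics) snippet: generating inputs that
 * reach each return statement requires the tool to reason about a generic
 * type parameter and the Comparable contract.
 */
public final class G_GenericsExample {

    private G_GenericsExample() {
        // snippet container class, not meant to be instantiated
    }

    /** Returns the larger of two comparable values, or the first one on a tie. */
    public static <T extends Comparable<T>> T max(T first, T second) {
        if (first == null || second == null) {
            throw new IllegalArgumentException("arguments must not be null");
        }
        return first.compareTo(second) >= 0 ? first : second;
    }
}
```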

SE testing tools: fine-grained feedback?
See: http://mit.bme.hu/~micskeiz/pages/code_based_test_generation.html

Case studies:
  • K. Lakhotia, P. McMinn, and M. Harman, "An empirical investigation into branch coverage for C programs using CUTE and AUSTIN," J. Syst. Softw., vol. 83, no. 12, pp. 2379–2391, Dec. 2010.
  • X. Qu and B. Robinson, "A case study of concolic testing tools and their limitations," in Int. Symp. on Empirical Software Engineering and Measurement, ESEM'11, 2011, pp. 117–126.
  • S. J. Galler and B. K. Aichernig, "Survey on test data generation tools," STTT, vol. 16, no. 6, pp. 727–751, 2014.
  • G. Fraser and A. Arcuri, "Sound empirical evidence in software testing," in Int. Conf. on Software Engineering, ICSE'12, 2012, pp. 178–188.
  • P. Braione et al., "Software testing with code-based test generators: data and lessons learned from a case study with an industrial software component," Software Qual. J., vol. 22, no. 2, pp. 311–333, 2014.

Approach: the SETTE framework
  • Code snippets based on the language reference
  • Languages: C/C++, Java, C#
  • Challenges: path explosion, complex arithmetic, external functions, floats, etc.
  • Workflow: code snippets → test generator tool → test input generation → test inputs → achieved coverage
  • Metric: statement coverage, compared to the coverage of manually selected inputs
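
To make the last point concrete, here is a minimal sketch of what "compared to the coverage of manually selected inputs" can mean in practice: a hypothetical snippet paired with a hand-written JUnit 4 test whose inputs cover every statement, serving as the baseline against which tool-generated inputs are measured. All class, method, and test names are illustrative and not taken from the SETTE framework.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AbsClampSnippetTest {

    /** Hypothetical snippet under test: clamps |x| to an upper bound. */
    static int absClamp(int x, int bound) {
        int v = (x < 0) ? -x : x;     // branch 1: sign handling
        if (v > bound) {              // branch 2: clamping
            v = bound;
        }
        return v;
    }

    /**
     * Manually selected inputs that exercise every statement and branch of
     * absClamp; the statement coverage achieved by tool-generated inputs is
     * compared against the coverage this baseline achieves.
     */
    @Test
    public void manualBaselineCoversAllStatements() {
        assertEquals(3, absClamp(-3, 10));  // negative input, no clamping
        assertEquals(10, absClamp(42, 10)); // positive input, clamped to bound
    }
}
```

A snippet for which the generated inputs reach the same statement coverage as such a baseline would fall into the C (covered everything) category.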

Subcategories of Basic constructs:
  • B1: Primitive types and operators
  • B2: Conditionals
  • B3: Loops
  • B4: Arrays
  • B5: Functions and recursion
  • B6: Exceptions
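
As an illustration, here is a minimal sketch of what snippets in the B3 (Loops) and B6 (Exceptions) subcategories might look like; the class and method names are hypothetical and not taken from the actual snippet suite.

```java
/** Hypothetical examples of B3 (Loops) and B6 (Exceptions) snippets. */
public final class BasicSnippetsExample {

    private BasicSnippetsExample() {
        // snippet container class, not meant to be instantiated
    }

    /**
     * B3: the number of loop iterations depends on the input, so returning 1
     * forces the tool to unroll the loop far enough (n >= 14 here).
     */
    public static int sumUpTo(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return (sum > 100) ? 1 : 0;
    }

    /**
     * B6: covering the catch block requires an input that triggers a
     * division by zero (b == 0).
     */
    public static int safeDivide(int a, int b) {
        try {
            return a / b;
        } catch (ArithmeticException e) {
            return Integer.MIN_VALUE;
        }
    }
}
```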

Experiments

Subjects:
  • Automatic: CATG, jPET, SPF
  • Manual: EvoSuite, Pex

Setup:
  • 30 sec limit for 1 snippet
  • Executions performed 3x

Categorization of the code snippets:
  • Basic: 169
  • Structures: 17
  • Objects: 35
  • Generics: 10
  • Library: 57
  • Others: 12
  • Total: 300
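
To illustrate how such a setup can be enforced, below is a minimal sketch of a runner that starts one tool execution on one snippet with a 30-second limit and repeats it three times. The command line is a placeholder and does not reflect how SETTE actually invokes CATG, jPET, SPF, EvoSuite, or Pex.

```java
import java.util.concurrent.TimeUnit;

public class SnippetRunSketch {

    public static void main(String[] args) throws Exception {
        // Placeholder command; real tool invocations differ per tool.
        String[] command = {"some-test-generator", "--target", "SnippetClass"};

        for (int run = 1; run <= 3; run++) {            // executions performed 3x
            Process process = new ProcessBuilder(command)
                    .inheritIO()                        // show tool output on the console
                    .start();
            boolean finished = process.waitFor(30, TimeUnit.SECONDS); // 30 sec limit
            if (!finished) {
                process.destroyForcibly();              // would map to the T/M result category
                System.out.println("Run " + run + ": timed out");
            } else {
                System.out.println("Run " + run + ": exit code " + process.exitValue());
            }
        }
    }
}
```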

[Results table: outcomes of CATG, jPET, SPF, Pex, and EvoSuite for each snippet category (Basic B1–B6, Structures, Objects, Generics, Library, Others); the cell values are not preserved in this transcript.]