Evaluating Symbolic Execution-based Test Tools
Lajos Cseppentő, Zoltán Micskei
How can the different test input generator tools be compared and evaluated?
coverage not maximal
Other tools (SE,
SE testing tools
K. Lakhotia, P. McMinn, and M. Harman, “An empirical investigation into branch coverage for C programs using CUTE and AUSTIN,” J. Syst. Softw., vol. 83, no. 12, pp. 2379–2391, Dec. 2010.
X. Qu and B. Robinson, “A case study of concolic testing tools and their limitations,” in Int. Symp. on Empirical Software Engineering and Measurement, ser. ESEM’11, 2011, pp. 117–126.
S. J. Galler and B. K. Aichernig, “Survey on test data generation tools,” STTT, vol. 16, no. 6, pp. 727–751, 2014.
G. Fraser and A. Arcuri, “Sound empirical evidence in software testing,” in Int. Conf. on Software Engineering, ICSE’12, 2012, pp. 178–188.
P. Braione et al., “Software testing with code-based test generators: data and lessons learned from a case study with an industrial software component,” Software Qual J, vol. 22, no. 2, pp. 311–333, 2014.
Test generator tool
Test input generation
Compared to coverage of manually selected inputs
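The comparison above — coverage achieved by tool-generated inputs measured against coverage of manually selected inputs — can be sketched as follows. The snippet under test, the branch-counting scheme, and both input sets are hypothetical illustrations, not taken from the study.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: compare how many branches of a small snippet are covered by
// a manually selected input set versus a (hypothetical) generated one.
public class CoverageComparison {
    // Snippet under test with 3 branches, identified by return value.
    static int classify(int x) {
        if (x > 10) return 1;
        if (x < -10) return -1;
        return 0;
    }

    // Which branches does a given input set exercise?
    static Set<Integer> branchesCovered(List<Integer> inputs) {
        Set<Integer> covered = new HashSet<>();
        for (int x : inputs) covered.add(classify(x));
        return covered;
    }

    public static void main(String[] args) {
        List<Integer> manual = List.of(100, -100, 0);  // covers all 3 branches
        List<Integer> generated = List.of(11, 0);      // misses the x < -10 branch
        System.out.println("manual branches:    " + branchesCovered(manual).size());
        System.out.println("generated branches: " + branchesCovered(generated).size());
    }
}
```

Here the generated set reaches 2 of 3 branches, i.e. its coverage is not maximal relative to the manual baseline.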
B1 Primitive types and operators
B5 Functions and recursion
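A benchmark category such as B1 (primitive types and operators) can be illustrated with a small snippet like the one below. The method and its names are hypothetical, written in the spirit of the category rather than copied from the benchmark; a test input generator would need to produce argument pairs reaching each branch.

```java
// Illustrative B1-style snippet: branching over primitive int
// arithmetic. A generator must find (x, y) pairs driving x * y
// above 100, below -100, and into the in-between range.
public class B1Snippet {
    public static int classify(int x, int y) {
        int p = x * y;
        if (p > 100) {
            return 1;
        } else if (p < -100) {
            return -1;
        }
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(classify(20, 10));   // prints 1
        System.out.println(classify(-20, 10));  // prints -1
        System.out.println(classify(1, 1));     // prints 0
    }
}
```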
30-second limit per snippet
executions repeated 3 times
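The execution setup above (a 30-second cap per snippet, with every run repeated 3 times) could be driven by a small harness like this one. The command line is a placeholder, since the transcript does not give the actual tool invocations.

```java
import java.util.concurrent.TimeUnit;

// Sketch of the measurement setup: launch each tool run as a
// subprocess, cap it at 30 seconds, and repeat 3 times. The
// "echo" command stands in for a real tool invocation.
public class RunWithTimeout {
    public static void main(String[] args) throws Exception {
        for (int run = 1; run <= 3; run++) {
            ProcessBuilder pb = new ProcessBuilder("echo", "tool-run");
            pb.inheritIO();
            Process p = pb.start();
            boolean finished = p.waitFor(30, TimeUnit.SECONDS);
            if (!finished) {
                p.destroyForcibly(); // snippet exceeded the 30 s budget
            }
            System.out.println("run " + run + " finished=" + finished);
        }
    }
}
```

Repeating each execution guards against nondeterminism in the tools and the timeout mechanism.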