ICST 2021
Mon 12 - Fri 16 April 2021

ICST 2021 invites high quality submissions in all areas of software testing, verification, and validation. Papers for the research track should present novel and original work that pushes the state-of-the-art. Case studies and empirical research papers are also welcome.

Dates

Tue 13 Apr
Times are displayed in time zone: Brasilia, Distrito Federal, Brazil

09:00 - 09:15
09:00
15m
Day opening
Intro & Welcome
Research Papers
09:15 - 10:15
Testing Concurrent and Quantum Systems (Research Papers at Porto de Galinhas)
Chair(s): Marcos Lordello Chaim
09:15
30m
Paper
Synthesizing Multi-threaded Tests from Sequential Traces to Detect Communication Deadlocks
Research Papers
09:45
30m
Paper
Assessing the Effectiveness of Input and Output Coverage Criteria for Testing Quantum Programs
Research Papers
Shaukat Ali (Simula Research Laboratory, Norway), Paolo Arcaini (National Institute of Informatics), Xinyi Wang, Tao Yue (Nanjing University of Aeronautics and Astronautics)
10:45 - 12:00
Keynote 1 (Research Papers at Porto de Galinhas)
Chair(s): Ana Paiva (Faculty of Engineering of the University of Porto)
10:45
75m
Keynote
Keynote Talk - Fuzzing, symbolic execution, and synthesis for testing
Research Papers
Speaker: Corina S. Pasareanu (Carnegie Mellon University Silicon Valley, NASA Ames Research Center)
13:00 - 14:30
Testing and Learning (Research Papers at Porto de Galinhas)
Chair(s): Andrea Stocco (Università della Svizzera italiana (USI))
13:00
30m
Paper
Fail-Safe Execution of Deep Learning based Systems through Uncertainty Monitoring
Research Papers
Michael Weiss (Università della Svizzera italiana (USI)), Paolo Tonella (USI Lugano, Switzerland)
Pre-print
13:30
30m
Paper
A Search-Based Testing Framework for Deep Neural Networks of Source Code Embedding
Research Papers
Maryam Vahdat Pour, Zhuo Li, Lei Ma (University of Alberta), Hadi Hemmati (University of Calgary)
Pre-print
14:00
30m
Paper
Learning-Based Fuzzing of IoT Message Brokers
Research Papers
Bernhard Aichernig (Graz University of Technology), Edi Muskardin, Andrea Pferscher (Institute of Software Technology, Graz University of Technology)
Pre-print Media Attached File Attached
15:00 - 16:30
Models, Testing and Verification (Research Papers at Porto de Galinhas)
Chair(s): Mike Papadakis (University of Luxembourg, Luxembourg)
15:00
30m
Paper
Modeling with Mocking
Research Papers
Jouke Stoel (CWI), Jurgen Vinju (CWI, Netherlands), Tijs van der Storm (CWI & University of Groningen, Netherlands)
Pre-print
15:30
30m
Paper
Uncertainty-aware Exploration in Model-based Testing
Research Papers
Matteo Camilli (Free University of Bozen-Bolzano), Angelo Gargantini (University of Bergamo), Patrizia Scandurra (University of Bergamo, Italy), Catia Trubiani (Gran Sasso Science Institute)
Pre-print
16:00
30m
Paper
Demystifying the Challenges of Formally Specifying API Properties for Runtime Verification
Research Papers
Leopoldo Teixeira (Federal University of Pernambuco), Breno Miranda (Federal University of Pernambuco), Henrique Rebelo (Universidade Federal de Pernambuco), Marcelo d'Amorim (Federal University of Pernambuco)
Pre-print

Wed 14 Apr

09:15 - 10:45
Slicing and Static Analysis (Research Papers at Porto de Galinhas)
Chair(s): Leopoldo Teixeira (Federal University of Pernambuco)
09:15
30m
Paper
Efficiently Finding Data Flow Subsumptions (Distinguished Paper Award)
Research Papers
09:45
30m
Paper
MANDOLINE: Dynamic Slicing of Android Applications with Trace-Based Alias Analysis (Distinguished Paper Award)
Research Papers
Khaled Ahmed, Mieszko Lis, Julia Rubin (University of British Columbia, Canada)
Pre-print
10:15
30m
Paper
Address-Aware Query Caching for Symbolic Execution
Research Papers
David Trabish (Tel Aviv University, Israel), Shachar Itzhaky (Technion), Noam Rinetzky
Pre-print
11:15 - 12:30
Keynote 2 (Research Papers at Porto de Galinhas)
Chair(s): Fabiano Ferrari (Federal University of São Carlos)
11:15
75m
Keynote
Keynote Talk - Some challenges and pitfalls in engineering contemporary software systems under the plague of defects
Research Papers
Speaker: Guilherme Horta Travassos (Federal University of Rio de Janeiro)
14:00 - 15:00
Test Reuse (Research Papers at Porto de Galinhas)
Chair(s): Paolo Tonella (USI Lugano, Switzerland)
14:00
30m
Paper
Self determination: A comprehensive strategy for making automated tests more effective and efficient
Research Papers
Kesina Baral, Jeff Offutt (George Mason University), Fiza Mulla
14:30
30m
Paper
Artefact Relation Graphs for Unit Test Reuse Recommendation
Research Papers
Robert White (University College London, UK), Jens Krinke (University College London), Earl T. Barr (University College London, UK), Federica Sarro (University College London), Chaiyong Ragkhitwetsagul (Mahidol University, Thailand)
15:30 - 17:00
Faults and Fault Injection (Research Papers at Porto de Galinhas)
Chair(s): André T. Endo (Federal University of Technology - Paraná (UTFPR))
15:30
30m
Paper
An Empirical Study of Flaky Tests in Python
Research Papers
Martin Gruber (BMW Group), Stephan Lukasczyk (University of Passau), Florian Kroiß, Gordon Fraser (University of Passau)
Pre-print
16:00
30m
Paper
Fast Kernel Error Propagation Analysis in Virtualized Environments
Research Papers
Nicolas Coppik (TU Darmstadt), Oliver Schwahn (TU Darmstadt), Neeraj Suri
16:30
30m
Paper
Dissecting Strongly Subsuming Second-Order Mutants
Research Papers
João Paulo Diniz (Federal University of Minas Gerais, Brazil), Chu-Pan Wong (Carnegie Mellon University, USA), Christian Kaestner (Carnegie Mellon University), Eduardo Figueiredo (Federal University of Minas Gerais, Brazil)

Thu 15 Apr

09:15 - 10:45
Autonomous and Cyber-Physical Systems (Research Papers at Porto de Galinhas)
Chair(s): Paolo Arcaini (National Institute of Informatics)
09:15
30m
Paper
IoTBox: Sandbox Mining to Prevent Interaction Threats in IoT Systems
Research Papers
Hong Jin Kang, David Lo (Singapore Management University), Sheng Qin Sim
09:45
30m
Paper
Quality Metrics and Oracles for Autonomous Vehicles Testing
Research Papers
Gunel Jahangirova (USI Lugano, Switzerland), Andrea Stocco (Università della Svizzera italiana (USI)), Paolo Tonella (USI Lugano, Switzerland)
Pre-print
10:15
30m
Paper
Anomaly Detection with Digital Twin in Cyber-Physical Systems
Research Papers
Qinghua Xu, Shaukat Ali (Simula Research Laboratory, Norway), Tao Yue (Nanjing University of Aeronautics and Astronautics)
11:15 - 12:30
Keynote 3 (Research Papers at Porto de Galinhas)
Chair(s): Robert Hierons (University of Sheffield)
11:15
75m
Talk
Keynote Talk - Testing Machine Learning-Enabled Systems
Research Papers
Speaker: Lionel Briand (University of Luxembourg and University of Ottawa)
13:00 - 14:00
ICST Steering Committee meeting (Research Papers at Porto de Galinhas)
13:00
60m
Meeting
ICST Steering Committee meeting
Research Papers
14:00 - 15:00
Program Repair (Research Papers at Porto de Galinhas)
Chair(s): Angelo Gargantini (University of Bergamo)
14:00
30m
Paper
Automatic Program Repair as Semantic Suggestions: An Empirical Study
Research Papers
Diogo Campos, André Restivo (LIACC, Universidade do Porto, Porto, Portugal), Hugo Sereno Ferreira (Faculty of Engineering, University of Porto, Portugal), Afonso Ramos (Faculty of Engineering of the University of Porto)
14:30
30m
Paper
Exploring True Test Overfitting in Dynamic Automated Program Repair using Formal Methods
Research Papers
Amirfarhad Nilizadeh (University of Central Florida), Gary Leavens (University of Central Florida), Xuan-Bach D. Le (Singapore Management University, Singapore), Corina S. Pasareanu (Carnegie Mellon University Silicon Valley, NASA Ames Research Center), David Cok (CEA, LIST, Software Safety and Security Laboratory)
15:30 - 17:00
Empirical and User Studies (Research Papers at Porto de Galinhas)
Chair(s): Michael Felderer (University of Innsbruck)
15:30
30m
Paper
A Large-scale Study on API Misuses in the Wild
Research Papers
Xia Li (Kennesaw State University), Jiajun Jiang (Tianjin University, China), Samuel Benton (The University of Texas at Dallas), Yingfei Xiong (Peking University), Lingming Zhang (UIUC)
16:00
30m
Paper
System and Software Testing in Automotive: an Empirical Study on Process Improvement Areas
Research Papers
16:30
30m
Paper
Simulation for Robotics Test Automation: Developer Perspectives
Research Papers
Afsoon Afzal (Carnegie Mellon University), Deborah S. Katz (Carnegie Mellon University), Claire Le Goues (Carnegie Mellon University), Christopher Steven Timperley (Carnegie Mellon University)

Submitting to ICST2021: Q&A

Please note that the Double-Blind Review (DBR) process is not used by all tracks (e.g., the Industry Track). Check the call for papers of each track to see whether DBR is used.

Q: How does one prepare an ICST 2021 submission for double-blind reviewing?

In order to comply, you do not have to make your identity undiscoverable; the double-blind aspect of the review process is not an adversarial identity discovery process. Essentially, the guiding principle should be to maximize the number of people who could plausibly be authors, subject to the constraint that no change is made to any technical details of the work. Therefore, you should ensure that the reviewers are able to read and review your paper without having to know who any of the authors are. Specifically, this involves at least the following four points:

  1. Omit all authors’ names, affiliations, emails and related information from the title page as well as from the paper itself.
  2. Refer to your own work in the third person. You should not change the names of your own tools, approaches or systems, since this would clearly compromise the review process. It breaks the constraint that “no change is made to any technical details of the work”. Instead, refer to the authorship or provenance of tools, approaches or systems in the third person, so that it is credible that another author could have written your paper.
  3. Do not rely on supplementary material (your web site, GitHub repository, YouTube channel, a companion technical report or thesis) in the paper. Supplementary information might result in revealing author identities.
  4. Anonymize project and grant names and numbers or those of funding agencies or countries as well as any acknowledgements of support to the work you report on.

We further expect you to follow the excellent advice on anonymization from ACM.

When anonymizing your email, affiliations, name, etc., try to refrain from being overly creative or “funny” by coming up with your own, anonymized versions. For emails preferably use author1@anon.org, author2@anon.org, etc., since initial DBR screening will be done by an automated tool.

Q: I previously published an earlier version of this work in a venue that does not have double-blind. What should I do about acknowledging that previous work?

Double-blind does not and cannot mean that it is impossible for the referees to discover the identity of the author. However, we require authors to help ensure that author identity does not play a role in the reviewing process. Therefore, we ask that author identity not be revealed in the materials you submit for review.

If the work you are submitting for review has previously been published in a non-peer-reviewed venue (e.g., arXiv or a departmental tech report), there is no need to cite it, because unrefereed work is not formally part of the scientific literature. If the previous work is published in a peer-reviewed venue, then it should be cited, but in the third person, so that it is not clear whether the earlier work was done by the authors of the submitted paper or by some other, unknown set of authors. However, if citing in the third person would still make it easy to identify the authors, please err on the side of caution by also anonymizing the papers being extended (both when cited and in the reference list).

Q: Our submission makes use of work from a PhD/master’s thesis dissertation/report which has been published. Citing the dissertation might compromise anonymity. What should we do?

It is perfectly OK to publish work from a PhD/master’s thesis, and there is no need to cite it in the version submitted for review because prior dissertation publication does not compromise novelty. In the final (post-review, camera ready) version of the paper, please do cite the dissertation to acknowledge its contribution, but in the refereed version of the paper that you submit, please refrain from citing the dissertation.

However, you need not worry about whether the dissertation has appeared: your job is only to help the committee review your work without awareness of author identity, not to make it impossible for them to discover the authors. The referees will be trying hard not to discover the authors’ identity, so they are unlikely to search the web to check whether a dissertation related to this work exists.

Q: I am submitting to the industry track. Should I double-blind my submission?

No, you should not. Since industry papers typically rely heavily on the industrial or practical context in which the work was carried out, it would be too much to ask for this context to be anonymized.

Q: I want to include a link to an online appendix in my submission. How should I do this?

Ideally, the information in the appendix should be anonymized and uploaded to an anonymizing service such as figshare, or to a new GitHub (or other) sharing account that is not associated with your real name. These sites will give you an anonymous link. If the paper is accepted, you can later turn that link into a non-anonymized one, or host the appendix on your own site and change the link in the camera-ready version of the paper. An alternative is to omit the link from the submission; it should normally be possible to review a paper based only on the material in the paper itself.

To upload material to figshare, create an account, add a new item, use the keywords “Supplemental Materials”, and fill in the other item-specific data. Then select “Make file(s) confidential”, select “Generate private link”, copy the generated URL, and click “Save changes”. Your file(s) can now be accessed anonymously at that URL, which you can include in your ICST submission.

Q: What if we want to cite some unpublished work of our own (as motivation, for example)?

If the unpublished paper is an earlier version of the paper you want to submit to ICST and is currently under review, then you have to wait until that earlier version has completed its review process before you can build on it with further submissions (submitting both concurrently would be considered double submission and violates the ACM plagiarism policy and procedures). Otherwise, if the unpublished work is not an earlier version of the proposed ICST submission, then you should simply make it available, on a website for example, and cite it in the third person to preserve anonymity, as you do with your other works.

Q: Can I disseminate a non-blinded version of my submitted work by discussing it with colleagues, giving talks, publishing it on arXiv, etc.?

You can discuss and present your work that is under submission at small meetings (e.g., job talks, visits to research labs, a Dagstuhl or Shonan meeting), but you should avoid broadly advertising it in a way that reaches the reviewers even if they are not searching for it. For example, you are allowed to put your submission on your home page and present your work at small professional meetings. However, you should not discuss your work with members of the program committee, publicize your work on mailing lists or media that are widely shared and can reach the program committee, or post your work on arXiv or a similar site just before or after submitting to the conference.

Call for Papers

ICST 2021 (https://icst2021.icmc.usp.br/) invites high quality submissions in all areas of software testing, verification, and validation. Papers for the research track should present novel and original work that advances the state-of-the-art. Case studies and empirical research papers are also welcome.

Topics of Interest

Topics of interest include, but are not limited to the following:

  • Fuzz testing
  • Manual testing practices and techniques
  • Search based software testing
  • Security testing
  • Model based testing
  • Test automation
  • Static analysis and symbolic execution
  • Formal verification and model checking
  • Software reliability
  • Testability and design
  • Testing and development processes
  • Testing education
  • Testing in specific domains, such as mobile, web, embedded, concurrent, distributed, cloud, GUI and real-time systems
  • Testing for learning-enabled software, including deep learning
  • Testing/debugging tools
  • Theory of software testing
  • Empirical studies
  • Experience reports

Each submission will be reviewed by at least three members of the ICST Program Committee.

Papers that have a strong industrial/practical component and focus more on impact rather than (technical) novelty are encouraged to consider the industry track instead.

Submission Format

Full Research papers as well as Industry papers must conform to the two-column IEEE conference publication format and must not exceed 10 pages, including all text, figures, tables, and appendices; two additional pages containing only references are permitted. Submissions must conform to the IEEE Conference Proceedings Formatting Guidelines (please use the letter-format template and the conference option). The ICST 2021 research track only accepts full research papers; short papers are not accepted to the research track.

The submission must also comply with the ACM plagiarism policy and procedures. In particular, it must not have been published elsewhere and must not be under review elsewhere while under review for ICST. The submission must also comply with the IEEE Policy on Authorship.

Lastly, the ICST 2021 Research papers track will employ a double-blind review process. Thus, no submission may reveal its authors’ identities. The authors must make every effort to honor the double-blind review process. In particular, the authors’ names must be omitted from the submission and references to their prior work should be in the third person. Further advice, guidance and explanation about the double-blind review process can be found in the Q&A page.

Submissions to the Research Papers Track that meet the above requirements can be made via the Research Papers Track submission site by the submission deadline.

Any submission that does not comply with the above requirements may be rejected by the PC Chairs without further review.

If a submission is accepted, at least one author of the paper is required to attend the conference and present the paper in person for the paper to be published in the ICST 2021 conference proceedings.

New from 2020. Submissions must supply all information that is needed to replicate the results, and therefore are expected to include or point to a replication package with the necessary software, data, and instructions. Reviewers may consult these packages to resolve open issues. There can be good reasons for the absence of a replication package, such as confidential code and/or data, the research being mostly qualitative, or the paper being fully self-contained. If a paper does not come with a replication package, authors should comment on its absence in the submission data; reviewers will take such comments into account.

Accepted Papers

  • Fast Kernel Error Propagation Analysis in Virtualized Environments.
    Nicolas Coppik, Oliver Schwahn, Neeraj Suri

  • Assessing the Effectiveness of Input and Output Coverage Criteria for Testing Quantum Programs.
    Shaukat Ali, Paolo Arcaini, Xinyi Wang, Tao Yue

  • Anomaly Detection with Digital Twin in Cyber-Physical Systems.
    Qinghua Xu, Shaukat Ali, Tao Yue

  • Address-Aware Query Caching for Symbolic Execution.
    David Trabish, Shachar Itzhaky, Noam Rinetzky

  • A Large-scale Study on API Misuses in the Wild.
    Xia Li, Jiajun Jiang, Samuel Benton, Yingfei Xiong, Lingming Zhang

  • Modeling with Mocking.
    Jouke Stoel, Jurgen Vinju, Tijs van der Storm

  • Fail-Safe Execution of Deep Learning based Systems through Uncertainty Monitoring.
    Michael Weiss, Paolo Tonella

  • System and Software Testing in Automotive: an Empirical Study on Process Improvement Areas.
    Giuseppe Lami, Fabio Falcini

  • Self determination: A comprehensive strategy for making automated tests more effective and efficient.
    Kesina Baral, Jeff Offutt, Fiza Mulla

  • An Empirical Study of Flaky Tests in Python.
    Martin Gruber, Stephan Lukasczyk, Florian Kroiß, Gordon Fraser

  • Automatic Program Repair as Semantic Suggestions: An Empirical Study.
    Diogo Campos, André Restivo, Hugo Sereno Ferreira, Afonso Ramos

  • Learning-Based Fuzzing of IoT Message Brokers.
    Bernhard K. Aichernig, Edi Muskardin, Andrea Pferscher

  • Artefact Relation Graphs for Unit Test Reuse Recommendation.
    Robert White, Jens Krinke, Earl Barr, Federica Sarro, Chaiyong Ragkhitwetsagul

  • Efficiently Finding Data Flow Subsumptions.
    Marcos Lordello Chaim, Kesina Baral, Jeff Offutt, Mario Concilio Neto, Roberto Araujo

  • Dissecting Strongly Subsuming Second-Order Mutants.
    João P. Diniz, Chu-Pan Wong, Christian Kästner, Eduardo Figueiredo

  • MANDOLINE: Dynamic Slicing of Android Applications with Trace-Based Alias Analysis.
    Khaled Ahmed, Mieszko Lis, Julia Rubin

  • Simulation for Robotics Test Automation: Developer Perspectives.
    Afsoon Afzal, Deborah S. Katz, Claire Le Goues, Christopher Steven Timperley

  • Uncertainty-aware Exploration in Model-based Testing.
    Matteo Camilli, Angelo Gargantini, Patrizia Scandurra, Catia Trubiani

  • Exploring True Test Overfitting in Dynamic Automated Program Repair using Formal Methods.
    Amirfarhad Nilizadeh, Gary T. Leavens, Xuan-Bach D. Le, Corina Pasareanu, David Cok

  • IoTBox: Sandbox Mining to Prevent Interaction Threats in IoT Systems.
    Hong Jin Kang, David Lo, Sheng Qin Sim

  • Demystifying the Challenges of Formally Specifying API Properties for Runtime Verification.
    Leopoldo Teixeira, Breno Miranda, Henrique Rebêlo, Marcelo d’Amorim

  • A Search-Based Testing Framework for Deep Neural Networks of Source Code Embedding.
    Maryam Vahdat Pour, Zhuo Li, Lei Ma, Hadi Hemmati

  • Quality Metrics and Oracles for Autonomous Vehicles Testing.
    Gunel Jahangirova, Andrea Stocco, Paolo Tonella

  • Synthesizing Multi-threaded Tests from Sequential Traces to Detect Communication Deadlocks.
    Dhriti Khanna, Rahul Purandare, Subodh Sharma

Distinguished Papers:

  • “MANDOLINE: Dynamic Slicing of Android Applications with Trace-Based Alias Analysis”, by Khaled Ahmed, Mieszko Lis, and Julia Rubin
  • “Efficiently Finding Data Flow Subsumptions”, by Marcos Lordello Chaim, Kesina Baral, Jeff Offutt, Mario Concilio Neto, and Roberto Araujo

Most Influential Papers:

  • “Assessing Oracle Quality with Checked Coverage”, by David Schuler and Andreas Zeller
  • “Experiences of System-Level Model-Based GUI Testing of an Android Application”, by Tomi Takala, Mika Katara, and Julian Harty