ICST 2021
Mon 12 - Fri 16 April 2021

4th IEEE Workshop on NEXt level of Test Automation

Find full CFP here: http://www.testomatproject.eu/nexta2021

April 16, 2021, virtual. Co-located with the IEEE International Conference on Software Testing, Verification and Validation (ICST 2021).

Theme and Goals

NEXTA’21 is the fourth edition of the IEEE Workshop on the NEXt level of Test Automation and is highly relevant for both research and industry; accordingly, NEXTA aims to attract both academic researchers and industry practitioners. Test automation has been an acknowledged software engineering best practice for years. However, the topic involves more than the repeated execution of test cases that often comes to mind first. Simply running test cases using a unit testing framework is no longer enough for test automation to keep up with the ever-shorter release cycles driven by continuous deployment and by technological innovations such as microservices and DevOps pipelines. Test automation now needs to rise to the next level by going beyond mere test execution. The NEXTA workshop will explore how to advance test automation to further contribute to software quality in the context of tomorrow’s rapid release cycles. Take-aways for industry practitioners and academic researchers will encompass test case generation, automated test result analysis, test suite assessment and maintenance, and infrastructure for the future of test automation.

Topics of Interest

NEXTA solicits contributions targeting all aspects of test automation, from initial test design to automated verdict analysis. Topics of interest include, but are not limited to, the following:

  • Test execution automation
  • Test case generation
  • Automatic test design generation
  • Analytics, learning, and big data in relation to test automation
  • Automated management aspects of testing: progress, reporting, planning, etc.
  • Visualization of testing
  • Evolution of test automation
  • Test suite architecture and infrastructure
  • Test environments, simulation, and other contextual issues for automated testing
  • Test tools, frameworks, and general support for test automation
  • Testing in agile and continuous integration contexts, and testing within DevOps
  • Orchestration of tests
  • Metrics, benchmarks, and estimation for any type of test automation
  • Any test technology relying on test automation
  • Process improvements and assessments related to test automation
  • Test automation maturity and experience reports on test automation
  • Automatic retrieval of test data and other test preparation aspects
  • Maintainability, monitoring, and refactoring of automated test suites
  • Training and education on automated testing
  • Automated testing for product lines and high-variability systems
  • Test automation patterns
  • Automated test oracles

Accepted Papers for NEXTA 2021 Workshop

Friday 16 April (half-day workshop; detailed program below)

“QRTest: Automatic Query Reformulation for Information Retrieval Based Regression Test Case Prioritization” by Maral Azizi, East Carolina University, US

“AI-based Test Automation: A Grey Literature Analysis” by Filippo Ricca, DIBRIS, Università di Genova, Italy; Alessandro Marchetto, Independent Researcher, Italy and Andrea Stocco, Università della Svizzera italiana (USI), Switzerland

“An Empirical Study of Parallelizing Test Execution Using CUDA Unified Memory and OpenMP GPU Offloading” by Taghreed Bagies and Ali Jannesari, both from Iowa State University, US

“Using Advanced Code Analysis for Boosting Unit Test Creation” by Miroslaw Zielinski, Parasoft Corporation, Poland and Rix Groenboom, Parasoft Corporation, Netherlands


Fri 16 Apr

Displayed time zone: Brasilia, Distrito Federal, Brazil

09:00 - 09:10
Welcome to NEXTA (NEXTA at Tamandaré)
09:00
10m
Talk
Welcome to NEXTA
NEXTA
Sigrid Eldh (Ericsson AB), Sahar Tahvili (Ericsson AB), Vahid Garousi (Queen's University Belfast), Michael Felderer (University of Innsbruck), Kristian Sandahl (Linköping University)
09:10 - 09:35
Active Machine Learning to Test Autonomous Driving (NEXTA at Tamandaré)

Keynote speaker: Prof. Karl Meinke, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Sweden.

Abstract: Autonomous driving represents a significant challenge to all software quality assurance techniques, including testing. Generative machine learning (ML) techniques, including active ML, have considerable potential to generate high-quality synthetic test data that can complement and improve on existing techniques such as hardware-in-the-loop and road testing.

Session Chair: Michael Felderer

09:10
25m
Keynote
Active Machine Learning to Test Autonomous Driving
NEXTA
Karl Meinke (KTH Royal Institute of Technology)
09:35 - 10:00
AI-based Test Automation: A Grey Literature Analysis (NEXTA at Tamandaré)

Authors: Filippo Ricca, DIBRIS, Università di Genova, Italy; Alessandro Marchetto, Independent Researcher, Italy; and Andrea Stocco, Università della Svizzera italiana (USI), Switzerland.

Abstract: This paper provides the results of a survey of the grey literature concerning the use of artificial intelligence to improve test automation practices. We surveyed more than 1,200 sources of grey literature (e.g., blogs, white papers, user manuals, StackOverflow posts) looking for highlights by professionals on how AI is adopted to aid the development and evolution of test code. Ultimately, we filtered 136 relevant documents from which we extracted a taxonomy of problems that AI aims to tackle, along with a taxonomy of AI-enabled solutions to such problems. Manual code development and automated test generation are the most cited problem and solution, respectively. The paper concludes by distilling the six most prevalent tools on the market, along with think-aloud reflections about the current and future status of artificial intelligence for test automation.

Session Chair: Michael Felderer

09:35
25m
Paper
AI-based Test Automation: A Grey Literature Analysis
NEXTA
Filippo Ricca (Università di Genova)
10:00 - 10:25
Flaky Mutants: Another Concern for Mutation Testing (NEXTA at Tamandaré)

Authors: Sten Vercammen (University of Antwerp, Belgium), Serge Demeyer (University of Antwerp, Belgium), Markus Borg (RISE Research Institutes of Sweden), and Robbe Claessens (University of Antwerp, Belgium).

Abstract: Mutation testing is the state-of-the-art technique for assessing the fault detection capability of a test suite. An underlying assumption, rarely mentioned, is that the system under test behaves completely deterministically. This is rarely the case: since each mutant changes the code, it is highly likely that some mutants introduce non-determinism. We call these flaky mutants. As they are only detected intermittently, they cause unreliable mutation testing scores, waste developer time on possibly unfruitful tests, and risk a loss of confidence in the mutation testing technique. We want to raise awareness of this issue, as we found that flaky mutants are easy to create and occur in real projects. We also share some thoughts on how to tackle this issue.

Session Chair: Kristian Sandahl

(An illustrative sketch of a flaky mutant follows this entry.)

10:00
25m
Talk
Flaky Mutants: Another Concern for Mutation Testing
NEXTA
Sten Vercammen (University of Antwerp, Belgium), Serge Demeyer (University of Antwerp, Belgium), Markus Borg (RISE Research Institutes of Sweden)
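
To make the notion of a flaky mutant concrete, here is a minimal, hypothetical sketch (not taken from the paper): a mutation operator deletes a sorted() call, so the test verdict for the mutant depends on Python's randomized set iteration order and differs between runs.

    # Hypothetical flaky mutant, for illustration only (not the authors' code).
    def format_tags(tags: set) -> str:
        return ",".join(sorted(tags))      # original code: deterministic order

    def format_tags_mutant(tags: set) -> str:
        return ",".join(tags)              # mutant: sorted() deleted by a mutation operator

    def test_format_tags(fmt) -> bool:
        return fmt({"b", "a", "c"}) == "a,b,c"

    if __name__ == "__main__":
        print(test_format_tags(format_tags))         # always True
        # Because Python randomizes string hashing per process (PYTHONHASHSEED),
        # set iteration order varies across runs: the mutant is killed in some
        # runs and survives in others, i.e., it is flaky.
        print(test_format_tags(format_tags_mutant))  # True or False, run-dependent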
10:25 - 10:50
Using Advanced Code Analysis for Boosting Unit Test Creation (NEXTA at Tamandaré)

Authors: Miroslaw Zielinski, Parasoft Corporation, Poland, and Rix Groenboom, Parasoft Corporation, Netherlands.

Abstract: Unit testing is a popular testing technique, widespread in both enterprise IT and embedded/safety-critical software. In enterprise IT, unit testing is considered good practice and is frequently followed as an element of test-driven development. In the safety-critical world, many standards, such as ISO 26262 and IEC 61508, either directly or indirectly mandate unit testing. Regardless of the application area, unit testing is very time-consuming, and teams are looking for strategies to optimize their efforts. This is especially true in the safety-critical space, where demonstrating test coverage is required for certification. In this presentation, we share the results of our research on using advanced code analysis algorithms to augment the process of unit test creation. The discussion includes the automatic discovery of inputs and of responses from mocked components that maximize code coverage, as well as the automated generation of test cases.

Session Chair: Kristian Sandahl

(A sketch of the kind of mock-based generated unit test follows this entry.)

10:25
25m
Paper
Using Advanced Code Analysis for Boosting Unit Test Creation
NEXTA
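
As a rough illustration of the kind of test such a tool aims to produce (a hypothetical example, not Parasoft's actual output), the sketch below stubs an external dependency with a mock and uses an input/response pair chosen to drive execution into each branch:

    # Hypothetical code under test with an external dependency.
    from unittest.mock import Mock

    def check_limit(amount: float, rate_service) -> str:
        rate = rate_service.current_rate()     # call to a mocked component
        return "REJECT" if amount * rate > 1000.0 else "ACCEPT"

    # Generated-style tests: the stubbed response (2.0) and the inputs
    # (501.0 / 500.0) are picked so that both branches are covered.
    def test_check_limit_reject_branch():
        rate_service = Mock()
        rate_service.current_rate.return_value = 2.0
        assert check_limit(501.0, rate_service) == "REJECT"

    def test_check_limit_accept_branch():
        rate_service = Mock()
        rate_service.current_rate.return_value = 2.0
        assert check_limit(500.0, rate_service) == "ACCEPT"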
11:00 - 11:25
QRTest: Automatic Query Reformulation for Information Retrieval Based Regression Test Case Prioritization (NEXTA at Tamandaré)

Author: Maral Azizi, East Carolina University, US.

Abstract: The most effective regression testing algorithms have long running times and often require dynamic or static code analysis, making them unsuitable for the modern software development environment, where the rate of software delivery can be less than a minute. More recently, researchers have developed information retrieval-based (IR-based) techniques for prioritizing tests, such that tests more similar to the code changes have a higher likelihood of finding bugs. The vast majority of these techniques are based on standard term similarity calculation, which can be imprecise. One reason for the low accuracy of these techniques is that the original query is often short and therefore does not return the relevant test cases. In such cases, the query needs reformulation. The current state of research lacks methods to increase the quality of the query in the regression testing domain. Our research aims at addressing this problem, and we conjecture that enhancing the quality of the queries can improve the performance of IR-based regression test case prioritization (RTP). Our empirical evaluation with six open source programs shows that our approach improves the accuracy of IR-based RTP and increases the regression fault detection rate compared to common prioritization techniques.

Session Chair: Sahar Tahvili

(An illustrative sketch of IR-based prioritization with query reformulation follows this entry.)

11:00
25m
Paper
QRTest: Automatic Query Reformulation for Information Retrieval Based Regression Test Case Prioritization
NEXTA
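
For readers unfamiliar with IR-based RTP, the sketch below shows the general idea under simple assumptions: toy data, plain term-frequency cosine similarity, and pseudo-relevance feedback as the reformulation step. QRTest's actual reformulation strategy may differ.

    # Toy IR-based test prioritization with one query-reformulation step.
    # Data and scoring are illustrative, not from the paper.
    from collections import Counter
    import math

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # Test name -> textual content (identifiers, comments, string literals, ...).
    tests = {
        "test_login": "login user password session authenticate",
        "test_cart":  "cart item price checkout total",
        "test_token": "token refresh session expiry authenticate",
    }

    def rank(query: Counter) -> list:
        return sorted(tests, reverse=True,
                      key=lambda t: cosine(query, Counter(tests[t].split())))

    query = Counter("session authenticate".split())  # short query from a code change
    first_pass = rank(query)

    # Reformulation (pseudo-relevance feedback): expand the short query with
    # terms from the top-ranked test, then re-rank the suite.
    query.update(tests[first_pass[0]].split())
    print(rank(query))  # prioritized execution order under the expanded query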
11:25 - 11:50
An Empirical Study of Parallelizing Test Execution Using CUDA Unified Memory and OpenMP GPU Offloading (NEXTA at Tamandaré)

Authors: Taghreed Bagies and Ali Jannesari, Iowa State University, US.

Abstract: The execution of software testing is costly and time-consuming. To accelerate test execution, researchers have applied several methods to run the testing in parallel. One method of parallelizing test execution is to use a GPU to distribute test case inputs among several threads running in parallel. In this paper, we investigate three programming models (CUDA Unified Memory, CUDA Non-Unified Memory, and OpenMP GPU offloading) to parallelize test execution and discuss the challenges of using these programming models. We use eleven benchmarks and parallelize their test suites with these models. We evaluate their performance in terms of execution time, analyze the results, and report the limitations of using these programming models.

Session Chair: Vahid Garousi

(A rough sketch of GPU-distributed test inputs follows this entry.)

11:25
25m
Paper
An Empirical Study of Parallelizing Test Execution Using CUDA Unified Memory and OpenMP GPU Offloading
NEXTA
Taghreed Bagies (King Abdulaziz University)
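
As a rough sketch of the GPU idea (hypothetical code, not the paper's benchmarks; written in Python with numba rather than CUDA C++, and requiring numba plus a CUDA-capable GPU), each GPU thread checks one test input against its expected output:

    # Each GPU thread executes the (inlined) function under test on one input
    # and records a pass/fail flag. Requires numba and a CUDA-capable GPU.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def run_tests(inputs, expected, results):
        i = cuda.grid(1)                     # global thread index
        if i < inputs.size:
            actual = inputs[i] * 2 + 1       # hypothetical function under test
            results[i] = 1 if actual == expected[i] else 0

    inputs = np.arange(10_000, dtype=np.int32)   # test case inputs
    expected = inputs * 2 + 1                    # oracle values
    results = np.zeros_like(inputs)

    threads_per_block = 256
    blocks = (inputs.size + threads_per_block - 1) // threads_per_block
    run_tests[blocks, threads_per_block](inputs, expected, results)
    print("failing test inputs:", int((results == 0).sum()))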
11:50 - 12:20
Advancing Test Automation Using Artificial Intelligence (AI) (NEXTA at Tamandaré)

Keynote speaker: Jeremy S. Bradbury, PhD, Associate Professor of Computer Science and Associate Dean of the School of Graduate and Postdoctoral Studies, Ontario Tech University.

Abstract: In recent years, software testing automation has been enhanced through the use of Artificial Intelligence (AI) techniques, including genetic algorithms, machine learning, and deep learning. The use cases for AI in test automation range from providing recommendations to the complete automation of software testing activities. To demonstrate the breadth of application, I will present several recent examples of how AI can be leveraged to support automated testing in rapid release cycles. Furthermore, I will discuss my own successes and failures in using AI to advance test automation, as well as share the lessons I have learned.

Session Chair: Sahar Tahvili

11:50
30m
Keynote
Advancing Test Automation Using Artificial Intelligence (AI)
NEXTA
Jeremy Bradbury (Ontario Tech University)
12:20 - 12:30
Closing (NEXTA at Tamandaré)
12:20
10m
Talk
Closing
NEXTA

About NEXTA 2021

Authors should submit a PDF version of their paper through the NEXTA 2021 paper submission site (EasyChair). All accepted papers will be part of the ICST joint workshop proceedings, published in the IEEE Digital Library.

Questions? Use the NEXTA contact form.