ICST 2021
Mon 12 - Fri 16 April 2021

This is the 4th edition of INTUITESTBEDS (International Workshop on User Interface Test Automation and Testing Techniques for Event Based Software). INTUITESTBEDS is a merge of two workshops with very similar goals and topics, INTUITEST – International Workshop on User Interface Test Automation (organized 3 times before the merge), and TESTBEDS – International Workshop on TESting Techniques for event BasED Software (organized 7 times before the merge).

The workshop aims to bring together researchers, practitioners, and tool developers working on topics related to:

  1. automated testing of applications through the user interfaces, including but not limited to graphical user interfaces, user interfaces of mobile devices and applications, and user interfaces of web applications, and
  2. testing of other types of event-driven software, such as network protocols, embedded software, IoT applications and devices, web services and device drivers.

Venue

INTUITESTBEDS will be organized as a workshop of ICST 2021 (IEEE International Conference on Software Testing, Verification and Validation). The conference and all of its workshops will be held remotely.

Workshop site URL

For additional info and past editions please visit the INTUITESTBEDS web site at: https://www.intuitestbeds.org/


Conference Day
Fri 16 Apr

Displayed time zone: Brasilia, Distrito Federal, Brazil

09:00 - 09:15
Starting INTUITESTBEDS (at Paiva)
09:00
15m
Day opening
Welcome and Opening Message
INTUITESTBEDS

09:15 - 10:15
Keynote (INTUITESTBEDS at Paiva)

Invited Keynote: Dr. Mariano Ceccato, University of Verona

Security Testing of Android Apps

Android facilitates app interoperation and integration through an inter-process communication mechanism that allows an app to request a task from another app installed on the same device. However, this interoperation mechanism poses security risks if an app does not implement it properly, such as permission re-delegation vulnerabilities, i.e., a form of privilege escalation where unprivileged malicious apps exploit vulnerable privileged apps to take privileged actions on the attacker's behalf. Static analysis techniques as well as run-time protections have been proposed to detect permission re-delegation vulnerabilities. However, as acknowledged by their authors, most of these approaches suffer from many false positives because they do not discriminate between benign task requests and actual permission re-delegation vulnerabilities. In this keynote, we present a recent approach that aims to fill this gap by bridging static and dynamic analysis with security testing for precise detection of permission re-delegation vulnerabilities. Our approach first groups a large set of benign, non-vulnerable apps into clusters based on the similarity of their functional descriptions. It then generates a permission re-delegation model for each cluster, which characterizes the common permission re-delegation behaviors of the apps in that cluster. Given an app under test, our approach checks whether it exhibits permission re-delegation behaviors that deviate from the model of the cluster it belongs to. If so, it generates test cases that show how the vulnerabilities can be exploited. Empirical validation suggests that this security testing approach outperforms the state of the art in terms of vulnerability detection precision.
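The core idea of the abstract, clustering benign apps by description similarity, deriving a per-cluster model of expected permission re-delegation behaviors, and flagging deviations in an app under test, can be sketched in a few lines of Python. This is a minimal illustrative sketch with hypothetical data and function names, not the authors' implementation: it uses a simple Jaccard-similarity greedy clustering where the real approach works on full functional descriptions and generated test cases.

```python
# Illustrative sketch of the cluster-then-deviate idea from the keynote abstract.
# All data, names, and the similarity threshold are hypothetical.

def jaccard(a: set, b: set) -> float:
    """Similarity of two keyword sets (0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_apps(apps: list[dict], threshold: float = 0.5) -> list[list[dict]]:
    """Greedily group benign apps by description-keyword similarity."""
    clusters: list[list[dict]] = []
    for app in apps:
        best, best_sim = None, 0.0
        for c in clusters:
            centroid = set().union(*(m["keywords"] for m in c))
            sim = jaccard(app["keywords"], centroid)
            if sim >= threshold and sim > best_sim:
                best, best_sim = c, sim
        (clusters.append([app]) if best is None else best.append(app))
    return clusters

def cluster_model(cluster: list[dict]) -> set:
    """Permission re-delegation behaviors observed among benign apps in a cluster."""
    return set().union(*(m["behaviors"] for m in cluster))

def suspicious_behaviors(app_under_test: dict, clusters: list[list[dict]]) -> set:
    """Behaviors of the app under test that deviate from its closest cluster's model."""
    closest = max(clusters, key=lambda c: jaccard(
        app_under_test["keywords"],
        set().union(*(m["keywords"] for m in c))))
    return app_under_test["behaviors"] - cluster_model(closest)

# Hypothetical benign corpus: messaging apps that legitimately re-delegate SEND_SMS.
benign = [
    {"keywords": {"chat", "message", "sms"}, "behaviors": {"SEND_SMS"}},
    {"keywords": {"chat", "message", "contacts"}, "behaviors": {"SEND_SMS"}},
]
clusters = cluster_apps(benign)

# An app under test that also re-delegates CALL_PHONE, unlike its cluster peers.
suspect = {"keywords": {"chat", "message"}, "behaviors": {"SEND_SMS", "CALL_PHONE"}}
print(suspicious_behaviors(suspect, clusters))  # -> {'CALL_PHONE'}
```

In the full approach, each flagged deviation would then drive test-case generation to demonstrate whether the behavior is actually exploitable, rather than being reported directly.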

09:15
60m
Keynote
Security Testing of Android Apps
INTUITESTBEDS
Mariano Ceccato, University of Verona
10:30 - 11:00
Full research paper (INTUITESTBEDS at Paiva)
10:30
30m
Paper
Model-based Automated Testing of Mobile Applications: An Industrial Case Study
INTUITESTBEDS
Stefan Karlsson, ABB AB and Mälardalen University
Pre-print
11:00 - 11:30
Full research paper (INTUITESTBEDS at Paiva)
11:00
30m
Paper
Improving Mobile User Interface Testing with Model Driven Monkey Search
INTUITESTBEDS
11:30 - 11:50
Position paper (INTUITESTBEDS at Paiva)
11:30
20m
Paper
A Metric Framework for the Gamification of Web and Mobile GUI Testing
INTUITESTBEDS
Riccardo Coppola, Politecnico di Torino
11:50 - 12:15
Closing INTUITESTBEDS (at Paiva)
11:50
25m
Day closing
Open Discussion and Closing
INTUITESTBEDS

Call for Papers

We solicit novel papers related to, but not strictly limited to, the following topics in the context of testing user interfaces and other event-based systems:

  1. Modeling and model inference,
  2. Test case generation and execution,
  3. Test oracles,
  4. Coverage, metrics and evaluation,
  5. Data analysis and reporting,
  6. Abstraction and re-usability,
  7. Interoperability and cross-platform testing,
  8. Prioritization and optimization,
  9. Tooling and industrial experiences.

Papers can be of one of the following four types:

  • Full research contributions will be 8 pages in two-column IEEE conference publication format.
  • Position papers describing an important direction for our community will be a maximum of 4 pages in two-column IEEE conference publication format.
  • Testing tool demos will be a maximum of 4 pages in two-column IEEE conference publication format, for researchers who want to present tools relevant to the workshop.
  • Industrial presentations will require the submission of a 2 page overview and 4 example slides.

Each paper in the first three categories (full, position, and demo) will be reviewed by at least three program committee members. Papers should be submitted as PDF files in two-column IEEE conference publication format. Templates for LaTeX and Microsoft Word are available from IEEE; please use the letter format template and the conference option.

Accepted papers will be published as part of ICST workshops proceedings, through the IEEE digital library.

Papers should be submitted through EasyChair.
