PROMISE is an annual forum for researchers and practitioners to present, discuss and exchange ideas, results, expertise and experiences in the construction and/or application of predictive models and data analytics in software engineering. PROMISE encourages researchers to publicly share their data in order to foster interdisciplinary research between the software engineering and data mining communities, and it seeks verifiable and repeatable experiments that are useful in practice.

Please see the FSE 2022 website for venue, registration, and visa information.


Keynote by Dr. Bram Adams, Queen's University, Canada

Release Engineering in the AI World: How can Analytics Help?

Abstract: Over the last decade, the practices of continuous delivery and deployment have taken the software engineering world by storm. While applications used to be released in an ad hoc manner, breakthroughs in (amongst others) continuous integration, infrastructure-as-code and log monitoring have turned the reliable release of cloud applications into a manageable achievement for most companies. However, the advent of AI models seems to have caused a "reset", pushing companies to reinvent the way in which they release high-quality products that now rely not only on source code, but also on data and models. This talk will focus on the key ingredients of successful pre-AI release engineering practices, then connect those to newly emerging, post-AI release engineering practices. After this talk, the audience will understand the major challenges software companies face in releasing their AI product multiple times a day, as well as the opportunities for predictive models and data analytics.
Biography: Bram Adams is an associate professor at Queen's University. He obtained his PhD at the GH-SEL lab at Ghent University (Belgium). His research interests include software release engineering, mining software repositories, and the role of human affect in software engineering. His work has been published at premier software engineering venues such as EMSE, TSE, ICSE, FSE, MSR and ICSME, and received the 2021 MSR Foundational Contribution Award. In addition to co-organizing the RELENG International Workshop on Release Engineering from 2013 to 2015 (and the 1st/2nd IEEE Software Special Issue on Release Engineering), he co-organized the SEMLA, SoHEAL, PLATE, ACP4IS, MUD and MISS workshops, and the MSR Vision 2020 Summer School. He has been PC co-chair of SCAM 2013, SANER 2015, ICSME 2016 and MSR 2019, and will be "software analytics" area co-chair for ICSE 2023.

Accepted Papers

  • Fahad Al Debeyan, Tracy Hall, and David Bowes
    Improving the Performance of Code Vulnerability Prediction using Abstract Syntax Tree Information
  • Dhasarathy Parthasarathy, Cecilia Ekelin, Anjali Karri, Jiapeng Sun, and Panagiotis Moraitis
    Measuring Design Compliance using Neural Language Models: An Automotive Case Study
  • Peter Bludau and Alexander Pretschner
    Feature Sets in Just-in-Time Defect Prediction: An Empirical Evaluation
  • Tugce Coskun, Rusen Halepmollasi, Khadija Hanifi, Ramin Fadaei Fouladi, Pinar Comak De Cnudde, and Ayse Tosun
    Profiling Developers to Predict Vulnerable Code Changes
  • Khaled Al-Sabbagh, Miroslaw Staron, and Regina Hebig
    Predicting Build Outcomes in Continuous Integration using Textual Analysis of Source Code Commits
  • Hammad Ahmad, Colton Holoday, Ian Bertram, Kevin Angstadt, Zohreh Sharafi, and Westley Weimer
    LOGI: An Empirical Model of Heat-Induced Disk Drive Data Loss and Its Implications for Data Recovery
  • Burak Yetistiren, Isik Ozsoy, and Eray Tuzun
    Assessing the Quality of GitHub Copilot's Code Generation
  • Jediael Mendoza, Jason Mycroft, Lyam Milbury, Nafiseh Kahani, and Jason Jaskolka
    On the Effectiveness of Data Balancing Techniques in the Context of ML-Based Test Case Prioritization
  • Mazen Mohamad, Jan-Philipp Steghöfer, Alexander Åström, and Riccardo Scandariato
    Identifying Security-Related Requirements in Regulatory Documents Based on Cross-Project Classification
  • Prantik Parashar Sarmah and Sridhar Chimalakonda
    API + Code = Better Code Summary? Insights from an Exploratory Study

Topics of Interest

PROMISE papers can explore any of the following topics, among others.

Application-oriented papers:

  • prediction of cost, effort, quality, defects, business value;
  • quantification and prediction of other intermediate or final properties of interest in software development regarding people, process or product aspects;
  • using predictive models and data analytics in different settings, e.g. lean/agile, waterfall, distributed, community-based software development;
  • dealing with changing environments in software engineering tasks;
  • dealing with multiple-objectives in software engineering tasks;
  • using predictive models and software data analytics in policy and decision-making.

Ethically-aligned papers:

  • Can we apply and adjust our AI-for-SE tools (including predictive models) to handle ethical non-functional requirements such as inclusiveness, transparency, oversight and accountability, privacy, security, reliability, safety, diversity and fairness?

Theory-oriented papers:

  • model construction, evaluation, sharing and reusability;
  • interdisciplinary and novel approaches to predictive modelling and data analytics that contribute to the theoretical body of knowledge in software engineering;
  • verifying/refuting/challenging previous theory and results;
  • combinations of predictive models and search-based software engineering;
  • the effectiveness of human experts vs. automated models in predictions.

Data-oriented papers:

  • data quality, sharing, and privacy;
  • curated data sets made available for the community to use;
  • ethical issues related to data collection and sharing;
  • metrics;
  • tools and frameworks to support researchers and practitioners to collect data and construct models to share/repeat experiments and results.

Validity-oriented papers:

  • replication and repeatability of previous work using predictive modelling and data analytics in software engineering;
  • assessment of measurement metrics for reporting the performance of predictive models;
  • evaluation of predictive models with industrial collaborators.


Important Dates

  • Abstracts due: July 4th, 2022
  • Submissions due: July 8th, 2022
  • Author notification: July 29th, 2022
  • Camera ready: September 5th, 2022
  • Conference Date: November 17th, 2022


Journal Special Section

Following the conference, the authors of the best papers will be invited to submit extended versions of their papers for consideration in a special section in the journal Empirical Software Engineering (EMSE).

EMSE encourages open science and reproducible research for this special section. Please see our Open Science Initiative for further information.


Call for papers

Technical papers: (10 pages) PROMISE accepts a wide range of papers in which AI tools, such as predictive models and other AI methods, have been applied to SE. Both positive and negative results are welcome, though negative results should still be based on rigorous research and provide details on lessons learned.

Industrial papers: (2-4 pages) Results, challenges, lessons learned from industrial applications of software analytics.

New idea papers: (2-4 pages) Novel insights or ideas that have yet to be fully tested.

Tutorials/Technical Briefings: (2+1 pages) (*new this year*) Tutorials and short technical briefings on trending topics related to software engineering (duration: 60/90/120 minutes). The proposal should be no longer than 2 pages, plus one page for brief speaker information and biographies. The tutorial Call for Papers and submission guidelines can be found here.

Publication and Attendance

Accepted papers will be published within the ACM International Conference Proceedings Series and will be available electronically via the ACM Digital Library.

Each accepted paper needs to have one registration at the full conference rate and be presented in person at the conference.

Green Open Access

Similar to other leading SE conferences, PROMISE supports and encourages Green Open Access, i.e., self-archiving. Authors can archive their papers on their personal home page, in an institutional repository of their employer, or at an e-print server such as arXiv (preferred). Also, given that PROMISE papers heavily rely on software data, we would like to draw the attention of authors who leverage data scraped from GitHub to GitHub's Terms of Service, which require that "publications resulting from that research are open access".

We also strongly encourage authors to submit their tools and data to Zenodo, which adheres to FAIR (findable, accessible, interoperable and re-usable) principles and provides DOI versioning.


Submissions

PROMISE 2022 submissions must meet the following criteria:
  • be original work, not published or under review elsewhere while being considered;
  • conform to the ACM SIG proceedings template;
  • not exceed 10 (4) pages for technical (industrial, new-ideas) papers including references;
  • be written in English;
  • be prepared for double blind review
    • Exception: for data-oriented papers, authors may elect not to use double blind by placing a footnote on page 1 saying "Offered for single-blind review".
  • be submitted via HotCRP;
  • on submission, please choose the paper category appropriately, i.e., technical (main track, 10 pages max); industrial (4 pages max); and new idea papers (4 pages max).
To satisfy the double blind requirement submissions must meet the following criteria:
  • no author names and affiliations in the body and metadata of the submitted paper;
  • self-citations are written in the third person;
  • no references to the authors' personal, lab, or university website;
  • no references to personal accounts on GitHub, bitbucket, Google Drive, etc.
Submissions will be peer reviewed by at least three experts from the international program committee. Submissions will be evaluated on the basis of their originality, importance of contribution, soundness, evaluation, quality and consistency of presentation, and appropriate comparison to related work.

Programme Committee

  • Hirohisa Aman, Ehime University
  • Sousuke Amasaki, Okayama Prefectural University
  • Gemma Catolino, Tilburg University - Jheronimus Academy of Data Science
  • Jinfu Chen, Huawei Technologies Canada
  • Zadia Codabux, University of Saskatchewan
  • Eleni Constantinou, Eindhoven University of Technology
  • Carmine Gravino, University of Salerno
  • Tracy Hall, Lancaster University
  • Steffen Herbold, TU Clausthal, Germany
  • Yasutaka Kamei, Kyushu University
  • Maxime Lamothe, Polytechnique Montreal
  • Gregorio Robles, Universidad Rey Juan Carlos
  • Mohammed Sayagh, ETS - Quebec University
  • Martin Shepperd, Gothenburg University/Brunel University
  • Yiming Tang, Concordia University
  • Melina Vidoni, Australian National University, CECS School of Computing
  • Zhiyuan Wan, Zhejiang University
  • Lili Wei, The Hong Kong University of Science and Technology
  • Xiaoyuan Xie, Wuhan University
  • Ahmed Zerouali, Vrije Universiteit Brussels
  • Hongyu Zhang, The University of Newcastle

Steering Committee

General Chair

PC Co-Chairs

Tutorial Co-Chairs

Publicity Co-Chairs

Publication Chair