PROMISE is an annual forum, sponsored by ACM SIGSOFT, for researchers and practitioners to present, discuss and exchange ideas, results, expertise and experiences in the construction and/or application of predictive models and data analytics in software engineering. PROMISE encourages researchers to share their data publicly in order to foster interdisciplinary research between the software engineering and data mining communities, and seeks verifiable and repeatable experiments that are useful in practice.
Please see the FSE 2021 website for venue, registration, and visa information.
Program
To be announced.
Topics of Interest
PROMISE papers can explore any of the following topics, among others.
- prediction of cost, effort, quality, defects, business value;
- quantification and prediction of other intermediate or final properties of interest in software development regarding people, process or product aspects;
- using predictive models and data analytics in different settings, e.g. lean/agile, waterfall, distributed, community-based software development;
- dealing with changing environments in software engineering tasks;
- dealing with multiple-objectives in software engineering tasks;
- using predictive models and software data analytics in policy and decision-making;
- applying and adjusting AI-for-SE tools (including predictive models) to handle ethical non-functional requirements such as inclusiveness, transparency, oversight and accountability, privacy, security, reliability, safety, diversity and fairness;
- model construction, evaluation, sharing and reusability;
- interdisciplinary and novel approaches to predictive modelling and data analytics that contribute to the theoretical body of knowledge in software engineering;
- verifying/refuting/challenging previous theory and results;
- combinations of predictive models and search-based software engineering;
- the effectiveness of human experts vs. automated models in predictions;
- data quality, sharing, and privacy;
- curated data sets made available for the community to use;
- ethical issues related to data collection and sharing;
- tools and frameworks to support researchers and practitioners to collect data and construct models to share/repeat experiments and results.
- replication and repeatability of previous work using predictive modelling and data analytics in software engineering;
- assessment of measurement metrics for reporting the performance of predictive models;
- evaluation of predictive models with industrial collaborators.
- Abstracts due: June 7th, 2021 (extended from May 28th)
- Submissions due: June 10th, 2021 (extended from June 3rd)
- Author notification: July 4th, 2021 (extended from June 28th)
- Camera ready: July 8th, 2021
- Conference Date: August 19th-20th, 2021
Journal Special Section
Following the conference, the authors of the best papers will be invited to submit extended versions of their papers for consideration in a special section of a journal. The details will be announced later.
Call for papers
Technical papers: (10 pages) PROMISE accepts a wide range of papers applying AI tools, such as predictive models and other AI methods, to software engineering. Both positive and negative results are welcome, though negative results should still be based on rigorous research and provide details on lessons learned.
Industrial papers: (2-4 pages) Results, challenges, lessons learned from industrial applications of software analytics.
New idea papers: (2-4 pages) Novel insights or ideas that have yet to be fully tested.
Defect prediction challenge-track papers: (2 pages) For details on this challenge track, see the challenge CFP. In summary, we provide just-in-time defect prediction data, and you can submit any defect prediction approach of your choosing to participate in this challenge. The highest-scoring approach wins. Note that such challenge-track papers would make a suitable term project for university students.
Publication and Attendance
Accepted papers will be published in the main ACM publication program and will be available electronically via ACM Digital Library.
Each accepted paper needs to have one registration at the full conference rate and be presented in person at the conference.
Green Open Access
Similar to other leading SE conferences, PROMISE supports and encourages Green Open Access, i.e., self-archiving. Authors can archive their papers on their personal home page, in an institutional repository of their employer, or at an e-print server such as arXiv (preferred). Also, given that PROMISE papers rely heavily on software data, we would like to draw the attention of authors who leverage data scraped from GitHub to GitHub's Terms of Service, which require that "publications resulting from that research are open access".
We also strongly encourage authors to submit their tools and data to Zenodo, which adheres to FAIR (findable, accessible, interoperable and re-usable) principles and provides DOI versioning.
Submissions
PROMISE 2021 submissions must meet the following criteria:
- be original work, not published or under review elsewhere while being considered;
- conform to the ACM SIG proceedings template;
- not exceed 10 (4) pages for full (short) papers including references;
- be written in English;
- be prepared for double blind review, except for data papers, where double blind is optional (see instructions below);
- be submitted via HotCRP (please choose the paper category appropriately).
Double-Blind Review Process
PROMISE 2021 will employ a double-blind review process, except for data papers, where double blind is optional (see below). This means that the submissions should by no means disclose the identity of the authors. The authors must make every effort to honor the double-blind review process. In particular, the authors’ names must be omitted from the submission and references to their prior work should be in the third person.
If the paper is about a data set or data collection tool, double-blind review is not obligatory. However, authors may choose to opt in to double-blind review by anonymizing their data repository or data collection tool and omitting their authorship information from the paper. If in doubt whether double-blind review is obligatory in your specific case, please contact the PC chairs.
Why double blind?
Double-blind reviewing has now taken off, largely driven by a considerable number of requests from the software engineering community, and we have decided to respond to this call. We are aware that there are certain challenges with the double-blind review process, as detailed by Shepperd (https://empiricalsoftwareengineering.wordpress.com/2017/11/05/why-i-disagree-with-double-blind-reviewing/). However, we hope that the benefits, some of which are discussed by Le Goues, will outweigh those challenges.
- Ayse Tosun, Istanbul Technical University
- Burak Turhan, University of Oulu
- Carmine Gravino, University of Salerno
- Chunyang Chen, Monash University
- Eunjong Choi, Kyoto Institute of Technology
- Fabio Palomba, University of Salerno
- Gema Rodriguez Perez, University of Waterloo
- Hirohisa Aman, Ehime University
- Hironori Washizaki, Waseda University
- Hoa Khanh Dam, University of Wollongong
- Hongyu Zhang, The University of Newcastle
- Koji Toda, Fukuoka Institute of Technology
- Lech Madeyski, Wroclaw University of Science and Technology
- Martin Shepperd, Brunel University London
- Neng Zhang, Sun Yat-sen University
- Osamu Mizuno, Kyoto Institute of Technology
- Tracy Hall, Lancaster University
- Vu Nguyen, University of Science, VNU-HCM / KMS Technology, Inc.
- Weiyi Shang, Concordia University
- Xiaoyuan Xie, Wuhan University
- Yasutaka Kamei, Kyushu University
- Yiming Tang, Concordia University
- Yuming Zhou, Nanjing University
- Zhiyuan Wan, Zhejiang University
- Foutse Khomh, Ecole Polytechnique de Montreal
- Ayse Tosun, Istanbul Technical University
- David Bowes, University of Central Lancashire
- Giuseppe Destefanis, Brunel University
- Tim Menzies, North Carolina State University
- Meiyappan Nagappan, University of Waterloo
- Jean Petric, Lancaster University
- Shane McIntosh, University of Waterloo