PROMISE is an annual forum for researchers and practitioners to present, discuss, and exchange ideas, results, expertise, and experiences in the construction and/or application of predictive models and data analytics in software engineering. PROMISE encourages researchers to share their data publicly in order to foster interdisciplinary research between the software engineering and data mining communities, and seeks verifiable and repeatable experiments that are useful in practice.

Please see the ESEIW website for venue, registration, and visa information.


Keynote by Dr. Christian Bird, Microsoft Research, USA

Lessons and Insights from Tech Transfers at Microsoft





Abstract: As a basic industrial research lab, Microsoft Research expects its members both to publish basic research and to put it into practice. Unfortunately, moving from a validated technique or model in a published paper to a state where that same technique is being used by and providing value to software development projects on a regular basis in a consistent and timely fashion is a time-consuming, fraught, and difficult task. We have attempted to make this transition, which we call "Tech Transfer", many times in the empirical software engineering group (ESE) at Microsoft Research. Much like research in general, there have been both triumphs and setbacks, but each experience has provided valuable insight and informed our next effort. This talk shares our experiences from successes and failures and provides lessons and guidance that can be used by others trying to transfer their research into practice in both industrial and academic contexts.

Biography: Christian Bird is a principal researcher in the Empirical Software Engineering group at Microsoft Research. He is primarily interested in the relationship between software design, social dynamics, and processes in large development projects, and in developing tools and techniques to help software teams. He uses both quantitative and qualitative methods to understand and improve areas including code review, software engineering management, and software productivity. He has published in the top software engineering venues, has received multiple distinguished paper and test of time awards, and was recognized with the ACM SIGSOFT Early Career Researcher Award. Christian received his B.S. from Brigham Young University and his Ph.D. from the University of California, Davis.


Program

8:30-10:00 Opening (Chairs: Foutse Khomh and Jean Petric)
10:00-10:30 Coffee Break

10:30-12:00 Modelling and Data (Chair: Leandro Minku)
12:00-13:30 Lunch Break

13:30-15:00 Replication and Quality (Chair: Audris Mockus)
15:00-15:30 Coffee Break

15:30-16:50 Effort Estimation and Code Review (Chair: Sousuke Amasaki)
16:50-17:10 Closing (Chairs: Foutse Khomh, Jean Petric and Leandro Minku)

Topics of Interest

Application oriented:

  • prediction of cost, effort, quality, defects, business value;
  • quantification and prediction of other intermediate or final properties of interest in software development regarding people, process or product aspects;
  • using predictive models and data analytics in different settings, e.g. lean/agile, waterfall, distributed, community-based software development;
  • dealing with changing environments in software engineering tasks;
  • dealing with multiple-objectives in software engineering tasks;
  • using predictive models and software data analytics in policy and decision-making.

Theory oriented:

  • model construction, evaluation, sharing and reusability;
  • interdisciplinary and novel approaches to predictive modelling and data analytics that contribute to the theoretical body of knowledge in software engineering;
  • verifying/refuting/challenging previous theory and results;
  • combinations of predictive models and search-based software engineering;
  • the effectiveness of human experts vs. automated models in predictions.

Data oriented:

  • data quality, sharing, and privacy;
  • curated data sets made available for the community to use;
  • ethical issues related to data collection and sharing;
  • metrics;
  • tools and frameworks to support researchers and practitioners to collect data and construct models to share/repeat experiments and results.

Validity oriented:

  • replication and repeatability of previous work using predictive modelling and data analytics in software engineering;
  • assessment of measurement metrics for reporting the performance of predictive models;
  • evaluation of predictive models with industrial collaborators.

 

Important Dates

  • Abstracts due: June 17th, 2019 (extended)
  • Submissions due: June 17th, 2019 (extended)
  • Author notification: July 7th, 2019
  • Camera ready: July 21st, 2019
  • Conference Date: September 18th, 2019

 

Journal Special Section

  • Following the conference, the authors of the best papers will be invited to submit extended versions of their papers for consideration in a special section of the Information and Software Technology journal by Elsevier.

Kinds of Papers

We invite theoretical and empirical studies on the topics of interest (e.g. case studies, meta-analyses, replications, experiments, simulations, surveys, etc.), as well as industrial experience reports detailing the application of predictive modelling and data analytics in industrial settings. Both positive and negative results are welcome, though negative results should still be based on rigorous research and provide details on lessons learned. Authors are encouraged, but not required, to make the data used in their analyses available online. Submissions can be of the following kinds:

  • Full papers (oral presentation): papers with novel and complete results.
  • Short papers (oral presentation): papers disseminating ongoing work and preliminary results for early feedback, or vision papers about the future of predictive modelling and data analytics in software engineering.

Note about GitHub research: Given that PROMISE papers rely heavily on software data, we would like to draw the attention of authors who leverage data scraped from GitHub to GitHub's Terms of Service, which require that “publications resulting from that research are open access”. Like other leading SE conferences, PROMISE supports and encourages Green Open Access, i.e., self-archiving. Authors can archive their papers on their personal home page, in an institutional repository of their employer, or on an e-print server such as arXiv (preferred).

 

Submissions

PROMISE 2019 submissions must meet the following criteria:
  • be original work, neither published elsewhere nor under review elsewhere while under consideration for PROMISE 2019;
  • conform to the ACM SIG proceedings template;
  • not exceed 10 (4) pages for full (short) papers including references;
  • be written in English;
  • be prepared for double blind review, except for data papers, where double blind is optional (see instructions below);
  • be submitted via EasyChair (please choose the paper category appropriately).
Submissions will be peer reviewed by at least three experts from the international program committee. Submissions will be evaluated on the basis of their originality, importance of contribution, soundness, evaluation, quality and consistency of presentation, and appropriate comparison to related work. Accepted papers will be published within the ACM International Conference Proceedings Series and will be available electronically via the ACM Digital Library. Each accepted paper requires one registration at the full conference rate and must be presented in person at the conference.


Double-Blind Review Process

PROMISE 2019 will employ a double-blind review process, except for data papers, where double-blind review is optional (see below). This means that submissions must not disclose the identity of the authors in any way, and the authors must make every effort to honor the double-blind review process. In particular, the authors' names must be omitted from the submission, and references to their prior work should be in the third person.

If the paper is about a data set or data collection tool, double-blind review is not obligatory. However, authors may opt in to double-blind review by anonymizing their data repository or data collection tool and omitting authorship information from the paper. If in doubt about whether double-blind review is required in your specific case, please contact the PC chairs.


Why double blind?

Double-blind reviewing has gained considerable momentum, driven largely by requests from the software engineering community, and we have decided to respond to this call. We are aware that the double-blind review process poses certain challenges, as detailed by Shepperd [1]. However, we hope that its benefits, some of which are discussed by Le Goues [2], will outweigh those challenges.

[1] https://empiricalsoftwareengineering.wordpress.com/2017/11/05/why-i-disagree-with-double-blind-reviewing/
[2] https://www.cs.cmu.edu/~clegoues/double-blind.html

Programme Committee

  • David Bowes, Lancaster University
  • Ricardo Britto, Blekinge Institute of Technology
  • Hoa Khanh Dam, University of Wollongong
  • Giuseppe Destefanis, Brunel University London
  • Carmine Gravino, University of Salerno
  • Tracy Hall, Lancaster University
  • Rachel Harrison, Oxford Brookes University
  • Yasutaka Kamei, Kyushu University
  • Lech Madeyski, Wroclaw University of Science and Technology
  • Shane McIntosh, McGill University
  • Tim Menzies, North Carolina State University
  • Jaechang Nam, Handong Global University
  • Maleknaz Nayebi, Polytechnique Montreal
  • Fabio Palomba, University of Zurich
  • Daniel Rodriguez, University of Alcalá
  • Martin Shepperd, Brunel University London
  • Emad Shihab, Concordia University
  • Yuan Tian, Singapore Management University
  • Ayse Tosun, Istanbul Technical University
  • Burak Turhan, Monash University
  • Hironori Washizaki, Waseda University
  • Xin Xia, Monash University
  • Yuming Zhou, Nanjing University

Steering Committee

General Chair

PC Co-Chairs

Publication Chair

Publicity Chair