PROMISE'19 CALL FOR PAPERS
PROMISE is an annual forum for researchers and practitioners to present, discuss and exchange ideas, results, expertise and experiences in the construction and/or application of predictive models and data analytics in software engineering. PROMISE encourages researchers to publicly share their data in order to foster interdisciplinary research between the software engineering and data mining communities, and seeks verifiable and repeatable experiments that are useful in practice.
Please see the ESEIW website for venue, registration, and visa information.
Keynote by Dr. Christian Bird, Microsoft Research, USA
Lessons and Insights from Tech Transfers at Microsoft
Abstract: As a basic industrial research lab, Microsoft Research expects its members to both publish basic research and put it into practice. Unfortunately, moving from a validated technique or model in a published paper to a state where that same technique is being used by and providing value to software development projects on a regular basis, in a consistent and timely fashion, is a time-consuming, fraught, and difficult task. We have attempted to make this transition, which we call "Tech Transfer", many times in the Empirical Software Engineering (ESE) group at Microsoft Research. Much like research in general, there have been both triumphs and setbacks, but each experience has provided valuable insight and informed our next effort. This talk shares our experiences from successes and failures and provides lessons and guidance that can be used by others trying to transfer their research into practice in both industrial and academic contexts.
Biography: Christian Bird is a principal researcher in the Empirical Software Engineering group at Microsoft Research. He is primarily interested in the relationship between software design, social dynamics, and processes in large development projects, and in developing tools and techniques to help software teams. He uses both quantitative and qualitative methods to understand and improve areas including code review, software engineering management, and software productivity. He has published in the top software engineering venues, has received multiple distinguished paper and test-of-time awards, and was recognized with the ACM SIGSOFT Early Career Award. Christian received his B.S. from Brigham Young University and his Ph.D. from the University of California, Davis.
Program
8:30-10:00 Opening (Chairs: Foutse Khomh and Jean Petric)
- 8:30-9:00 Opening
- 9:00-10:00 Keynote
10:30-12:00 Modelling and Data (Chair: Leandro Minku)
- 10:30-10:50 Tapajit Dey, Yuxing Ma and Audris Mockus. Patterns of Effort Contribution and Demand and User Classification based on Participation Patterns in NPM Ecosystem.
- 10:50-11:10 Renan Vieira, Antônio da Silva, Lincoln Rocha and João Paulo Gomes. From Reports to Bug-Fix Commits: A 10 Years Dataset of Bug-Fixing Activity from 55 Apache's Open Source Projects.
- 11:10-11:30 Valentina Lenarduzzi, Nyyti Saarimäki and Davide Taibi. The Technical Debt Dataset.
- 11:30-11:40 Sousuke Amasaki, Hirohisa Aman and Tomoyuki Yokogawa. Applying Cross Project Defect Prediction Approaches to Cross-Company Effort Estimation.
- 11:40-12:00 Discussion
13:30-15:00 Replication and Quality (Chair: Audris Mockus)
- 13:30-13:50 Hadi Jahanshahi, Dhanya Jothimani, Ayse Bener and Mucahit Cevik. Does chronology matter in JIT defect prediction? A Partial Replication Study.
- 13:50-14:10 An Nguyen, Bach Le and Vu Nguyen. Prioritizing automated user interface tests using reinforcement learning.
- 14:10-14:30 Sravya Polisetty, Andriy Miranskyy and Ayse Bener. On Usefulness of the Deep-Learning-Based Bug Localization Models to Practitioners.
- 14:30-14:40 Idan Amit and Dror Feitelson. Which Refactoring Reduces Bug Rate?
- 14:40-15:00 Discussion
15:30-16:50 Effort Estimation and Code Review (Chair: Sousuke Amasaki)
- 15:30-15:50 Song Wang, Chetan Bansal, Nachiappan Nagappan and Adithya Abraham Philip. Leveraging Change Intents for Characterizing and Identifying Large-Review-Effort Changes.
- 15:50-16:10 Thu Tran, Vu Nguyen, Thong Truong, Chi Tran and Phu Le. An evaluation of parameter pruning approaches for software estimation.
- 16:10-16:30 Emre Sülün, Eray Tüzün and Ugur Dogrusoz. Reviewer Recommendation Using Software Artifact Traceability Graphs.
- 16:30-16:50 Discussion
Topics of Interest
Application oriented:
- prediction of cost, effort, quality, defects, business value;
- quantification and prediction of other intermediate or final properties of interest in software development regarding people, process or product aspects;
- using predictive models and data analytics in different settings, e.g. lean/agile, waterfall, distributed, community-based software development;
- dealing with changing environments in software engineering tasks;
- dealing with multiple-objectives in software engineering tasks;
- using predictive models and software data analytics in policy and decision-making.
Theory oriented:
- model construction, evaluation, sharing and reusability;
- interdisciplinary and novel approaches to predictive modelling and data analytics that contribute to the theoretical body of knowledge in software engineering;
- verifying/refuting/challenging previous theory and results;
- combinations of predictive models and search-based software engineering;
- the effectiveness of human experts vs. automated models in predictions.
Data oriented:
- data quality, sharing, and privacy;
- curated data sets made available for the community to use;
- ethical issues related to data collection and sharing;
- metrics;
- tools and frameworks to support researchers and practitioners to collect data and construct models to share/repeat experiments and results.
Validity oriented:
- replication and repeatability of previous work using predictive modelling and data analytics in software engineering;
- assessment of measurement metrics for reporting the performance of predictive models;
- evaluation of predictive models with industrial collaborators.
Important Dates
- Abstracts due: June 17th, 2019 (extended)
- Submissions due: June 17th, 2019 (extended)
- Author notification: July 7th, 2019
- Camera ready: July 21st, 2019
- Conference date: September 18th, 2019
Journal Special Section
- Following the conference, the authors of the best papers will be invited to submit extended versions of their papers for consideration in a special section of Elsevier's Information and Software Technology journal.
Kinds of Papers
We invite theoretical and empirical studies on the topics of interest (e.g. case studies, meta-analyses, replications, experiments, simulations, surveys, etc.), as well as industrial experience reports detailing the application of predictive modelling and data analytics in industrial settings. Both positive and negative results are welcome, though negative results should still be based on rigorous research and provide details on lessons learned. Conference attendees are encouraged, but not required, to make the data used in their analysis available online. Submissions can be of the following kinds:
- Full papers (oral presentation): papers with novel and complete results.
- Short papers (oral presentation): papers to disseminate on-going work and preliminary results for early feedback, or vision papers about the future of predictive modelling and data analytics in software engineering.
Submissions
PROMISE 2019 submissions must meet the following criteria:
- be original work, not published or under review elsewhere while being considered;
- conform to the ACM SIG proceedings template;
- not exceed 10 pages for full papers or 4 pages for short papers, including references;
- be written in English;
- be prepared for double-blind review, except for data papers, for which double-blind review is optional (see instructions below);
- be submitted via EasyChair (please choose the paper category appropriately).
Double-Blind Review Process
PROMISE 2019 will employ a double-blind review process, except for data papers, for which double-blind review is optional (see below). This means that submissions should by no means disclose the identity of the authors, and authors must make every effort to honor the double-blind review process. In particular, the authors' names must be omitted from the submission and references to their prior work should be in the third person.
If the paper is about a data set or data collection tool, double-blind review is not obligatory. However, authors may opt in to double-blind review by anonymizing their data repository or data collection tool and omitting their authorship information from the paper. If in doubt about whether double-blind review is required in your specific case, please contact the PC chairs.
Why double blind?
Double-blind reviewing has gained considerable momentum, driven largely by requests from the software engineering community, and we have decided to respond to this call. We are aware that there are certain challenges with the double-blind review process, as detailed by Shepperd [1]. However, we hope that the benefits, some of which are discussed by Le Goues [2], will outweigh those challenges.
[1] https://empiricalsoftwareengineering.wordpress.com/2017/11/05/why-i-disagree-with-double-blind-reviewing/
[2] https://www.cs.cmu.edu/~clegoues/double-blind.html
Programme Committee
- David Bowes, Lancaster University
- Ricardo Britto, Blekinge Institute of Technology
- Hoa Khanh Dam, University of Wollongong
- Giuseppe Destefanis, Brunel University London
- Carmine Gravino, University of Salerno
- Tracy Hall, Lancaster University
- Rachel Harrison, Oxford Brookes University
- Yasutaka Kamei, Kyushu University
- Lech Madeyski, Wroclaw University of Science and Technology
- Shane McIntosh, McGill University
- Tim Menzies, North Carolina State University
- Jaechang Nam, Handong Global University
- Maleknaz Nayebi, Polytechnique Montreal
- Fabio Palomba, University of Zurich
- Daniel Rodriguez, University of Alcalá
- Martin Shepperd, Brunel University London
- Emad Shihab, Concordia University
- Yuan Tian, Singapore Management University
- Ayse Tosun, Istanbul Technical University
- Burak Turhan, Monash University
- Hironori Washizaki, Waseda University
- Xin Xia, Monash University
- Yuming Zhou, Nanjing University
Steering Committee
- David Bowes, University of Central Lancashire
- Shane McIntosh, McGill University
- Leandro Minku, University of Birmingham
- Andriy Miranskyy, Ryerson University
- Emad Shihab, Concordia University
- Ayse Tosun, Istanbul Technical University
General Chair
- Leandro Minku, University of Birmingham
PC Co-Chairs
- Foutse Khomh, Ecole Polytechnique de Montréal
- Jean Petric, Lancaster University
Publication Chair
- David Bowes, University of Central Lancashire
Publicity Chair
- Giuseppe Destefanis, Brunel University London