Shaw, M., "Writing good software engineering research papers," in Proceedings of the 25th International Conference on Software Engineering (ICSE 2003), pp. 726-736, May 2003. doi: 10.1109/ICSE.2003.1201262

Introduction

The case study examines software engineering research paper abstracts, focusing on three areas of high importance to a paper's effectiveness. For the purposes of this case study, an effective paper is one accepted to a conference; more broadly, the author treats effectiveness as providing useful knowledge that incrementally advances the field. The author examines the abstracts of software engineering conference papers submitted to ICSE 2002 to identify the types of research represented and their success in being chosen, among competing papers, for inclusion in the conference. The observed deliberations of the program committee illuminate specific shortfalls and strengths of the submitted papers. The author's stated goal is to show readers how to better present their results and design better research projects.

The paper offers both a framework and specific actions to take when writing research papers in software engineering, and that same framework proves instructive for structuring research projects. The findings of the study are discussed below, highlighting the portions that are particularly useful to consider when completing research.

Summary of Findings

The three areas of focus are the type of research question, the type of result the question produces, and the type of validation used. The author provides a framework of categories for each area, explores the distribution of the submitted papers across those categories, and reports acceptance rates for each.

The first area of focus, type of question, identifies the five types of questions most commonly explored among the submitted papers. These are questions of: method of development ("How can we do/create/modify/evolve...?", "What is a better way to do/create/modify/evolve...?"), method of analysis or evaluation ("evaluate correctness", choose between options), design or evaluation of a particular instance ("How good is Y?", "What is better...?", "How does X compare to Y?"), generalization ("formal/empirical model for X"), and feasibility study or exploration ("Is it possible...?", "Does X exist...?").

Among these types of questions, the most common submission type was method or means of development, followed by method for analysis or evaluation. These two question types also had the highest acceptance ratios, with the latter accepted at the highest rate, followed by the former. Based on observation of the program committee, the author notes that the technique of evaluation must be made clear early in the paper.

The second area of focus, type of research result, identifies the seven types of results most common among the submitted papers. These are: procedure or technique ("new or better way to do some task"), qualitative or descriptive model ("architectural style, framework, or design pattern"), empirical model ("predictive model based on observed data"), analytic model ("formal analysis"), tool or notation ("formal language to support a technique or model"), specific solution, prototype, or judgment ("solution to an application problem that shows application of SE principles"), and report (observations, but not systematic).

Among these types of results, the largest number of submitted papers were of the procedure or technique type. The papers with the highest acceptance rates were of the empirical model and tool/notation types, although empirical model papers made up only one percent of submissions. Papers of the procedure/technique type ranked third; given that they comprised 44% of submissions, their acceptance rate is arguably more meaningful than that of the empirical model type. Based on observation of the program committee, the author notes several questions that reviewers will consider: "What, precisely, do you claim to contribute?", "What's new here?", "What has been done before? How is your work different or better?", and "What, precisely, is the result?". The author's specific examples of appropriate wording of research claims, and of how to relate results to existing work, are valuable to researchers.

The third area of focus, type of research validation, identifies six types of validation used. These are: analysis (formal model, empirical model, controlled experiment), evaluation (feasibility studies, pilot projects), experience (real-world use, expressed through a qualitative model, empirical model, or notation), example ("technique or procedure applied to a real system"), persuasion (noted to be rarely sufficient), and assertion (noted to be unlikely to be acceptable).

Among these types of validation, the largest number of submitted papers made no mention of validation in the abstract, followed by those using example validation. The papers with the highest acceptance ratios were those using experience and analysis validation. Based on observation of the program committee, the author notes that evidence in the form of analysis, experience, and realistic examples is often effective.

Taking the three areas of focus together, the combination of analysis question, procedure result, and example validation had the highest number of acceptances, followed by analysis question, procedure result, and analysis validation. The author concludes by listing qualities common to the clearest abstracts: they describe the current state of the art, identify a problem, state what the paper contributes to that problem, give specific results, and demonstrate or defend those results.

Discussion

The case study provides an instructive framework to follow when carrying out research in the field of software engineering. The paradigm the author imposes gives the reader a useful way to frame the three important aspects of any research report: the type of question, the type of result, and the type of validation. The author's observations of the program committee, along with the example questions against which researchers should evaluate their own work, also provide insight that is worth considering throughout the research process.

The process used in conducting the case study has several shortcomings that future research could address. Evaluating only paper abstracts, rather than the full reports, limits the findings to how to get a paper accepted; those findings cannot be assumed to hold when the full research is considered. Examining abstracts from a single year of a single conference is a further limitation; studying several software engineering conferences over a period of time would yield more widely applicable findings. However, remedying either issue would vastly change the scope and process of the case study.