Context: The field of software-development effort estimation explores ways of predicting the effort required to develop software. Even though this field has a crucial impact on budgeting and project planning in industry, few works classify and examine the currently available approaches. Objective: This article, therefore, presents a comprehensive overview of these approaches and pinpoints research gaps, challenges, and trends. Method: A systematic mapping of the literature was designed and performed based on well-established practical guidelines. In total, 120 primary studies were selected, analyzed, and categorized, after a careful filtering process applied to a sample of 3,746 candidate studies, to answer six research questions. Results: Over 70% of the selected studies adopted multiple effort estimation approaches; over 45% adopted evaluation research as their research method; over 90% of the participants were students rather than professionals; most studies had their quality assessed as high, and studies were most commonly published in journals. Conclusions: Our study benefits practitioners and researchers by providing a body of knowledge about the current literature, serving as a starting point for upcoming studies. This article also reports challenges worth investigating regarding the use of cognitive load and team interaction.
The use of biometric data (BD) records promises to advance the software engineering field. The rapid adoption of wearable computing technology has greatly increased the amount of BD records available. Several aspects of the use of BD records in software engineering remain unknown, such as which body measurements are used to support daily tasks, and which empirical methods are used to evaluate their benefits. Consequently, a thorough understanding of state-of-the-art techniques remains limited. This article, therefore, aims at providing a classification and a thematic analysis of studies on the use of BD records in the context of software development. Moreover, it seeks to introduce a classification taxonomy and to pinpoint research gaps, challenges, and trends. A systematic mapping of the literature was designed and performed based on well-established practical guidelines. In total, 40 primary studies were analysed and categorised, selected by applying a careful filtering process to a sample of 3,930 studies to answer seven research questions. Over 77% of the articles use more than one biometric aspect to analyse tasks performed by developers; over 47% of the articles used eye-tracking sensors to analyse biometric factors, followed by brain-wearable sensors (40%), skin sensors (22%), cardiac sensors (20%), and other, less common sensors; most studies analysed had their quality assessed as high; most studies were published in journals. This study provides a systematic map of studies that use BD records in software engineering, thereby serving as a basis for future research.
Behaviour & Information Technology, pp. 1-23,
There has been an increased focus on context-aware tools in software engineering. Within this area, an important challenge is to define and model the context for software-development projects and software development in general. This article reports a controlled experiment that compares the effort to implement changes, the correctness, and the maintainability of an existing application between two projects: one that uses qualitative dashboards depicting contextual information, and one that does not. The results of this controlled experiment suggest that the usage of qualitative dashboards improves correctness during software maintenance activities and reduces the effort to implement these activities.
Journal of Systems and Software, Vol. 159, pp. 1-19,
Model merging plays a chief role in many software engineering activities, e.g., evolving Unified Modelling Language (UML) models to add new features. Software developers may evolve UML models using merge relationships. However, given the growing heterogeneity of merge strategies and the limitations of the UML for expressing merge relationships, it is particularly challenging for them to specify such relationships. Consequently, developers end up expressing merge relationships improperly. Today, the UML can specify neither the semantics nor the order in which merge relationships must be performed. This study, therefore, proposes UML2Merge, a UML extension for expressing merge relationships. UML2Merge was evaluated through an empirical study with 10 participants to investigate its effects on merge effort, the correctness of merge relationships, and participant acceptance. The collected data suggest that UML2Merge is well suited to expressing merge relationships, requiring low merge effort, producing highly correct merge relationships, and achieving high acceptance among the participants. The results are encouraging and show the potential of using UML2Merge to express the evolution of UML models through merge relationships.
IET Software, Vol. 13, Iss. 6, pp. 575-586,
Model comparison has been widely used to support many tasks in model-driven software development. For this reason, many techniques for comparing models have been proposed in the last few decades. However, academia and industry have overlooked a classification of the currently available approaches to the comparison of design models. Hence, a thorough understanding of state-of-the-art techniques remains limited and inconclusive. This article, therefore, focuses on providing a classification and a thematic analysis of studies on the comparison of software design models. We carried out a systematic mapping study following well-established guidelines to answer nine research questions. In total, 56 primary studies (out of 4,132) were selected from 10 widely recognized electronic databases after a careful filtering process. The main results are that a majority of the primary studies (1) provide coarse-grained techniques for the comparison of general-purpose diagrams, (2) adopt graphs as the principal data structure and compare software design models considering structural properties only, (3) pinpoint commonalities and differences between software design models rather than assess their similarity, and (4) propose new techniques while neglecting the production of empirical knowledge from experimental studies. Finally, this article highlights some challenges and directions that can be explored in upcoming studies.
ACM Computing Surveys, Vol. 52, No. 3, pp. 48:1-48:41,
Several approaches to measuring similarity between UML models have been proposed in recent years. However, they usually fall short of expectations in terms of precision and sensitivity. Consequently, software developers end up using imprecise similarity-measuring approaches to figure out how similar the design models of fast-changing information systems are. This article proposes UMLSim, a hybrid approach to measuring similarity between UML models. It brings an innovative approach by using multiple criteria to quantify how similar UML models are, including semantic, syntactic, structural, and design criteria. A case study was conducted to compare UMLSim with five state-of-the-art approaches across six evaluation scenarios, in which the similarity between realistic UML models was computed. Our results, supported by empirical evidence, show that, on average, UMLSim presented high values for the precision (0.93), recall (0.63), and f-measure (0.67) metrics, outperforming the state-of-the-art approaches. The empirical knowledge and insights produced may serve as a starting point for future works. The results are encouraging and show the potential of using UMLSim in real-world settings.
XV Brazilian Symposium on Information Systems (SBSI’19), No. 17, pp. 1-8, May 20–24, 2019, Aracaju, Brazil,
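The precision, recall, and f-measure metrics used in the evaluation above are standard information-retrieval measures. As a minimal sketch (not the UMLSim implementation, and using hypothetical element matches), they can be computed over a set of matched model-element pairs against a reference mapping:

```python
def precision_recall_f1(found_matches, reference_matches):
    """Compute precision, recall and F-measure for a set of
    model-element matches against a reference (ground-truth) mapping."""
    found = set(found_matches)
    reference = set(reference_matches)
    true_positives = len(found & reference)
    precision = true_positives / len(found) if found else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical matches between elements of two UML class diagrams
found = {("Customer", "Client"), ("Order", "Order"), ("Cart", "Basket")}
reference = {("Customer", "Client"), ("Order", "Order"), ("Invoice", "Bill")}
p, r, f = precision_recall_f1(found, reference)
```

Here two of the three proposed matches are correct, giving precision, recall, and f-measure of 2/3 each.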
The integration of feature models plays a key role in many software engineering tasks, e.g., adding new features to software product lines (SPL) of information systems. Previous empirical studies have revealed that integrating design models is still considered a time-consuming and error-prone task. Unfortunately, integration approaches with tool support are still severely lacking. Even worse, little is known about the effort invested by developers to integrate models manually, and how correct the integrated models are. This paper proposes FMIT, a semiautomatic tool to support the integration of feature models. It offers a strategy-based approach to reduce the effort that developers invest to combine feature models and to increase the number of correctly integrated models. A controlled experiment was run with 10 volunteers through six realistic integration scenarios. Our results, supported by statistical tests, show that our semiautomatic approach not only reduced the integration effort by 73.01%, but also increased the number of correctly integrated feature models by 43.01%, compared with the manual approach. Our main contributions are a semiautomatic, strategy-based approach with tool support, and empirical evidence of its benefits. Our encouraging results open the way for the development of new heuristics and tools to support developers during the evolution of feature models.
XV Brazilian Symposium on Information Systems (SBSI’19), No. 39, pp. 1-8, May 20–24, 2019, Aracaju, Brazil,
In collaborative software modelling, the two main types of collaboration still present problems, such as the constant interruptions that hinder the cognitive process in synchronous collaboration, and the complicated and costly stages of conflict resolution in asynchronous collaboration. To address this, this paper proposes a technique called “UMLCollab”, which combines aspects of synchronous and asynchronous collaboration. Through experiments, developers applied the proposed solution and achieved intermediate productivity relative to traditional collaboration methods. The results showed that UMLCollab improved the correctness of the changed models and the developers’ awareness of conflict resolution, and enabled parallel changes to occur while other collaborators were working, without degrading the software diagrams being modelled locally.
XV Brazilian Symposium on Information Systems (SBSI’19), No. 30, pp. 1-8, May 20–24, 2019, Aracaju, Brazil,
Context: In recent years, several studies have explored different facets of developers’ cognitive load while executing tasks related to software engineering. Researchers have proposed and assessed different ways to measure developers’ cognitive load at work, and some studies have evaluated the interplay between developers’ cognitive load and other attributes such as productivity and software quality. Problem: However, the body of knowledge about developers’ cognitive load measurement is still dispersed. That hinders the effective use of developers’ cognitive load measurements by industry practitioners and makes it difficult for researchers to build new scientific knowledge upon existing results. Objective: This work aims to pinpoint gaps by providing a classification and a thematic analysis of studies on the measurement of cognitive load in the context of software engineering. Method: We carried out a Systematic Mapping Study (SMS) based on well-established guidelines to investigate nine research questions. In total, 33 articles (out of 2,612) were selected from 11 search engines after a careful filtering process. Results: The main findings are that (1) 55% of the studies adopted electroencephalogram (EEG) technology for monitoring cognitive load; (2) 51% of the studies applied machine-learning classification algorithms for predicting cognitive load; and (3) 48% of the studies measured cognitive load in the context of programming tasks. Moreover, a taxonomy was derived from the answers to the research questions. Conclusion: This SMS highlighted that the precision of machine-learning techniques is low in realistic scenarios, despite these techniques combining a set of features related to developers’ cognitive load. This gap means that the effective integration of developers’ cognitive-load measurement in industry remains a relevant challenge.
27th International Conference on Program Comprehension (ICPC), pp. 42-52, Montreal, Canada, May,
Context: The integration of feature models has been widely investigated in the last decades, given its pivotal role in supporting the evolution of software product lines. Unfortunately, academia and industry have overlooked the production of a thematic analysis of the current literature. Hence, a thorough understanding of the state-of-the-art works remains limited. Objective: This study seeks to create a panoramic view of the current literature to pinpoint gaps and supply insights into this research field. Method: A systematic mapping study was performed based on well-established empirical guidelines to answer six research questions. In total, 47 primary studies were selected by applying a filtering process to a sample of 2,874 studies. Results: The main results obtained are: (1) most studies use a generic notation (68.09%, 32⁄47) for representing feature models; (2) only one study (2%, 1⁄47) compares feature models based on their syntax and semantics; (3) there is no preponderant use of a particular integration technique in the selected studies; (4) most studies (70%, 33⁄47) provide a product-based strategy to evaluate the integrated feature models; (5) the majority (70%, 33⁄47) automate the integration process; and (6) most studies (90%, 42⁄47) propose techniques, rather than focusing on producing practical knowledge derived from empirical studies. Conclusion: The results were encouraging and suggest that the integration of feature models is still an evolving research area. This study provides insightful information for the definition of a more ambitious research agenda. Lastly, empirical studies exploring the effort required to apply the current integration techniques in real-world settings are highly recommended as future work.
Information and Software Technology, Vol. 105, pp. 209-225,
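Feature models are commonly represented as trees of features. As a concrete but deliberately simplified illustration of what integrating feature models means, the following sketch performs a purely name-based union of two feature trees; the feature names are hypothetical, and real integration techniques surveyed above must additionally handle semantics, cross-tree constraints, and conflicts:

```python
def merge_features(base: dict, other: dict) -> dict:
    """Name-based union of two feature models, each represented as a
    nested dict mapping a feature name to its child features.
    Features present in both inputs have their children merged
    recursively; all other features are kept as-is."""
    merged = dict(base)
    for name, children in other.items():
        if name in merged:
            merged[name] = merge_features(merged[name], children)
        else:
            merged[name] = children
    return merged

# Hypothetical SPL feature models to be integrated
fm_a = {"Payment": {"CreditCard": {}}}
fm_b = {"Payment": {"PayPal": {}}, "Shipping": {}}
result = merge_features(fm_a, fm_b)
# "Payment" now holds the children from both models
```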
Model composition plays a key role in many tasks in model-centric software development, e.g., evolving UML diagrams to add new features or reconciling models developed in parallel by different software development teams. However, based on our experience in previous empirical studies, one of the main impairments for the widespread adoption of composition techniques is the lack of empirical knowledge about their effects on developers’ effort. This problem applies to both existing categories of model composition techniques, i.e., specification-based (e.g., Epsilon) and heuristic-based techniques (e.g., IBM RSA). This paper, therefore, reports on a controlled experiment that investigates the effort of (1) applying both categories of model composition techniques and (2) detecting and resolving inconsistencies in the output composed models. We evaluate the techniques in 144 evolution scenarios, where 2,304 compositions of elements of UML class diagrams were produced. The main results suggest that (1) the employed heuristic-based techniques require less effort to produce the intended model than the chosen specification-based technique, (2) there is no significant difference in the correctness of the output composed models generated by these techniques, and (3) the use of manual heuristics for model composition outperforms their automated counterparts.
Journal on Software and Systems Modeling, Volume 14, Issue 4, pp. 1349–1365,
The importance of model composition in model-centric software development is recognized by researchers and practitioners. However, the lack of empirical evidence about the impact of model composition techniques on developers’ effort is a key impairment for their adoption in real-world design settings. Software engineers are left without any guidance on how to properly use certain model techniques in a way that effectively reduces their development effort. This work aims to address this problem by: (1) providing empirical evidence on model composition effort through a family of experimental studies; (2) defining quantitative indicators to objectively assess key attributes of model composition effort; (3) deriving a method to support the systematic application of composition techniques; and (4) conceiving a new model composition technique to overcome the problems identified throughout the experimental evaluations.
32nd ACM/IEEE International Conference on Software Engineering, Doctoral Symposium, Vol. 2, pp. 405-408, Cape Town, South Africa,
Model composition is a common operation used in many software development activities—for example, reconciling models developed in parallel by different development teams, or merging models of new features with existing model artifacts. Unfortunately, both commercial and academic model composition tools suffer from the composition conflict problem. That is, models to-be-composed may conflict with each other, and these conflicts must be resolved. In practice, detecting and resolving conflicts is a highly intensive manual activity. In this paper, we investigate whether aspect-orientation reduces conflict resolution effort, as improved modularization may better localize conflicts. The main goal of the paper is to conduct an exploratory study to analyze the impact of aspects on conflict resolution. In particular, model compositions are used to express the evolution of architectural models along six releases of a software product line. Well-known composition algorithms, such as override, merge, and union, are applied and compared on both AO and non-AO models in terms of their conflict rate and the effort to solve the identified conflicts. Our findings identify specific scenarios where aspect-orientation properties, such as obliviousness and quantification, result in a lower (or higher) composition effort.
9th International Conference on Aspect-Oriented Software Development (AOSD’10), pp. 73-84, Rennes and Saint-Malo, France, March,
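The override, merge, and union algorithms named above can be illustrated with a deliberately simplified set-based reading (assumed semantics for illustration only, not the exact definitions used in the study), over models represented as dictionaries of named elements:

```python
def compose(base: dict, delta: dict, strategy: str) -> dict:
    """Compose two models, each a dict of element name -> properties,
    using a simplified reading of three classic composition algorithms:
    - 'override': for shared names, delta's version replaces base's
    - 'merge':    for shared names, properties of both are combined
    - 'union':    both versions are kept; the incoming one is renamed
    Elements unique to either model are always kept."""
    result = dict(base)
    for name, props in delta.items():
        if name not in result:
            result[name] = props
        elif strategy == "override":
            result[name] = props
        elif strategy == "merge":
            result[name] = {**result[name], **props}
        elif strategy == "union":
            result[name + "'"] = props  # keep both, rename the new one
    return result

# Hypothetical architectural elements evolving between two releases
base = {"Logger": {"level": "info", "color": True}, "Cache": {"size": 10}}
delta = {"Logger": {"level": "debug", "sink": "file"}}
overridden = compose(base, delta, "override")
merged = compose(base, delta, "merge")
united = compose(base, delta, "union")
```

A conflict arises exactly where the strategies disagree: here, the two versions of `Logger` set different values for `level`, so override discards the base version, merge lets the incoming value win while keeping `color`, and union defers the decision by keeping both.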