ICSOFT 2018 Abstracts


Area 1 - Foundational and Trigger Technologies

Short Papers
Paper Nr: 24
Title:

In Situ Mutation for Active Things in the IoT Context

Authors:

Noura Faci, Zakaria Maamar, Thar Baker, Emir Ugljanin and Mohamed Sellami

Abstract: This paper discusses mutation as a new way of making things active, in the context of the Internet-of-Things (IoT), instead of passive as reported in the ICT literature. IoT is gaining momentum among ICT practitioners, who see many benefits in using things to help users access and control their surroundings. However, things are still confined to the limited role of data suppliers. The approach proposed in this paper advocates two types of mutation, active and passive, along with a set of policies that either back or deny mutation based on specific “stopovers” referred to as permission, prohibition, dispensation, and obligation. A testbed and a set of experiments demonstrating the technical feasibility of the mutation approach are also presented. The testbed uses the NodeMCU firmware and the Lua script interpreter.

Paper Nr: 67
Title:

Everything-as-a-Thing for Abstracting the Internet-of-Things

Authors:

Zakaria Maamar, Noura Faci, Mohamed Sellami, Emir Ugljanin and Ejub Kajan

Abstract: This paper discusses Everything-as-a-Thing (*aaT) as a novel way of abstracting Internet-of-Things (IoT) applications. Compared to other forms of abstraction like Everything-as-a-Service (*aaS) and Everything-as-a-Resource (*aaR), *aaT puts emphasis on the living things, in addition to the non-living things, that populate these applications. On the one hand, living things take over roles that are defined in terms of rights and duties. On the other hand, non-living things offer capabilities that are defined in terms of functional and non-functional properties. Interactions that occur between living and non-living things are specified as stories that define who does what, when, and where. For illustration purposes, *aaT is put into action using a healthcare case study.

Paper Nr: 72
Title:

Cognitive Computing Meets the Internet of Things

Authors:

Zakaria Maamar, Thar Baker, Noura Faci, Emir Ugljanin, Yacine Atif, Mohammed Al-Khafajiy and Mohamed Sellami

Abstract: This paper discusses the blend of cognitive computing with the Internet-of-Things, which should result in the development of cognitive things. Today’s things are confined to a data-supplier role, which deprives them of being the technology of choice for smart application development. Cognitive computing is about reasoning, learning, explaining, acting, etc. In this paper, cognitive things’ features include functional and non-functional restrictions along with a three-stage operation cycle that takes these restrictions into account during reasoning, adaptation, and learning. Some implementation details about cognitive things are included, based on a water-pipe case study.

Paper Nr: 73
Title:

Cloud Adoption Readiness Assessment Framework for Small and Medium Enterprises in Developing Economies - Evidential Reasoning Approach

Authors:

Mesfin Workineh, Nuno M. Garcia and Dida Midekso

Abstract: The aim of this paper is to develop a Cloud computing (CC) adoption readiness assessment framework for small and medium enterprises (SMEs) in developing economies. The benefits obtained from CC lead SMEs in developing economies to consider CC as an alternative technological solution. These SMEs require an adoption readiness assessment framework in order to eliminate complexities during adoption. Most of the existing frameworks rely on technological characteristics to assess adoption readiness and do not handle the uncertainties of decision makers. However, technological characteristics are not the foremost indicators of adoption readiness. Therefore, this study proposes a Cloud adoption readiness assessment framework based on an organizational-resources perspective using the evidential reasoning (ER) approach. The findings of this study contribute to the existing CC literature and help practitioners make an informed adoption decision. Lastly, the effectiveness of the proposed framework is shown using a case study.

Paper Nr: 92
Title:

Virtualization: Past and Present Challenges

Authors:

Bruno Rodrigues, Frederico Cerveira, Raul Barbosa and Jorge Bernardino

Abstract: Virtualization plays an important role in cloud computing by providing the capability of running multiple operating systems and applications on top of the same underlying hardware. However, current virtualization technologies have limitations, for example in performance, security and dependability, which are inherited by cloud computing. While these challenges remain, the adoption of virtualization in different fields will be limited. This paper presents a survey of past and open virtualization challenges, such as nested virtualization, reliability and security.

Posters
Paper Nr: 8
Title:

Variability Modelling for Elastic Scaling in Cloud Computing

Authors:

Mohamed Lamine Berkane, Lionel Seinturier and Mahmoud Boufaida

Abstract: Elasticity is an increasingly important characteristic of cloud computing environments, in particular for systems deployed in dynamically changing environments. The purpose is to let systems react to the workload and adapt, in an autonomic manner, their current and additional hardware and software resources. In this paper, we propose an approach that combines variability and reusability for modelling elasticity. The approach integrates self-adaptive systems and feature modelling into a single solution. We show the feasibility of the proposed model through the Znn.com scenario.

Paper Nr: 112
Title:

The Addition of Geolocation to Sensor Networks

Authors:

Robert Bryce and Gautam Srivastava

Abstract: Sensor networks are a rapidly growing research area in wireless communications and distributed networks. A sensor network is a densely deployed wireless network of small, low-cost sensors, which can be used in various applications such as health, environmental monitoring, military, natural disaster relief, and gathering and sensing information in inhospitable locations, to name a few. In this paper, we focus on one specific protocol used in sensor networks, MQTT (Message Queuing Telemetry Transport). MQTT is an open publisher/subscriber standard for M2M (Machine to Machine) communication. This makes it highly suitable for Internet of Things (IoT) messaging situations where power usage is at a premium, or for mobile devices such as phones, embedded computers or microcontrollers. In its original state, MQTT lacks the ability to broadcast geolocation as part of the protocol itself. In today’s age of IoT, however, it has become more pertinent to have geolocation as part of the protocol. In this paper, we add geolocation to the MQTT protocol and offer a revised version, which we call MQTT-G. We describe the protocol here and show where we were able to embed geolocation successfully.
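The abstract does not specify MQTT-G's wire format; as a hedged illustration only, one way to attach geolocation to a message is to wrap the application payload in a JSON envelope before publishing. The field names ("geo", "lat", "lon", "data") are assumptions for this sketch, not the authors' actual encoding.

```python
import json

def make_geo_payload(data, lat, lon):
    """Wrap an application payload with geolocation metadata.

    The envelope layout is illustrative; MQTT-G defines its own
    encoding at the protocol level rather than in the payload.
    """
    return json.dumps({"geo": {"lat": lat, "lon": lon}, "data": data})

def parse_geo_payload(payload):
    """Recover the geolocation and the original application payload."""
    msg = json.loads(payload)
    return msg["geo"], msg["data"]

# Example: a sensor reading tagged with its position before publishing.
payload = make_geo_payload({"temp_c": 21.5}, lat=49.8951, lon=-97.1384)
geo, data = parse_geo_payload(payload)
```

A subscriber receiving such a payload can strip the envelope and process the reading as usual, which is the behaviour a protocol-level extension like MQTT-G would provide transparently.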

Area 2 - Software Engineering and Systems Development

Full Papers
Paper Nr: 9
Title:

Impact of Mutation Operators on Mutant Equivalence

Authors:

Imen Marsit, Mohamed Nazih Omri, JiMing Loh and Ali Mili

Abstract: The presence of equivalent mutants is a recurrent source of aggravation in mutation-based studies of software testing, as it distorts our analysis and precludes assertive claims. But the determination of whether a mutant is equivalent to a base program is undecidable, and practical approaches are tedious, error-prone, and tend to produce insufficient or unnecessary conditions of equivalence. We argue that an attractive alternative to painstakingly identifying equivalent mutants is to estimate their number. This alternative is attractive for two reasons: First, in most practical applications, it is not necessary to identify equivalent mutants individually; rather it suffices to know their number. Second, even when we need to identify equivalent mutants, knowing their number enables us to single them out with little to moderate effort.

Paper Nr: 19
Title:

Automated Selection of Software Refactorings that Improve Performance

Authors:

Nikolai Moesus, Matthias Scholze, Sebastian Schlesinger and Paula Herber

Abstract: Performance is a critical property of a program. While there exist refactorings that have the potential to significantly increase the performance of a program, it is hard to decide which refactorings effectively yield improvements. In this paper, we present a novel approach for the automated detection and selection of refactorings that are promising candidates to improve performance. Our key idea is to provide a heuristic that utilizes software properties determined by both static code analyses and dynamic software analyses to compile a list of concrete refactorings sorted by their assessed potential to improve performance. The expected performance improvement of a concrete refactoring depends on two factors: the execution frequency of the respective piece of code, and the effectiveness of the refactoring itself. To assess the latter, namely the general effectiveness of a given set of refactorings, we have implemented a set of micro benchmarks and measured the effect of each refactoring on computation time and memory consumption. We demonstrate the practical applicability of our overall approach with experimental results.

Paper Nr: 21
Title:

A Versatile Tool Environment to Perform Model-based Testing of Web Applications and Multilingual Websites

Authors:

Winfried Dulz

Abstract: This paper examines techniques for the model-based testing of web applications and multilingual websites. For this purpose, the simple web application HelloMBTWorld is used to explain the essential steps of a model-based test process that applies statistical usage models to generate and evaluate appropriate test suites. Model-based techniques that provide graphical representations of usage models allow setting the test focus on specific regions of the system under test. Based on adopted profiles, different user groups can be distinguished by different test suites during test execution. Generic usage models, which are adjusted to specific language environments during test execution, permit the testing of multilingual websites. Using the Eclipse modeling framework in combination with the TestPlayer tool chain provides a versatile tool environment for the model-based testing of web applications and websites.

Paper Nr: 22
Title:

Investigating Order Information in API-Usage Patterns: A Benchmark and Empirical Study

Authors:

Ervina Çergani, Sebastian Proksch, Sarah Nadi and Mira Mezini

Abstract: Many approaches have been proposed for learning Application Programming Interface (API) usage patterns from code repositories. Depending on the underlying technique, the mined patterns may (1) be strictly sequential, (2) consider partial order between method calls, or (3) not consider order information. Understanding the trade-offs between these pattern types with respect to real code is important in many applications (e.g. code recommendation or misuse detection). In this work, we present a benchmark consisting of an episode mining algorithm that can be configured to learn all three types of patterns mentioned above. Running our benchmark on an existing dataset of 360 C# code repositories, we empirically study the resulting API usage patterns per pattern type. Our results show practical evidence that not only do partial-order patterns represent a generalized superset of sequential-order patterns, but partial-order mining also finds additional patterns missed by sequence mining, which are used by a larger number of developers across code repositories. Additionally, our study empirically quantifies the importance of the order information encoded in sequential and partial-order patterns for representing correct co-occurrences of code elements in real code. Furthermore, our benchmark can be used by other researchers to explore additional properties of API patterns.

Paper Nr: 23
Title:

Extreme Learning Machine based Linear Homogeneous Ensemble for Software Fault Prediction

Authors:

Pravas Ranjan Bal and Sandeep Kumar

Abstract: Many recent studies have experimented with software fault prediction models that predict the number of software faults using statistical and traditional machine learning techniques. However, it is observed that the performance of traditional software fault prediction models varies from dataset to dataset. In addition, the performance of the traditional models degrades for inter-release prediction. To address these issues, we have proposed linear homogeneous ensemble methods based on two variations of the extreme learning machine, the Differentiable Extreme Learning Machine Ensemble (DELME) and the Non-differentiable Extreme Learning Machine Ensemble (NELME), to predict the number of software faults. We have used seventeen PROMISE datasets and five Eclipse datasets to validate these software fault prediction models. We have performed two types of prediction, within-project defect prediction and inter-release prediction, to validate our proposed fault prediction model. The experimental results show consistently better performance across all datasets.

Paper Nr: 31
Title:

A Pattern-matching based Approach for Problem Solving in Model Transformations

Authors:

Youness Laghouaouta, Pierre Laforcade and Esteban Loiseau

Abstract: As MDE (Model Driven Engineering) principles are increasingly applied in the development of complex software, the use of constraint solving in the specification of model transformations is often required. However, transformation techniques do not provide fully integrated support for solving constraints, and external solvers are not well adapted. To deal with this issue, this paper proposes a pattern-matching based approach as a promising solution for enforcing constraints on target models. A transformation infrastructure is semi-automatically generated; it provides support for specifying patterns, searching for match models, and producing valid target models. Finally, a use case is presented in order to illustrate our contribution.

Paper Nr: 33
Title:

Formalizing Agile Software Product Lines with a RE Metamodel

Authors:

Hassan Haidar, Manuel Kolp and Yves Wautelet

Abstract: Requirements engineering (RE) techniques can play a determinant role when making the strategic decision to adopt an Agile Product Line approach to the production of software-intensive systems. This paper proposes an integrated goal- and feature-based metamodel for agile software product lines. The aim is to allow analysts and developers to produce specifications that precisely capture the stakeholders’ needs and intentions as well as to manage product line variabilities. Adopting practices from requirements engineering, especially goal and feature models, helps in designing the domain and application engineering tiers of an agile product line. Such an approach allows a holistic perspective integrating human, organizational and agile aspects to better understand the dynamic business environments of product lines. It helps bridge the gap between product line structures and requirements models, and proposes an integrated framework for all actors involved in the product line architecture.

Paper Nr: 36
Title:

Supporting the Systematic Goal Refinement in KAOS using the Six-Variable Model

Authors:

Nelufar Ulfat-Bunyadi, Nazila Gol Mohammadi and Maritta Heisel

Abstract: In requirements engineering, different types of modelling techniques exist for documenting requirements and their refinement (e.g. goal-oriented techniques, problem-based techniques). Each type of technique has its advantages and shortcomings. However, extensions made to one type may be beneficial to another type as well, if transferred to it. KAOS is, for example, a comprehensive methodology that supports goal-oriented requirements engineering. As part of the KAOS methodology, multi-agent goals are refined until they can be assigned to single agents in the software or in the environment. Besides goals, domain properties and hypotheses (facts and assumptions about the environment) can also be modelled in KAOS goal models, as well as their influence on the satisfaction of goals. However, the KAOS methodology provides limited support for the systematic refinement of goals. Developers using the KAOS method are left on their own in refining the multi-agent goals and in making domain properties and hypotheses explicit. The Six-Variable Model, on the other hand, is an extension of problem diagrams and supports a systematic refinement of requirements and a systematic elicitation of domain properties and domain hypotheses. In this paper, we show how the Six-Variable Model can be used to support a systematic refinement of goals in KAOS goal models.

Paper Nr: 41
Title:

Simple App Review Classification with Only Lexical Features

Authors:

Faiz Ali Shah, Kairit Sirts and Dietmar Pfahl

Abstract: User reviews submitted to app marketplaces contain information that falls into different categories, e.g., feature evaluation, feature request, and bug report. This information is valuable for developers to improve the quality of mobile applications. However, due to the large volume of reviews received every day, manual classification of user reviews into these categories is not feasible. Therefore, developing automatic classification methods using machine learning approaches is desirable. In this study, we compare the simplest textual machine learning classifier using only lexical features, the so-called Bag-of-Words (BoW) approach, with the more complex models used in previous works adopting rich linguistic features. We find that the performance of the simple BoW model is very competitive and has the advantage of not requiring any external linguistic tools to extract the features. Moreover, we experiment with deep learning based Convolutional Neural Network (CNN) models that have recently achieved state-of-the-art results in many classification tasks. We find that, on average, the CNN models do not perform better than the simple BoW model; it is possible that for the CNN models to gain an advantage, a larger training set would have been necessary.
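The paper's exact BoW model is not specified in the abstract; a minimal sketch of the idea, assuming a multinomial Naive Bayes over raw word counts (the classifier choice and the tiny review set below are illustrative assumptions), looks like this:

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    # Lexical features only: lowercase whitespace tokens, no linguistic tools.
    return text.lower().split()

def train_nb(samples):
    """samples: list of (text, label). Returns a multinomial NB model."""
    word_counts = defaultdict(Counter)   # label -> bag of words
    label_counts = Counter()
    vocab = set()
    for text, label in samples:
        toks = tokenize(text)
        word_counts[label].update(toks)
        label_counts[label] += 1
        vocab.update(toks)
    return word_counts, label_counts, vocab

def classify(model, text):
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for lbl in label_counts:
        # Log prior plus Laplace-smoothed log likelihood of each token.
        score = math.log(label_counts[lbl] / total_docs)
        denom = sum(word_counts[lbl].values()) + len(vocab)
        for tok in tokenize(text):
            score += math.log((word_counts[lbl][tok] + 1) / denom)
        if score > best_score:
            best_label, best_score = lbl, score
    return best_label

# Toy training reviews (invented for illustration).
reviews = [
    ("app crashes on startup", "bug report"),
    ("crashes when I open a photo", "bug report"),
    ("please add dark mode", "feature request"),
    ("would love to add offline support", "feature request"),
]
model = train_nb(reviews)
label = classify(model, "the app crashes constantly")
```

The point of the comparison in the paper is that a model of this simplicity, fed nothing but word counts, is already competitive with feature-rich alternatives.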

Paper Nr: 42
Title:

Enhancing Software Development Process Quality based on Metrics Correlation and Suggestion

Authors:

Sarah Dahab, Erika Silva, Stephane Maag, Ana Rosa Cavalli and Wissam Mallouli

Abstract: To improve software quality, it is necessary to introduce new metrics with the required detail and increased expressive power, in order to provide valuable information to the different actors of software development. In this paper we present two approaches based on metrics that contribute to improving software quality during development. Both approaches are complementary and focus on the combination, reuse and correlation of metrics. They provide users with indications of how to reuse metrics and offer recommendations after the application of metrics correlation. They have been applied to selected metrics on software maintainability, safety, security, etc. The approaches have been implemented in two tools, Metrics Suggester and MINT. Both approaches and tools are part of the ITEA3 MEASURE project and have been integrated into the project platform. To illustrate their application we have created different scenarios to which both approaches are applied. Results show that both approaches are complementary and can be used to improve the software process.

Paper Nr: 47
Title:

Towards an Agile Development Model for Certifiable Medical Device Software - Taking Advantage of the Medical Device Regulation

Authors:

Manuel Zamith and Gil Gonçalves

Abstract: Regulation of medical devices has been one of the most prominent initiatives of the European Union in the health domain. The recent Medical Device Regulation (EU) 2017/745 extended the definition of medical devices to standalone software systems with prognosis and prediction intended purposes. This paradigm shift stimulates the development of more lightweight software systems, such as mobile applications, that can be classified as legitimate medical devices and can be prescribed to patients. This new context creates the need for an urgent adjustment of the currently used software development life cycle models and processes. This article discusses a tailored agile approach based on the Scrum model, designed to be compliant with the international standards for medical device software development and to benefit the creation of software solutions under the current Medical Device Framework. The discussion in this paper shows that there is no reason to believe that agile methodologies cannot benefit the process of creating software solutions in the medical device domain.

Paper Nr: 49
Title:

Evaluating Students’ Perception of Their Learning in a Student-centered Software Engineering Course - An Experimental Study

Authors:

Julio Cezar Costa Furtado and Sandro Ronaldo Bezerra Oliveira

Abstract: In a humanistic approach, the teaching and learning process depends on the individual character of the student and on how the teaching relates to it; the focus is given to the individual, with student-centered teaching. This work aims to identify whether student-centered teaching approaches are preferred by students and to what extent they are adopted by Software Engineering (SE) teachers, and to evaluate the effects on students' perception of their learning process when immersed in a humanistic, student-centered course. The study results provide an initial indication that such an approach achieves a better effect on students' perception of course adequacy when compared to traditional lecture classes as the primary instructional method.

Paper Nr: 53
Title:

Finding Regressions in Projects under Version Control Systems

Authors:

Jaroslav Bendík, Nikola Beneš and Ivana Černá

Abstract: Version Control Systems (VCS) are frequently used to support the development of large-scale software projects. A typical VCS repository can contain various intertwined branches consisting of a large number of commits. If some kind of unwanted behaviour (e.g. a bug in the code) is found in the project, it is desirable to find the commit that introduced it. Such a commit is called a regression point. There are two main issues regarding regression points. First, detecting whether the project after a certain commit is correct can be very expensive, and it is thus desirable to minimise the number of such queries. Second, there can be several regression points preceding the actual commit, and in order to fix the actual commit it is usually desirable to find the latest regression point. Contemporary VCS contain methods for regression identification, see e.g. the git bisect tool. In this paper, we present a new regression identification algorithm that outperforms the current tools by decreasing the number of validity queries. At the same time, our algorithm tends to find the latest regression points, a feature that is missing in the state-of-the-art algorithms. The paper provides an experimental evaluation on a real data set.
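The paper's algorithm handles intertwined branch histories; as a hedged sketch of the baseline it improves on, the linear-history special case (what git bisect does) locates a good-to-bad transition with a logarithmic number of validity queries. The history and query function below are invented for illustration.

```python
def find_regression(history, is_correct):
    """Locate a commit that introduced unwanted behaviour.

    history   : list of commit ids, oldest first; assumed correct at
                history[0] and incorrect at history[-1].
    is_correct: the validity query (expensive in practice); it is
                called only O(log n) times here.

    Returns the index of a commit whose predecessor is correct but
    which itself is not, i.e. a regression point. This is the
    simplified linear-history case only; the paper's algorithm also
    handles branches and aims for the latest regression point.
    """
    lo, hi = 0, len(history) - 1   # history[lo] correct, history[hi] not
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_correct(history[mid]):
            lo = mid
        else:
            hi = mid
    return hi

# Example: suppose the bug was introduced at commit index 6 of 10.
commits = list(range(10))
queries = []
def is_correct(c):
    queries.append(c)          # record how many validity queries we spend
    return c < 6

idx = find_regression(commits, is_correct)
```

With multiple regression points in the history, plain bisection returns only one of them, which is precisely the limitation the paper's latest-regression-point strategy addresses.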

Paper Nr: 66
Title:

Towards a Taxonomy of Bad Smells Detection Approaches

Authors:

Mouna Hadj-Kacem and Nadia Bouassida

Abstract: Refactoring is a popular maintenance activity that improves the internal structure of a software system while maintaining its external behaviour. During the refactoring process, detecting bad smells plays a crucial role in establishing reliable and accurate results. So far, several approaches have been proposed in the literature to detect bad smells at different levels. In this paper, we focus on reviewing the state of the art of object-oriented bad smell detection approaches. For the purpose of comparability, we propose a hierarchical taxonomy by following a development methodology. Our taxonomy encompasses three main dimensions describing a detection approach: the method used, the analysis and the assessment. The resulting taxonomy provides a deeper understanding of existing approaches. It highlights many key factors that concern developers when choosing an existing detection approach or when proposing a new one.

Paper Nr: 78
Title:

iArch-U/MC: An Uncertainty-Aware Model Checker for Embracing Known Unknowns

Authors:

Naoyasu Ubayashi, Yasutaka Kamei and Ryosuke Sato

Abstract: Embracing uncertainty in software development is one of the crucial research topics in software engineering. In most projects, we have to deal with uncertain concerns in informal ways such as documents, mailing lists, or issue tracking systems. This task is tedious and error-prone. In particular, uncertainty in programming is one of the challenging issues to be tackled, because it is difficult to verify the correctness of a program when there are uncertain user requirements, unfixed design choices, and alternative algorithms. This paper proposes iArch-U/MC, an uncertainty-aware model checker for verifying whether or not some important properties are guaranteed even if Known Unknowns remain in a program. Our tool is based on LTSA (Labelled Transition System Analyzer) and is implemented as an Eclipse plug-in.

Paper Nr: 93
Title:

Clustering-based Under-sampling for Software Defect Prediction

Authors:

Moheb M. R. Henein, Doaa M. Shawky and Salwa K. Abd-El-Hafiz

Abstract: The detection of defective software modules is important for reducing the time and resources consumed by software testing. Software defect data sets usually suffer from imbalance, where the number of defective modules is smaller than the number of defect-free modules. Imbalanced data sets cause machine learning algorithms to be biased toward the majority class. Clustering-based under-sampling has shown its ability to find good representatives of the majority data in different applications. This paper presents an approach for software defect prediction based on clustering-based under-sampling and an Artificial Neural Network (ANN). Firstly, clustering-based under-sampling is used to select a subset of the majority samples, which is then combined with the minority samples to produce a balanced data set. Secondly, an ANN model is built and trained using the resulting balanced data set. The ANN is trained to classify software modules as defective or defect-free. In addition, a sensitivity analysis is conducted to choose the number of majority samples that yields the best performance measures. Results show a high prediction capability for the detection of defective modules while maintaining the ability to detect defect-free modules.
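The under-sampling step can be sketched as follows: cluster the majority (defect-free) samples and keep only one representative per cluster, the sample nearest each centroid, then combine the representatives with the minority (defective) samples. This is a minimal pure-Python sketch under assumed choices (plain k-means, nearest-to-centroid selection, toy 2-D metric vectors); the paper's clustering method and data are not given in the abstract.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means; returns the final centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: dist2(p, centroids[j]))
            clusters[j].append(p)
        centroids = [mean(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids

def undersample(majority, minority, k):
    """Cluster the majority class, keep the sample nearest each
    centroid as its representative, and rebalance with the minority."""
    centroids = kmeans(majority, k)
    reps = [min(majority, key=lambda p: dist2(p, c)) for c in centroids]
    return reps + minority

# Defect-free (majority) vs defective (minority) modules as 2-D metric vectors.
majority = [(x, y) for x in range(6) for y in range(6)]     # 36 samples
minority = [(10.0, 10.0), (10.5, 9.5), (9.5, 10.5)]         # 3 samples
balanced = undersample(majority, minority, k=3)
```

Choosing k equal to the minority-class size yields a fully balanced training set; the paper's sensitivity analysis instead tunes the number of retained majority samples empirically.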

Short Papers
Paper Nr: 5
Title:

Patterns in Textual Requirements Specification

Authors:

David Šenkýř and Petr Kroha

Abstract: In this paper, we investigate methods of grammatical inspection to identify patterns in textual requirements specifications. Unfortunately, text in natural language additionally includes many inaccuracies caused by ambiguity, inconsistency, and incompleteness. Our contribution is that, using our patterns, we are able to extract from the text the information necessary to fix some of the problems mentioned above. We present our implemented tool TEMOS, which is able to detect some inaccuracies in a text and to generate fragments of the UML class model from a textual requirements specification. We use external on-line resources to complete the textual information of requirements.

Paper Nr: 11
Title:

Developing a Task Management System - A Classroom Software Engineering Experience

Authors:

Joo Tan, Jake Betts, Tyler Lance, Adam Whittaker and David Yocum

Abstract: Software Engineering requires collaboration from all project teams as one organized group. At Kutztown University in the USA, students in a capstone software engineering course sequence work in project teams to gather and understand requirements, and to redesign, develop and test a system. In this paper, we explain the software engineering process followed by six project teams while developing the system. The teams ran into many problems during implementation. We discuss the different kinds of issues encountered during the process. Lessons learned are summarized so that future teams can benefit from this experience.

Paper Nr: 15
Title:

Factors that Complicate the Selection of Software Requirements - Validating Factors from Literature in an Empirical Study

Authors:

Hans Schoenmakers, Rob Kusters and Jos Trienekens

Abstract: In market-driven software product development, new features may be added to the software based on a collection of candidate requirements. Selecting requirements, however, is difficult. Despite all the work done on this problem, known as the next release problem, what is missing is a comprehensive overview of the factors that complicate selecting software requirements. This paper aims at providing such an overview. The authors performed a systematic literature review, searching for occurrences in the literature where a causal relation was suggested between certain conditions and the difficulty of selecting software requirements. Analyzing 544 papers led to 156 findings. Clustering them resulted in 33 complicating factors that were classified into eight groups. The complicating factors were validated in semi-structured interviews with twelve experts from three different industrial organizations. These interviews consisted of questions about the participants’ experiences with the complicating factors and questions about how these factors complicated selecting requirements. The results aid in getting a better understanding of the complexity of selecting requirements.

Paper Nr: 16
Title:

TCC (Tracer-Carrying Code): A Hash-based Pinpointable Traceability Tool using Copy&Paste

Authors:

Katsuhiko Gondow, Yoshitaka Arahori, Koji Yamamoto, Masahiro Fukuyori and Riyuuichi Umekawa

Abstract: In software development, it is crucially important to effectively capture, record and maintain traceability links in a lightweight way. For example, we often would like to know “what documents (rationale) a programmer referred to in order to write this code fragment”, which is supposed to be answered by the traceability links. Most previous work takes retrospective approaches based on information retrieval techniques, but these are likely to generate many false-positive traceability links; i.e., their accuracy is low. Unlike retrospective approaches, this paper proposes a novel lightweight prospective approach, which we call TCC (tracer-carrying code). TCC uses a hash value as a tracer (global ID), as widely used in distributed version control systems like Git. TCC automatically embeds a tracer into source code as a side effect of the user’s copy&paste operation, so users have no need to handle tracers explicitly (e.g., no need to copy&paste URLs). TCC also caches the referred original text in a Git repository. Thus, users can always view the original text later by simply clicking the tracer, even after its URL or file path is changed, or the original text is modified or removed. To show the feasibility of our TCC approach, we developed several TCC prototype systems for the Emacs editor, the Google Chrome browser, the Chrome PDF viewer, and the macOS system clipboard. We applied them to the development of a simple iPhone application, with good results; TCC is quite effective and useful for establishing and maintaining pinpointable traceability links in a lightweight way. Several important findings were also obtained.
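The copy&paste hook can be sketched as: hash the copied text to obtain a tracer, cache the original under that tracer, and paste the snippet with the tracer embedded as a comment. The abstract only says TCC uses a hash as a global ID; reusing Git's blob-hash scheme and the comment syntax below are assumptions of this sketch.

```python
import hashlib

def make_tracer(text):
    """Compute a Git-style blob hash to serve as a global tracer ID.

    Git hashes blobs as sha1("blob <len>\\0<content>"); adopting the
    same scheme here is an illustrative assumption.
    """
    data = text.encode("utf-8")
    return hashlib.sha1(b"blob %d\0" % len(data) + data).hexdigest()

def paste_with_tracer(snippet, store):
    """Simulate TCC's copy&paste hook: cache the original text under
    its tracer and return the snippet annotated with a tracer comment."""
    tracer = make_tracer(snippet)
    store[tracer] = snippet          # stand-in for the Git repository cache
    return "// tcc:%s\n%s" % (tracer, snippet)

store = {}
pasted = paste_with_tracer("int x = 42;", store)
tracer = make_tracer("int x = 42;")
```

Because the tracer is content-derived and the original text is cached under it, the link survives later renames or edits of the source document, which is the property the prototypes exploit.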

Paper Nr: 32
Title:

COnfECt: An Approach to Learn Models of Component-based Systems

Authors:

Sébastien Salva and Elliott Blot

Abstract: This paper addresses the problem of learning models of component-based systems. We focus on model learning approaches that generate state-diagram models of software or systems. We present COnfECt, a method that supplements passive model learning approaches to generate models of component-based systems seen as black boxes. We define the behaviours of components that call each other with Callable Extended FSMs (CEFSMs). COnfECt tries to detect component behaviours from execution traces and generates systems of CEFSMs. To that end, COnfECt is based on the notions of trace analysis, event correlation, model similarity and data clustering. We describe the two main steps of COnfECt and show an example of integration with the passive model learning approach Gk-tail.

Paper Nr: 45
Title:

A Multi-source Machine Learning Approach to Predict Defect Prone Components

Authors:

Pasquale Ardimento, Mario Luca Bernardi and Marta Cimitile

Abstract: The software code life cycle is characterized by continuous changes that require great effort to test all of the components involved. Given a limited number of resources, identifying the defect proneness of software components becomes a critical issue, as it allows resource allocation and distribution to be improved. In recent years, several approaches to evaluating the defect proneness of software components have been proposed: these approaches exploit product metrics (like the Chidamber and Kemerer metrics suite) or process metrics (measuring specific aspects of the development process). In this paper, a multi-source machine learning approach based on a selection of both product and process metrics is proposed to predict defect proneness. With respect to existing approaches, the proposed classifier predicts defect proneness based on the evolution of these features across the project’s development. The approach is tested on a real dataset composed of two well-known open-source software systems totalling 183 releases. The obtained results show that the proposed features have effective defect proneness prediction ability.

Paper Nr: 63
Title:

On the Use of Models for Real-time Reconfigurations of Embedded Systems

Authors:

Naima Armaoui, Mohamed Naija and Samir Ben Ahmed

Abstract: The development of Multi-Processor Systems-on-Chip (MPSoC) for high-performance embedded applications has become a major challenge for designers due to a number of crucial constraints to meet, such as functional correctness and temporal performance. This paper presents a new process intended to support and facilitate the co-design and scheduling analysis of high-performance applications on MPSoCs. The contribution of this process is that it is designed to i) model the system functionality, execution architectures and the allocation of software and hardware parts using a high-level modeling language, ii) perform scheduling analysis of the system using a simulation tool, and iii) offer a reconfiguration technique in order to meet constraints and preserve the system’s non-functional properties (NFPs). As a proof of concept, we present a case study consisting of a JPEG encoder, with very promising results.

Paper Nr: 65
Title:

Enhancing Problem Clarification Artifacts with Online Deliberation

Authors:

Fabrício Matheus Gonçalves, Felipe Rodrigues Jensen, Julio Cesar dos Reis and Maria Cecília Calani Baranauskas

Abstract: Information system design demands understanding the requirements of diverse stakeholders. As an initial step, problem clarification is essential to obtain a shared view of the problems and solutions involved. Several techniques have been proposed and practiced by the systems engineering community for problem clarification. While existing literature has brought problem clarification artifacts to an online computational system, stakeholders still lack means for the meaning negotiation practices that usually happen in face-to-face meetings. This paper proposes a deliberation model integrated with the online use of problem clarification artifacts. The deliberation provides a collaborative process for building common ground for reflection. The proposed model illustrates the possibilities of deliberation over statements created in three artifacts of Organizational Semiotics: Stakeholder Identification, the Evaluation Frame and the Semiotic Framework.

Paper Nr: 74
Title:

Investigating the Effect of Software Metrics Aggregation on Software Fault Prediction

Authors:

Deepanshu Dixit and Sandeep Kumar

Abstract: In inter-release software fault prediction, the data from the previous version of the software used for training the classifier might not be of the same granularity as the testing data. The same scenario may also occur in cross-project software fault prediction. One major issue is therefore the difference in granularity, i.e., the training and testing datasets may not have metrics at the same level, so the metrics need to be brought to the same level. In this paper, aggregation using the Average Absolute Deviation (AAD) and the Interquartile Range (IQR) is explored. We propose a method for aggregating metrics from class to package level for software fault prediction and validate the approach through experimental analysis. The experimental study analyzes the performance of software fault prediction when no aggregation technique is used and when each of the two mentioned aggregation techniques is used. The study reveals that aggregation improves performance and that, of the AAD and IQR aggregation techniques, IQR performs relatively better.
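As a rough illustration of the class-to-package aggregation this abstract describes (a minimal sketch under the standard definitions of AAD and IQR; the paper's exact procedure may differ, and the input layout below is hypothetical), per-class metric values can be collapsed to one value per package:

```python
from statistics import mean, quantiles

def aad(values):
    """Average Absolute Deviation: mean distance of values from their mean."""
    m = mean(values)
    return mean(abs(v - m) for v in values)

def iqr(values):
    """Interquartile Range: third quartile minus first quartile."""
    q1, _, q3 = quantiles(values, n=4)  # three cut points; default exclusive method
    return q3 - q1

def aggregate_to_package(class_metrics, method=iqr):
    """Collapse per-class metric values into one per-package value.

    class_metrics maps package name -> list of class-level metric values
    (a hypothetical input layout chosen for this illustration).
    """
    return {pkg: method(vals) for pkg, vals in class_metrics.items()}
```

Either aggregate then acts as the package-level feature fed to the fault prediction classifier, which is how training and testing data at different granularities can be brought to a common level.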

Paper Nr: 75
Title:

Developing a Taxonomy for Software Process Context

Authors:

Diana Kirk and Jim Buchan

Abstract: When developing software-intensive products, practitioners adapt software practices to suit their specific environment. In order to evaluate and compare practices in an evidence-based way, researchers must report the context in which a practice was enacted. This is problematic, as the discipline lacks an agreed classification structure for software context. In this paper, we re-position earlier investigations into software context for the purpose of practice evaluation by mapping the evolved framework as a taxonomy. The purpose of the taxonomy is to support discussions about situated software practices with a view to guiding researchers in the specification of context. We conducted an initial evaluation by classifying existing context structures into the taxonomy and by implementing a small trial study. In future work, we will refine the taxonomy in conjunction with software researchers and practitioners, and use the taxonomy for evidence accumulation.

Paper Nr: 94
Title:

Lightweight Call-Graph Construction for Multilingual Software Analysis

Authors:

Anne Marie Bogar, Damian M. Lyons and David Baird

Abstract: Analysis of multilingual codebases is a topic of increasing importance. In prior work, we proposed the MLSA (MultiLingual Software Analysis) architecture, an approach to the lightweight analysis of multilingual codebases, and showed how it can be used to address the challenge of constructing a single call graph from multilingual software with mutual calls. This paper addresses the challenge of constructing monolingual call graphs in a lightweight manner (consistent with the objective of MLSA) that nonetheless yields sufficient information for resolving language interoperability calls. A novel approach is proposed that leverages information from a compiler-generated AST to provide the necessary quality of call graph, while the program itself parses the AST using an Island Grammar, providing the necessary lightweight aspect. Performance results are presented for a C/C++ implementation of the approach, PAIGE (Parsing AST using Island Grammar Call Graph Emitter), showing that despite its lightweight nature it outperforms Doxygen, is robust to changes in the (Clang) AST, and is not restricted to C/C++.

Paper Nr: 96
Title:

The Dynamic Sensor Data Description and Data Format Conversion Language

Authors:

Gergely Mezei, Ferenc A. Somogyi and Károly Farkas

Abstract: Nowadays, the proliferation of IoT (Internet of Things) devices results in heterogeneous and proprietary sensor data formats, which makes the processing and interpretation of sensor data across IoT domains challenging. Achieving syntactic interoperability (the ability to exchange uniformly structured data) is still an open research issue. In this paper, we introduce our purpose-built script language, the Language for Sensor Data Description (L4SDD), for achieving cross-domain syntactic interoperability. L4SDD defines a unified output data format and specifies how data is to be converted into this format from the various sensor inputs. Besides the language itself, we also present the main features of the workbench created to edit and maintain the scripts, and introduce the IoT framework around the solution. The approach provides a performant, secure and easy-to-use way to transform sensor data into an easily processed, self-describing, universal data structure. Although the paper contains implementation details, the solution can be used in other, similar projects as well. As practical validation, we also illustrate our solution via a real-life case study.

Paper Nr: 103
Title:

Towards a Visualization of Multi-level Metamodeling Techniques

Authors:

Sándor Bácsi and Gergely Mezei

Abstract: In the recent decade, a wide range of tools and methodologies have been introduced in the field of multi-level metamodeling. One of the newest approaches is the Dynamic Multi-Layer Algebra (DMLA). DMLA incorporates a fully self-modeled textual operation language on top of the tuple-based model entity representation. This textual language simplifies editing the models, but it has drawbacks, especially in following evolving requirements. In this paper, we introduce a visualization concept that can support more effective manipulation of a particular model and facilitate the process of multi-level metamodeling within DMLA.

Paper Nr: 106
Title:

On Graphical User Interface Verification

Authors:

Abdulaziz Alkhalid and Yvan Labiche

Abstract: Graphical User Interface (GUI) testing, for instance by means of capture-and-replay tools, is computationally expensive. In this paper, we present an approach to GUI verification that is not GUI (verification) testing. Using this approach, we study the input provided by an actor to the GUI and the output of the GUI to the underlying functionality. We also verify relations between those inputs and outputs. We describe the approach and discuss first steps towards its validation in terms of fault detection, using a real, though simple, GUI-based software system as well as a synthetic one.

Paper Nr: 109
Title:

Grasping Primitive Enthusiasm - Approaching Primitive Obsession in Steps

Authors:

Edit Pengő and Péter Gál

Abstract: Primitive Obsession is a type of code smell that has lacked the attention of the research community. Although, as a code smell, it can be a useful indicator of underlying design problems in the source code, only one automated detection method has previously been presented. In this paper, Primitive Obsession is discussed and multiple variants of Primitive Enthusiasm are defined. Primitive Enthusiasm is a metric designed to highlight code parts possibly infected with Primitive Obsession. Additionally, other supplemental metrics are presented to grasp further aspects of Primitive Obsession. The current implementation of the described metrics targets Java, and the evaluation was done on three open-source Java systems.

Paper Nr: 110
Title:

JavaScript Guidelines for JavaScript Programmers - A Comprehensive Guide for Performance Critical JS Programs

Authors:

Gábor Lóki and Péter Gál

Abstract: Programming guidelines are used for almost every programming language. Guidelines can differ for each project and each programmer. In general, however, they try to give a common format for a given project in some respect. This can be code-style related or even performance related. A performance guideline tries to help programmers write code that can be executed quickly by the computer. For statically compiled languages, numerous performance guidelines are available. In the web era, the JavaScript language is used extensively by many developers. For this language, performance guidelines are not as widespread, although there are a few research papers about them. Additionally, the language has incorporated new constructs in its newer versions. In this paper, some of the new ECMAScript 6 constructs are investigated to determine whether they should be used in a performance-sensitive JavaScript application. The constructs are compared with their ECMAScript 5.1 variants. To give a more usable set of guidelines, the tests are performed on multiple JavaScript engines, ranging from server-side JS engines to engines that can be used in embedded systems.

Paper Nr: 116
Title:

Improved Effort Estimation of Heterogeneous Ensembles using Filter Feature Selection

Authors:

Mohamed Hosni, Ali Idri and Alain Abran

Abstract: Estimating the amount of effort required to develop a new software system remains a central activity in software project management. Thus, providing an accurate estimate is essential to adequately manage the software life cycle. For that purpose, many paradigms have been proposed in the literature, among them Ensemble Effort Estimation (EEE). EEE consists of predicting the effort of a new project using more than one single predictor. This paper aims at improving the prediction accuracy of heterogeneous ensembles whose members use filter feature selection. Three groups of ensembles were constructed and evaluated: ensembles without feature selection, ensembles with one filter, and ensembles with different filters. The overall results suggest that the use of different filters leads to more accurate heterogeneous ensembles, and that the ensembles whose members use a single filter were the worst.
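A filter, in this sense, scores features independently of any predictor. A minimal sketch of one common filter, correlation-based ranking, is shown below; the filters actually evaluated in the paper may differ, and the input layout is a hypothetical illustration:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0  # constant feature -> zero correlation

def filter_select(features, effort, k):
    """Rank features by |correlation with effort| and keep the top k.

    features maps feature name -> list of values across past projects
    (a hypothetical layout for this illustration); effort is the target.
    """
    ranked = sorted(features, key=lambda f: abs(pearson(features[f], effort)),
                    reverse=True)
    return ranked[:k]
```

In a heterogeneous ensemble, each member predictor could be trained on the subset chosen by its own filter, and "different filters" then means each member ranks features with a different scoring function.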

Paper Nr: 118
Title:

On using UML Diagrams to Identify and Assess Software Design Smells

Authors:

Thorsten Haendler

Abstract: Deficiencies in software design or architecture can severely impede and slow down software development and maintenance progress. Bad smells and anti-patterns can be an indicator of poor software design and suggest refactoring the affected source code fragment. In recent years, multiple techniques and tools have been proposed to assist software engineers in identifying smells and guiding them through corresponding refactoring steps. However, these detection tools only cover a modest number of smells so far and also tend to produce false positives, which represent conscious constructs with symptoms similar or identical to actual bad smells (e.g., design patterns). These and other issues in the detection process call for a code or design review in order to identify (missed) design smells and/or re-assess detected smell candidates. UML diagrams are the quasi-standard for documenting software design and are often available in software projects. In this position paper, we investigate whether (and to what extent) UML diagrams can be used for identifying and assessing design smells. Based on a description of difficulties in the smell detection process, we discuss the importance of design reviews. We then investigate to what extent design documentation in terms of UML2 diagrams allows for representing and identifying software design smells. In particular, 14 kinds of design smells and their representability in UML class and sequence diagrams are analyzed. In addition, we discuss further challenges for the UML-based identification and assessment of bad smells.

Posters
Paper Nr: 27
Title:

A Teaching Approach of Software Measurement Process with Gamification - An Experimental Study

Authors:

Lennon Sales Furtado and Sandro Ronaldo Bezerra Oliveira

Abstract: The literature points out obstacles in teaching the measurement process. Among these, one of the most notorious is the way in which the process is taught, generally purely by means of lectures. In order to address this problem, some authors point out that it is important to use alternative teaching proposals, such as gamification. To this end, this paper presents a proposal to teach the software measurement process through a gamified classroom. Its purpose is to encourage interaction in the classroom and thereby foster interest in the process. Using this proposal, an experiment was conducted in a class of 15 students; as a means of evaluation, a questionnaire was applied in which the students rated the modules of the proposal against different criteria. As a result, more than 80% of the evaluations rated the gamification proposal as good or excellent. The proposal helped the teaching of software measurement by creating a competitive and collaborative environment centred on classroom interactions.

Paper Nr: 28
Title:

An Investigation into the Energy Consumption of HTTP POST Request Methods for Android App Development

Authors:

Hina Anwar, Dietmar Pfahl and Satish Srirama

Abstract: Producing energy-efficient applications without compromising performance is a difficult job for developers, as energy consumption affects the utility of smart devices. In this paper, we conducted a small-scale evaluation of selected implementations using different methods for making HTTP POST requests. In the evaluation, we measured how much energy is consumed by each implementation and how varying the message payload size affects the energy consumption of each implementation. Our results provide useful guidance for mobile app developers. In particular, we found that the implementation using OkHttp consumes less energy than the implementations using the HttpURLConnection or Volley libraries. These results serve to inform developers about the energy consumption of different HTTP POST request methods.

Paper Nr: 29
Title:

End-User Software Engineering in K-12 by Leveraging Existing Curricular Activities

Authors:

Ilenia Fronza and Claus Pahl

Abstract: In recent years, an increasing number of people (called “end-users”) have started to perform a range of activities related to software development, such as coding with domain-specific languages. Research in the area of End-User Software Engineering (EUSE) aims at improving the quality of end-user-produced software by paying attention to the entire software life cycle. The increasing number of activities dedicated to the diffusion of coding in K-12 motivates the need for EUSE in this environment as well. Indeed, students will probably need to produce software in their future careers (even if not professionally), and the quality of their software may be crucial. In this work, we describe a didactic module in which the activities usually carried out in the existing study programme are exploited to introduce software engineering principles. The module does not shift the students’ attention from their main objectives and does not introduce additional lectures on software engineering topics. We describe the results of a first edition of the module, which involved 17 students in a trilingual international high school. The results are promising and allow us to formulate hypotheses for further work, such as extending our approach to other activities and observing if and when students develop a “software engineering mindset”, even without developing software.

Paper Nr: 30
Title:

Analysis of Ensemble Models for Aging Related Bug Prediction in Software Systems

Authors:

Shubham Sharma and Sandeep Kumar

Abstract: With the evolution of the software industry, growing software complexity has led to an increase in the number of software faults. Studies show that software faults are responsible for many unplanned system outages and affect the reputation of a company. Many techniques have been proposed to avoid software failures, yet failures remain common. Many software faults and failures are the outcome of a phenomenon called software aging. In this work, we present the use of various ensemble models to develop an approach for predicting Aging Related Bugs (ARBs). A comparative analysis of different ensemble techniques (bagging, boosting and stacking) is presented, together with a comparison against the base learning techniques, which has not previously been explored in the prediction of ARBs. The experimental study was performed on the LINUX and MYSQL bug datasets collected from the Software Aging and Rejuvenation Repository.

Paper Nr: 71
Title:

Evolving a Model for Software Process Context: An Exploratory Study

Authors:

Diana Kirk and Stephen G. MacDonell

Abstract: In the domain of software engineering, our efforts as researchers to advise industry on which software practices might be applied most effectively are limited by our lack of evidence-based information about the relationships between context and practice efficacy. In order to accumulate such evidence, a model for context is required. We are in the exploratory stage of evolving a model of context for situated software practices. In this paper, we overview the evolution of our proposed model. Our analysis has exposed a lack of clarity in the meanings of terms reported in the literature. Our base model dimensions are People, Place, Product and Process. Our contributions are a deepened understanding of how to scope contextual factors when considering software initiatives and the proposal of an initial theoretical construct for context. Study limitations relate to possible subjectivity in the analysis and a restricted evaluation base. In the next stage of the research, we will collaborate with academics and practitioners to formally refine the model.

Paper Nr: 77
Title:

Cross Project Software Defect Prediction using Extreme Learning Machine: An Ensemble based Study

Authors:

Pravas Ranjan Bal and Sandeep Kumar

Abstract: Cross-project defect prediction involves predicting software defects in a new software project based on the historical data of another project. Many researchers have successfully developed defect prediction models using conventional machine learning and statistical techniques for within-project defect prediction, and some have also proposed models for cross-project defect prediction. However, the performance of these models is observed to degrade on different datasets, and their completeness is very poor. We have investigated the use of the extreme learning machine (ELM) for cross-project defect prediction. Further, this paper investigates the use of ELM in a nonlinear heterogeneous ensemble for defect prediction. We therefore present an efficient nonlinear heterogeneous extreme learning machine ensemble (NH ELM) model for cross-project defect prediction to alleviate these issues. To validate the ensemble model, we leverage twelve PROMISE and five Eclipse datasets for experimentation. The experimental results and analysis show that the presented nonlinear heterogeneous ensemble model provides better prediction accuracy than single defect prediction models. The completeness analysis also shows that the ensemble model improves completeness compared to single prediction models on both the PROMISE and Eclipse datasets.

Paper Nr: 97
Title:

A Symbolic Model Checker for Database Programs

Authors:

Angshuman Jana, Md. Imran Alam and Raju Halder

Abstract: Most existing model checking approaches target mainstream languages without considering database statements. As a result, they are not directly applicable to verifying the correctness of database applications. On the other hand, the few works in the literature that address the verification of database applications focus on atomicity constraints, transaction properties, etc. In this paper, as an alternative, we propose the design of a symbolic model checker for database programs to verify integrity properties defined over database attributes. The proposed model checker is designed around the following key modules: (i) Abstraction, (ii) Verification, and (iii) Refinement.

Paper Nr: 104
Title:

Deconstructing the Refactoring Process from a Problem-solving and Decision-making Perspective

Authors:

Thorsten Haendler and Josef Frysak

Abstract: Refactoring is the process of improving a software system’s internal technical quality by modifying and restructuring its source code without changing its external behavior. Manual identification and assessment of refactoring candidates, as well as planning and performing the refactoring steps, are complex and tedious tasks, for which several tools and techniques for automation and decision support have been proposed in recent years. Despite these advances, refactoring is still a neglected part of software engineering in practice, which is attributed to several barriers that prevent software practitioners from refactoring. In this paper, we present an approach for deconstructing the refactoring process into decision problems and corresponding decision-making sub-processes. Within this, we pursue the question of whether and how a theoretical perspective can contribute to a better understanding of the difficulties in the refactoring process (barriers) and help improve refactoring support techniques (enablers). For this purpose, we follow a deductive reasoning approach by applying concepts from decision-making research to deconstruct the refactoring process. As a result, we present a process model which integrates the primary decision problems and corresponding decision-making sub-processes in refactoring. Based on this process model, software companies can gain a better understanding of decision-making in the refactoring process. We finally discuss the applied procedure and reflect on the limitations and potential of applying such a theoretical perspective.

Paper Nr: 107
Title:

Towards GUI Functional Verification using Abstract Interpretation

Authors:

Abdulaziz Alkhalid and Yvan Labiche

Abstract: Abstract interpretation is a static analysis technique used mostly for the non-functional verification of software. In this paper, we show the status of technology implementing abstract interpretation that can help in GUI-based software verification. Specifically, we investigate the use of the Julia tool for the functional verification of a Graphical User Interface (GUI).

Area 3 - Software Systems and Applications

Full Papers
Paper Nr: 4
Title:

From Theory to Practice: The Challenges of a DevOps Infrastructure as Code Implementation

Authors:

Clauirton Siebra, Rosberg Lacerda, Italo Cerqueira, Jonysberg P. Quintino, Fabiana Florentin, Fabio Q. B. da Silva and Andre L. M. Santos

Abstract: DevOps is a recent approach that intends to improve the collaboration between development and IT operations teams, in order to establish a continuous and efficient deployment process. Previous studies show that DevOps is based on dimensions such as a culture of collaboration, automation and monitoring. However, few studies discuss the current frameworks that support these dimensions, so there is a lack of information that could assist development teams in deciding on the most adequate framework for their needs. This work aims at presenting a practical DevOps implementation and analysing how the process of software delivery and infrastructure change was automated. Our approach follows the principles of infrastructure as code, where a configuration platform – PowerShell DSC – was used to automatically define reliable environments for continuous software delivery. We then compare this approach with alternatives such as the Chef and Puppet tools, stressing the features, advantages and challenges of each strategy. The lessons learned from this work are then used to create a more concrete set of practices that can assist the transition from traditional approaches to an automated process of continuous software delivery.

Paper Nr: 6
Title:

A Commit Change-based Weighted Complex Network Approach to Identify Potential Fault Prone Classes

Authors:

Chun Yong Chong and Sai Peck Lee

Abstract: Over the past few years, attention has been focused on utilizing complex network analysis to gain a high-level abstract view of software systems. While many studies have proposed using interactions between software components at the variable, method, class, or package level, or a combination of multiple levels, few have investigated how software change history and evolution patterns can be used as a basis for modeling a software-based weighted complex network. This paper attempts to fill this gap by proposing an approach to model a commit change-based weighted complex network from historical software change and evolution data captured from GitHub repositories, with the aim of identifying potentially fault-prone classes. Experiments were carried out on three open-source software systems to validate the proposed approach. Using the well-known change burst metric as a benchmark, the proposed method achieved an average precision of 0.77 and recall of 0.8 across all three test subjects.
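The reported precision and recall follow the standard set-based definitions; as a reminder (this is not the authors' code, just the textbook formulas), they can be computed over the set of classes the approach flags and the set the change burst benchmark flags:

```python
def precision_recall(predicted, actual):
    """Precision and recall of a predicted set against a benchmark set."""
    tp = len(predicted & actual)                           # true positives
    precision = tp / len(predicted) if predicted else 0.0  # flagged and correct
    recall = tp / len(actual) if actual else 0.0           # benchmark items found
    return precision, recall
```

For example, flagging four classes of which three also appear in a four-class benchmark set yields precision and recall of 0.75 each.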

Paper Nr: 7
Title:

A Social Multi-agent Cooperation System based on Planning and Distributed Task Allocation: Real Case Study

Authors:

Dhouha Ben Noureddine, Atef Gharbi and Samir Ben Ahmed

Abstract: In multi-agent systems, agents cooperate socially with their neighboring agents to accomplish their goals. In this paper, we propose an agent-based architecture to handle different services and tasks; in particular, we focus on individual planning and distributed task allocation. We introduce multi-agent planning in which each agent uses fuzzy logic to select among alternative plans. We also propose an effective task allocation algorithm able to manage loosely coupled distributed environments where agents and tasks are heterogeneous. We illustrate our line of thought with a Benchmark Production System, used as a running example to better explain our contribution. A set of experiments shows the efficiency of our planning approach and the performance of our distributed task allocation method.

Paper Nr: 13
Title:

Analysis of GPGPU Programs for Data-race and Barrier Divergence

Authors:

Santonu Sarkar, Prateek Kandelwal, Soumyadip Bandyopadhyay and Holger Giese

Abstract: Today’s business and scientific applications have high computing demands due to increasing data sizes and the demand for responsiveness. Many such applications have a high degree of parallelism, and GPGPUs emerge as fit candidates for the demand. GPGPUs can offer an extremely high degree of data parallelism owing to an architecture with many computing cores. However, unless the programs written to exploit the architecture are correct, the potential gain in performance cannot be achieved. In this paper, we focus on two important properties of programs written for GPGPUs, namely i) data-race conditions and ii) barrier divergence. We present a technique to identify the existence of these properties in a CUDA program using a static property verification method. The proposed approach can be used in tandem with the normal application development process to help programmers remove bugs that can impact performance, and to improve the safety of a CUDA program.

Paper Nr: 14
Title:

An Automatic Test Data Generation Tool using Machine Learning

Authors:

Ciprian Paduraru and Marius-Constantin Melemciuc

Abstract: This paper discusses an open-source tool that can assist users in generating test data automatically for multiple programs under test. The tool works by clustering input data from a corpus folder and producing a generative model for each of the clusters. The models have a recurrent neural network structure, and their training and sampling are parallelized with TensorFlow. The tool supports online updating of the corpus folder and the already trained models, and supports any kind of program under test or input file example. No manual effort is required from users, other than customizing per-cluster parameters for optimization or using function hooks through a data structure that acts as an expert system. The evaluation section shows the efficiency of both learning and code coverage using concrete programs and new test sampling methods.

Paper Nr: 38
Title:

Orchestrating Functional Change Decisions in Scrum Process using COSMIC FSM Method

Authors:

Asma Sellami, Mariem Haoues, Nour Borchani and Nadia Bouassida

Abstract: Because user requirements change frequently during the software life-cycle, project managers have looked to flexible methods that manage changes as new information becomes available. Today, agile methods such as Scrum are increasingly used in software organizations to avoid the danger of “scope creep” and project delays. Although the Scrum method is gaining popularity, a change request is still poorly evaluated within an ongoing sprint. Moreover, there is a lack of a structured change evaluation approach that helps in making an appropriate decision after a Functional Change (FC) request. The main objective of this paper is to propose a decision support tool that assists decision-makers in deciding whether to accept, deny, or defer a given FC request. For this purpose, we use the COSMIC Functional Size Measurement (FSM) method to evaluate an FC request. We distinguish between an FC affecting an ongoing sprint and an FC affecting an implemented sprint. Based mainly on the functional size of the proposed FC and the development progress, we provide recommendations to the decision-makers.

Paper Nr: 68
Title:

WIoT: Interconnection between Wise Objects and IoT

Authors:

Ilham Alloui, Eric Benoit, Stéphane Perrin and Flavien Vernier

Abstract: Internet of Things (IoT) technologies remain young and require software technologies that ensure data/information management among things in order to deliver more sophisticated services to their users. In particular, users of IoT-based technologies need systems that adapt to their use, not the reverse. To meet those requirements, we enriched our object-oriented framework WOF (Wise Object Framework) with a communication structure to interconnect WOs (Wise Objects) and the IoT. Things from the IoT are then able to learn, monitor and analyze data in order to adapt their behavior. In this paper, we recall the underlying concepts of our framework and then focus on the interconnection between WOs and the IoT, provided through a software bus-based architecture and IoT-related communication protocols. We designed a dedicated communication protocol for IoT objects. We show how IoT objects can benefit from the learning, monitoring and analysis mechanisms provided by the WOF to identify the common usage of a system and unusual behavior (habit change). We illustrate this through a particular case of home automation.

Paper Nr: 85
Title:

A Software Product Line Approach for Feature Modeling and Design of Secure Connectors

Authors:

Michael Shin, Hassan Gomaa and Don Pathirage

Abstract: This paper describes a software product line approach to modeling the variability of secure software connectors by means of a feature model consisting of security pattern and communication pattern features. These features are used in the design of secure component-based software architectures for concurrent and distributed software applications. Applying separation of concerns, the features are designed as security and communication pattern components. Each secure connector is designed as a composite component that encapsulates both security pattern and communication pattern components; the integration of these components within a secure connector is enabled by a security coordinator. This paper describes the feature model, the design of secure connectors, how applications are built using secure connectors, and the validation of the approach.

Paper Nr: 91
Title:

Scalable Supervised Machine Learning Apparatus for Computationally Constrained Devices

Authors:

Jorge López, Andrey Laputenko, Natalia Kushik, Nina Yevtushenko and Stanislav N. Torgaev

Abstract: Computationally constrained devices are devices with typically low resources/computational power built for specific tasks. At the same time, recent advances in machine learning, e.g., deep learning or hierarchical and cascade compositions of machines, that make it possible to accurately predict or classify values of interest such as quality, trust, etc., require high computational power. Often, such complicated machine learning configurations are possible thanks to advances in processing units, e.g., Graphical Processing Units (GPUs). Computationally constrained devices can also benefit from such advances, and an immediate question arises: how? This paper is devoted to answering that question. Our approach proposes to use scalable representations of trained models through the synthesis of logic circuits. Furthermore, we showcase how a cascade machine learning composition can be achieved using ‘traditional’ digital electronic devices. To validate our approach, we present a set of preliminary experimental studies that show how different circuit implementations clearly outperform (in terms of processing speed and resource consumption) current machine learning software implementations.
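
The core idea, representing a trained model as a logic circuit, can be sketched on a toy example. The decision tree and gate-level circuit below are assumptions for illustration, not the authors' synthesis flow:

```python
# Illustrative sketch: compile a tiny decision tree over boolean features
# into AND/OR/NOT gates and check the circuit agrees with the tree on
# every possible input. (Invented example, not the paper's flow.)
from itertools import product

def tree_predict(x):           # x = (a, b, c), boolean features
    a, b, c = x
    if a:
        return b               # left branch of the tree
    return b and c             # right branch of the tree

def circuit_predict(x):        # same function expressed as gates:
    a, b, c = x                # out = (a AND b) OR ((NOT a) AND b AND c)
    g1 = a and b
    g2 = (not a) and b and c
    return g1 or g2

# Exhaustive equivalence check over the 2^3 input combinations.
assert all(tree_predict(x) == circuit_predict(x)
           for x in product([False, True], repeat=3))
```

Once a model is in gate form, it can be evaluated by cheap combinational hardware with no floating-point unit at all, which is the point of the approach for constrained devices.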

Paper Nr: 100
Title:

A Refinement based Verification Approach of BPMN Models using NuSMV

Authors:

Salma Ayari, Yousra Bendaly Hlaoui and Leila Jemni Ben Ayed

Abstract: Modeling complex workflow systems using BPMN (Business Process Modeling Notation) is gaining increasing attention from researchers in the distributed systems field. The step-wise refinement technique facilitates the understanding of complex systems by dealing with the major issues before getting involved in the details. In this paper, we propose a verification technique based on the refinement of BPMN processes, which allows an application to be modeled by refinement and the required properties to be introduced gradually at each level, from the abstract to the concrete. We introduce refinement patterns allowing the design of a complex application at different abstraction levels. A formal semantics for BPMN models, based on Kripke structures and the BPMN refinement patterns, is provided for the formal verification of their correctness. This verification is performed automatically by the NuSMV model checker, based on a transformation from the BPMN language to the NuSMV language. Refinement correctness is expressed as safety properties specified in LTL (Linear Temporal Logic).
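
Conceptually, verifying a safety property such as G(!error) amounts to checking that no reachable state of the Kripke structure is labelled with the error proposition; NuSMV does this symbolically. A minimal explicit-state sketch (hypothetical model, BPMN-to-NuSMV translation omitted):

```python
# Minimal sketch of what a safety check over a Kripke structure does:
# for G(!error), verify no reachable state carries the 'error' label.
from collections import deque

def check_safety(init, transitions, labels, bad="error"):
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if bad in labels.get(s, set()):
            return False            # counterexample: G(!error) violated
        for t in transitions.get(s, []):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True                     # property holds on all reachable states

# Abstract-level process: start -> task -> end, no error state reachable.
ok = check_safety("start", {"start": ["task"], "task": ["end"]},
                  {"end": set()})
```

Refinement then re-runs the check at each more concrete level, so a property established on the abstract model is re-validated as detail is added.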

Short Papers
Paper Nr: 17
Title:

Why Do We Need the C language in Programming Courses?

Authors:

Katsuhiko Gondow and Yoshitaka Arahori

Abstract: The C language is still one of the most important programming languages, both in development and in education, since C has several positive characteristics such as good abstractions for low-level programming and fast execution with a small footprint, although it also has several drawbacks and pitfalls such as buffer overruns. Several research studies therefore aim to support C programming education and compensate for C's drawbacks and pitfalls, but there is a skeptical view of this research direction: “C is a bad language, so it is better to stop teaching C instead of supporting C education”. In this position paper, we argue against this skeptical view, mainly because C is very important in upper-level courses such as embedded/system programming and operating systems, and thus worth teaching.

Paper Nr: 25
Title:

Machine Learning for the Internet of Things Security: A Systematic Review

Authors:

Darko Andročec and Neven Vrček

Abstract: The Internet of Things (IoT) is nowadays one of the fastest growing technologies for both private and business purposes. Due to the large number of IoT devices and their rapid introduction to the market, the security of things and their services is often not at the expected level. Recently, machine learning algorithms, techniques, and methods have been used in research papers to enhance IoT security. In this paper, we systematically review the state of the art to classify the research on machine learning for IoT security. We analysed the primary studies and identified the types of studies and publication fora. Next, we extracted all machine learning algorithms and techniques described in the primary studies and identified the ones most used to tackle IoT security issues. We classify the research into three main categories (intrusion detection, authentication, and other) and describe the primary studies in detail to analyse existing relevant work and propose topics for future research.

Paper Nr: 55
Title:

Reference Architecture Design: A Practical Approach

Authors:

Mustapha Derras, Laurent Deruelle, Jean Michel Douin, Nicole Levy, Francisca Losavio, Yann Pollet and Valérie Reiner

Abstract: Reference Architectures (RA) in a Software Product Line (SPL) context are generic schemas that can be instantiated to configure concrete architectures for particular software systems or products of the SPL family. SPLs claim to be reusable industrial solutions that reduce development cost and time to market; however, their development requires a huge effort, since the RA must be evolutionary. The goal of this work in progress is to present a practical RA domain engineering method for the Human Resources domain, based on a bottom-up strategy applied to the early Scope phase of SPL engineering. A usual industrial practice in this context is to start from a single existing product built by the enterprise and incrementally derive the RA by adding new elements. Four architectural configurations are developed, and quality properties are considered early as a major driver of SPL variability. Our approach is applied to a real industrial case study.

Paper Nr: 57
Title:

On Handling Source Code Positions and Local Variables in LTL Software Model Checking

Authors:

Guillaume Hétier and Hanifa Boucheneb

Abstract: Software model checking techniques can guarantee that a system respects a specification. However, some limitations reduce the expressiveness of the most used specification formalisms (assertions and LTL) and increase the risk of error, especially for concurrent programs. We design a new specification formalism that extends LTL by allowing local variables and code positions in LTL atomic propositions. We introduce validity areas to extend the definition of atomic propositions using local variables and to handle positions in source code. Then, we introduce a source-to-source transformation that reduces the LTL verification problem to an assertion verification problem for finite programs by building the product of the program source code and an implementation of a Büchi automaton. Finally, we apply this transformation to verify a small benchmark specified with the proposed formalism.
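
The reduction from LTL checking to assertion checking can be sketched for a simple safety property. The toy monitor below is an assumption for illustration; the paper's construction handles full LTL via a product with a Büchi automaton:

```python
# Hedged sketch of the reduction idea: instrument a finite program with a
# monitor so that violating the temporal property trips a plain assertion.
# Toy monitor for the safety property G(balance >= 0).
class Monitor:
    def __init__(self):
        self.violated = False

    def step(self, balance):        # invoked after every program step
        if balance < 0:
            self.violated = True

def run(ops):
    m, balance = Monitor(), 0
    for delta in ops:               # the "program": a sequence of updates
        balance += delta
        m.step(balance)             # product of program state x monitor
    assert not m.violated, "property G(balance >= 0) violated"
    return balance
```

An ordinary assertion checker applied to the instrumented program then decides the temporal property, which is the point of the source-to-source transformation.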

Paper Nr: 61
Title:

From a BPMN Model to an Aligned UML Analysis Model

Authors:

Wiem Khlif, Nouchène Elleuch Ben Ayed and Hanêne Ben-Abdallah

Abstract: Aligning the information system (IS) of an enterprise to its corresponding Business Process (BP) model is crucial to the consistent analysis of business performance. However, establishing or maintaining this BP-IS alignment is not trivial when the enterprise develops a new IS or changes its IS or BP. The difficulty mainly stems from the differences in knowledge between information system developers and business process experts. This paper proposes a new requirements engineering method that helps software analysts build an IS analysis model aligned to a given BP model. The built model can be used to develop a new IS and/or to examine the deviation of the new IS from the existing one after BP/IS evolution. The proposed method adopts an MDA approach where, at the CIM level, the BP is modelled in standard BPMN and, at the PIM level, the aligned IS model is generated as a UML use case diagram documented with a set of system sequence diagrams and the corresponding class diagram. Its originality resides in the CIM-to-PIM transformations, which account for the structural and semantic perspectives of the BP to generate an aligned IS model.

Paper Nr: 86
Title:

A Software Product Line Approach to Designing End User Applications for the Internet of Things

Authors:

Vasilios Tzeremes and Hassan Gomaa

Abstract: The ubiquity of the Internet of Things (IoT) has made a big impact in creating smart spaces that can sense and react to human activities. The natural progression of these spaces is for end users to create customized applications that suit their everyday needs. One shortcoming of current approaches is a lack of reuse: end users have to design similar applications from scratch for different smart spaces, which leads to duplication of effort and software quality issues. This paper describes a systematic approach to adopting reuse in IoT by using Software Product Line (SPL) concepts together with design patterns relevant to these environments. In detail, the paper describes the End User (EU) SPL process that can be used to design EU SPLs for IoT environments and derive applications for different smart spaces. A Smart Home case study is discussed to illustrate the inner workings of the EU SPL process for IoT applications.

Paper Nr: 88
Title:

Open Source Data Mining Tools Evaluation using OSSpal Methodology

Authors:

Any Keila Pereira, Ana Paula Sousa, João Ramalho Santos and Jorge Bernardino

Abstract: Data mining currently offers efficient ways to analyse massive data sets and extract hidden, useful knowledge that can have value to business. The use of open source data mining tools has the advantage of not increasing acquisition costs for companies and organizations. However, one of the main challenges is to choose the open source data mining tool that best meets their specific needs. This paper compares three of the top open source data mining tools: KNIME, RapidMiner, and Weka. For the comparison, the OSSpal methodology is used, combining quantitative and qualitative evaluation measures to identify the best tool.

Paper Nr: 101
Title:

System to Predict Diseases in Vineyards and Olive Groves using Data Mining and Geolocation

Authors:

Luís Alves, Rodrigo Rocha Silva and Jorge Bernardino

Abstract: In recent years, producers have complained about disease attacks on their crops, due in large part to weather conditions, which lead to heavy losses. Information and communication technology in agriculture offers a wide range of solutions to some agricultural challenges and can increase the productivity and profitability of a farm. This paper proposes a system to predict diseases in vineyards and olive groves using data mining and geolocation. Grapevine Downy Mildew, Powdery Mildew, Peacock Spot and Olive Anthracnose are the diseases used to test the system, because they cause large production losses and, consequently, very small profits and large economic losses. The system captures and stores climatic and environmental data, as well as data about the producers and their properties. The data collected by the system are used to predict diseases using data mining. To calculate the probability of disease occurrence, we chose the Random Forest algorithm provided by Weka, an open source system that offers a collection of visualization tools and algorithms for data analysis and predictive modelling. The main objective of the system is to help producers in a preventive way, so that there is less loss in the production of these agricultural crops.
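
As a loose illustration of the voting principle behind Random Forest (the system itself relies on Weka's implementation; the stumps and thresholds below are invented for the example):

```python
# Illustrative stand-in for the Weka Random Forest used in the paper:
# an ensemble of hand-written decision stumps voting on disease risk
# from (temperature, humidity) readings. Thresholds are made up.
def stump_humidity(t, h):
    return h > 85            # prolonged leaf wetness favours infection

def stump_temp(t, h):
    return 10 < t < 25       # a mildew-friendly temperature range

def stump_combined(t, h):
    return h > 75 and t > 15

FOREST = [stump_humidity, stump_temp, stump_combined]

def predict_risk(temperature, humidity):
    votes = sum(s(temperature, humidity) for s in FOREST)
    return votes / len(FOREST)   # fraction of "trees" predicting disease
```

A real Random Forest learns many deeper trees from labelled climate data, but the output is the same kind of vote fraction, which the system can threshold to trigger a preventive alert.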

Paper Nr: 102
Title:

A Comprehensive Approach to Compliance Management in Inter-organizational Service Integration Platforms

Authors:

Laura González and Raúl Ruggia

Abstract: Organizations increasingly collaborate with each other using the service-oriented approach. Such collaboration is usually supported by integration platforms which process all inter-organizational service interactions. In turn, these collaborative environments have to satisfy compliance requirements originating from different sources (e.g. laws, standards). This paper proposes a comprehensive approach to compliance management in inter-organizational service integration platforms. The approach defines a compliance management life cycle for this context and a common framework to homogeneously manage compliance issues in different areas.

Posters
Paper Nr: 2
Title:

Issue Tracking Systems: What Developers Want and Use

Authors:

Davide Falessi, Freddy Hernandez and Foaad Khosmood

Abstract: An Issue Tracking System (ITS) allows a developer to keep track of, prioritize, and assign multitudes of bugs, feature requests, and other development tasks such as testing. Although ITSs play a significant role in developers' day-to-day activities, no previous study has investigated what developers want and use in an ITS. The aim of this paper is twofold. First, we provide a feature matrix that maps six of the most used ITSs to features; second, we measure the developers' level of use and perceived importance of each feature. This knowledge has multiple benefits, such as supporting the decision of which ITS to use and revealing promising areas of research and development. Specifically, quality improvement effort should target improving the functionality in use, and development effort should target supporting the functionalities needed. In this paper, we define and extract ten core ITS features and ask more than a hundred developers to rate their importance and use. Our results show that Advanced Search and Flexible Notifications are the most important features. Moreover, no feature has been used by more than 90% of the respondents. Another interesting finding is that 27% of respondents rate Workflow Automation as a useful or required feature despite having never used it themselves; this suggests the need for better training in, exposure to, or availability of ITS features. In conclusion, our results pave the way for significant research and development effort on ITSs.

Paper Nr: 18
Title:

Adapting a Component-based Model Approach to SOA: A Robotic Experience

Authors:

Francisca Rosique, Nour Ali and Fernando Losilla

Abstract: C-Forge is an approach that combines Component Based Software Engineering (CBSE) and Model Driven Software Development (MDSD), and has been previously used to define the software architecture of robotic systems. However, as robotic systems become part of a dynamic and heterogeneous environment, CBSE becomes limited. A paradigm that promises to easily adapt and integrate collaborative, heterogeneous and distributed systems is Service Oriented Architecture (SOA). In this paper, we enrich C-Forge with service oriented architectural primitives by extending its CBSE metamodel and Model Driven Methodology.

Paper Nr: 37
Title:

Deriving Integrated Software Design Models from BPMN Business Process Models

Authors:

Estrela Ferreira Cruz and António Miguel Rosado da Cruz

Abstract: Business process management focuses on designing, modelling and documenting business processes, describing which activities are performed and the dependencies between them. These business process models contain much useful information for starting to develop a supporting software system. This paper proposes a model-driven approach to support the construction of a use case model, an integrated domain model, and a user interface model from a set of business process models, comprising all the information existing in those models. The proposed approach obtains a system's complete use case model, including the identification of actors, use cases and the corresponding descriptions, the relations between use cases, and the relations between these and the structural domain classes. The resulting integrated use case and domain models are then further transformed into the system's default abstract user interface model. A demonstration case is used throughout the paper as a running example. At the end, conclusions are presented, together with some future research directions.

Paper Nr: 44
Title:

Importance of Time Management in IT Projects

Authors:

Artur Biskupek and Seweryn Spalek

Abstract: This article consists of two parts. The first part is a review of the literature on IT project management and time management in projects. The second part shows the results of research on the influence of time management on project success. The research tool was a questionnaire of twenty questions in three parts: the first was the imprint; the second concerned the way of defining IT projects and time management in IT projects; the third concerned the topic of the research. The authors invited 75 project managers to participate in the research, and 53 of them sent back a completed questionnaire. After analysing and interpreting the results, the research shows that the impact of time management is so significant that it can be said to decide the success or failure of the whole project, and that it has to be managed throughout the project. The authors were aware that the research was limited by its small sample and geographical constraints, so further, more extensive research is suggested in order to validate these results.

Paper Nr: 51
Title:

Automatic Properties Classification Approach for Guiding the Verification of Complex Reconfigurable Systems

Authors:

Mohamed Ramdani, Laid Kahloul and Mohamed Khalgui

Abstract: This paper deals with reconfigurable discrete event/control systems (RDECSs), which dynamically change their structures due to external changes in the environment or user requirements. Reconfigurable Timed Net Condition/Event Systems (R-TNCESs) are proposed as an extension of the Petri net formalism for the optimal functional and temporal specification of RDECSs. The correct design of these systems continues to challenge experts in both academia and industry, since bugs not caught early can be extremely expensive at final deployment. Classic model checking using computation tree logic (CTL) and its extensions (extended CTL, timed CTL, etc.) produces a large number of properties, possibly redundant, to be verified in a complex R-TNCES. To control this complexity and reduce verification time, a technique for reducing the set of properties is proposed. The novelty consists in classifying CTL properties according to their semantic relationships in order to guide an efficient verification. An algorithm is proposed for the automatic classification of CTL properties before starting the model-checking process. A case study illustrates the impact of using this technique, and the current results show the benefits of the paper's contribution.
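
One simple ingredient of such a classification can be sketched as grouping properties by the atomic propositions they mention; the paper's algorithm uses richer semantic relationships, so the snippet below is only a hypothetical simplification:

```python
# Hedged sketch of the classification idea: group CTL properties by the
# atomic propositions they mention, so related properties can be checked
# together and obvious redundancies spotted before model checking.
import re
from collections import defaultdict

def atoms(ctl_formula):
    # crude extraction: identifiers that are not CTL operators/connectives
    ops = {"AG", "EF", "AF", "EG", "AX", "EX", "A", "E", "U",
           "and", "or", "not"}
    return frozenset(t for t in re.findall(r"[A-Za-z_]\w*", ctl_formula)
                     if t not in ops)

def classify(properties):
    groups = defaultdict(list)
    for p in properties:
        groups[atoms(p)].append(p)   # same atoms -> same verification group
    return dict(groups)

props = ["AG(not deadlock)", "EF(deadlock)", "AG(req -> AF ack)"]
groups = classify(props)  # the two deadlock properties land together
```

Scheduling each group as one verification batch is one way such a classification can cut redundant model-checking runs.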

Paper Nr: 56
Title:

A Formal Approach for Multi-occurrence Crisis Management

Authors:

Hela Kadri, Simon Collart-Dutilleul, Philippe Bon and Samir Ben Ahmed

Abstract: Having observed a need in the state of the art on crisis management, a formal expression of the problem is proposed. In the case of a series of neighbouring crises, the crisis management plans (CMPs) are merged into one, and the level of complexity can increase. A formal approach to control crisis management systems (CMSs) is then introduced, in which a CMS is studied as a discrete event system (DES). Based on supervisory control theory (SCT), a multi-model approach is used to define the behavior of each CMP separately, as well as its different control strategies, through the use of prioritized colored Petri nets (PCPNs). Finally, a global CMS model is provided. It is generated using an algorithm and ensures the control of CMP evolution through operating-mode management. In addition to the simulation and formal validation of some safety properties, the global model incorporates the concepts of common operating mode and common sub-behavior in order to remain of a reasonable size.

Paper Nr: 64
Title:

A Refactoring Architecture for Measuring and Identifying Spots of Design Patterns Insertion in Source Code

Authors:

Luan Bukowitz Beluzzo, Simone Nasser Matos and Thyago Henrique Pacher

Abstract: This work presents an architecture for detecting insertion spots for design patterns in object-oriented source code. The proposed architecture contains a service that implements detection methods (DMS) present in the literature, such as the identification of precursors and Prolog rules and facts, among others. The DMS notifies the Metrics Service (MS) of which patterns can be used. The evaluation of the application of the patterns undertaken by the MS is performed by means of quality metrics such as maintainability, flexibility, and so forth. The MS notifies the Client App (CA) of the advantages and disadvantages of using the eligible patterns. The CA interacts with the user to retrieve decisions about which changes to perform in the source code, according to the real benefit of each design pattern, and notifies the Applier Service (AS), which applies the patterns in the source code. The difference between the proposed architecture and the literature is that it allows thorough interaction with the user and creates an extensible environment that covers several pattern detection/insertion methods. The architecture provides automated support to users engaged in refactoring based on design patterns.

Paper Nr: 83
Title:

Revisiting the Notion of GUI Testing

Authors:

Abdulaziz Alkhalid, Yvan Labiche and Sashank Nekkanti

Abstract: A practitioner interested in reducing software verification effort may find herself lost in the many alternative definitions of Graphical User Interface (GUI) testing that exist and their relation to the notion of system testing. One result of these many definitions is that one may end up testing the same parts of the Software Under Test (SUT) twice, specifically the application logic code. We revisit the notion of GUI testing and introduce a taxonomy of activities pertaining to testing GUI-based software. We use the taxonomy to map a representative sample of research works and tools, showing that several aspects of testing GUI-based software may be overlooked.

Paper Nr: 84
Title:

Web Service Selection based on Parallel Cluster Partitioning and Representative Skyline

Authors:

Sahar Regaieg, Saloua Ben Yahia and Samir Ben Ahmed

Abstract: Optimizing the composition of web services is a multi-criteria optimization problem that consists in selecting the best web service candidates from a set of services having the same functionality but different Quality of Service (QoS). In a large-scale context, the huge number of web services leads to a great challenge: how to find the optimal web service composition, satisfying all the constraints, within a reasonable execution time. Most solutions dealing with large-scale systems propose a parallel Skyline phase performed on a partitioned data space to preselect the best web service candidates. The global Skyline is computed after the consolidation of all the local Skylines, and finally the optimization algorithm is applied. However, these partitioning approaches are based only on pure geometric rules and do not classify the web services according to their real contribution to the optimal or sub-optimal solution search area. In this paper, we propose an intelligent partitioning approach using a cluster-based algorithm combined with the representative Skyline.
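
The Skyline step itself is the standard Pareto-dominance filter; a minimal sketch over two QoS attributes (lower latency and lower cost assumed better):

```python
# Minimal sketch of the Skyline step: keep only services not dominated on
# their QoS attributes (here lower latency and lower cost are both better).
def dominates(a, b):
    """a dominates b: no worse on every attribute, better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def skyline(services):
    return [s for s in services
            if not any(dominates(o, s) for o in services if o is not s)]

# (latency_ms, cost): (10, 5) and (20, 2) survive; the rest are dominated.
candidates = [(10, 5), (20, 2), (30, 6), (12, 5)]
best = skyline(candidates)
```

Partitioning the data space lets this filter run in parallel on each partition, after which the local Skylines are merged into the global one, as the abstract describes.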

Paper Nr: 87
Title:

Formal Verification for Advanced Sensing Applications: Data Pre-processing in the INSPEX System

Authors:

Joseph Razavi, Richard Banach, Suzanne Lesecq, Olivier Debicki, Nicolas Mareau, Julie Foucault, Marc Correvon and Gabriela Dudnik

Abstract: The INSPEX project aims to miniaturize state-of-the-art obstacle detection technology comprising heterogeneous sensors and advanced processing, so that it can be used for wearable devices. The project focuses on enhancing the white cane used by some visually impaired and blind people. Due to high demand for reliability and performance, the project is a good candidate for the use of formal methods. In this paper, we report lessons we have learned from formal modelling exercises related to the pre-processing of sensor information in INSPEX.

Paper Nr: 113
Title:

On-Line Detecting Instrument of Multiple Working Modes for Optical Fiber Lines of Power System

Authors:

Wanchang Jiang, Cong Huo, Peng Ren, Shengda Wang and He Chen

Abstract: To solve the problems of false alarms, false switching and the limited availability of communication wavelengths in existing optical fiber monitoring systems, and to ensure the reliable and stable operation of the optical fiber network in the power system, an on-line detecting instrument with three working modes is designed for monitoring power optical fiber lines. The three working modes (monitoring of a working optical fiber core, detection of a spare optical fiber core, and a non-detecting mode for an optical fiber core) can be switched freely and independently for each fiber line to meet various monitoring demands. The instrument comprises detecting units, an embedded remote control unit, an optical time domain reflectometer, an optical switch, an alarm module, and an application and data server. The optical paths and circuits among units and components are realized on a printed circuit board, and an FPGA board is used to control the units and components and to handle control communication and data transmission between the software platform and the instrument hardware. The detected data are preprocessed with a wavelet transform to refine the fault diagnosis. To illustrate the effectiveness of the instrument, it was installed in a substation of the State Grid Corporation, and its effectiveness is described and analyzed through an application example.

Paper Nr: 117
Title:

Benefits, Limitations and Costs of IT Infrastructure Virtualization in the Academic Environment. Case Study using VDI Technology

Authors:

Artur Rot and Pawel Chrobak

Abstract: The article describes the economic, organisational and technological reasons for implementing VDI solutions (Virtual Desktop Infrastructure) in the learning centers of academic institutions. It presents also major disadvantages, limitations and challenges of this technology. The comparison of total costs of previous solution (PC) and VDI technology has been also discussed. In addition to the benefits, limitations and costs analysis, a case study of the implementation of a model solution at the Wroclaw University of Economics was presented, which includes almost 400 zero client terminals and over 500 virtualised systems available to students in 12 laboratories. The authors were the originators of the concept of VDI implementation at this University and leaders of the project team. The article also presents selected experiences from project implementation and comments, the discussion of which may be crucial for the successful implementation of presented solutions.