ICSOFT 2019 Abstracts


Area 1 - Foundational and Trigger Technologies

Full Papers
Paper Nr: 40
Title:

Design of Scalable and Resilient Applications using Microservice Architecture in PaaS Cloud

Authors:

David Gesvindr, Jaroslav Davidek and Barbora Buhnova

Abstract: With the increasing adoption of microservice architecture and the popularity of Platform as a Service (PaaS) cloud, software architecture design in many domains is leaning towards the composition of loosely interconnected services hosted in the PaaS cloud, which, in comparison to traditional multitier applications, introduces new design challenges that software architects need to face when aiming at high scalability and resilience. In this paper, we study the key design decisions made during microservice architecture design and deployment in the PaaS cloud. We identify major challenges of microservice architecture design in the context of the PaaS cloud, and examine the effects of architectural tactics and design patterns in addressing them. We apply selected tactics to a sample e-commerce application, consisting of microservices operated by Azure Service Fabric and utilizing other supportive PaaS cloud services within Microsoft Azure. The impact of the examined design decisions on the throughput, response time and scalability of the analyzed application is evaluated and discussed.

Paper Nr: 95
Title:

Towards an Accurate Prediction of the Question Quality on Stack Overflow using a Deep-Learning-Based NLP Approach

Authors:

László Tóth, Balázs Nagy, Dávid Janthó, László Vidács and Tibor Gyimóthy

Abstract: Online question answering (Q&A) forums like Stack Overflow have been playing an increasingly important role in supporting the daily tasks of developers. Stack Overflow can be considered a meeting point of experienced developers and those who are looking for a solution to a specific problem. Since anyone with any background and experience level can ask and respond to questions, the community tries to use different solutions to maintain quality, such as closing and deleting inappropriate posts. As over 8,000 posts arrive on Stack Overflow every day, effectively filtering them automatically is essential. In this paper, we present a novel approach for classifying questions based exclusively on their linguistic and semantic features using a deep learning method. Our binary classifier, relying on the textual properties of posts, can predict whether a question is to be closed with an accuracy of 74%, similar to the results of previous metrics-based models. Based on our findings, we conclude that by combining deep learning and natural language processing methods, quality maintenance at Q&A forums could be supported using only the raw text of posts.

Short Papers
Paper Nr: 17
Title:

On Improving 3D U-net Architecture

Authors:

Roman Janovský, David Sedláček and Jiří Žára

Abstract: This paper presents a review of various techniques for improving the performance of neural networks on the segmentation task using 3D convolutions and voxel grids – we compare networks with and without max pooling, weighting, masking out the segmentation results, and oversampling for imbalanced training datasets. We also present changes to the 3D U-net architecture that give better results than the standard implementation. Although there are many architectures using different data input that outperform it, we show that although the voxel grids serving as input to the 3D U-net have limits to what they can express, they do not yet reach their full potential.

Paper Nr: 18
Title:

A Microservice Architecture for Multimobility in a Smart City

Authors:

Cristian Lai, Francesco Boi, Alberto Buschettu and Renato Caboni

Abstract: In this paper we present a microservice architecture designed to build IoT services available on the Web. The high potential of microservice architectures will impact the design of complex systems more significantly in the coming years. We propose a draft architecture that we have used to develop an application for multimobility services in a smart city using an ecosystem of devices. This application, designed for a real case study, is extremely heterogeneous in terms of IoT devices and implements a wide range of services for citizens. It aims to contribute to reducing the traffic generated by private vehicles in the city and to help drivers heading towards high-traffic areas by presenting real-time mobility data from different sources. The evaluation of the architecture carried out in this study allows us to understand how it behaves under an increasing number of devices and users connected to the platform, particularly in terms of response time alterations caused by a large number of requests.

Paper Nr: 51
Title:

An Algorithm for Message Type Discovery in Unstructured Log Data

Authors:

Daniel Tovarňák

Abstract: Log message abstraction is a common way of dealing with the unstructured nature of log data. It refers to the separation of the static and dynamic parts of a log message, so that both parts can be accessed independently, allowing the message to be abstracted into a more structured representation. To facilitate this task, so-called message types and the corresponding matching patterns must first be discovered; only then can this pattern set be used to pattern-match individual log messages in order to extract dynamic information and impose some structure on them. Because the manual discovery of message types is a tiresome and error-prone process, we have focused our research on data mining algorithms that are able to discover message types in already generated log data. Since we have identified several deficiencies of the existing algorithms that limit their capabilities, we propose a novel algorithm for message type discovery that addresses these deficiencies.
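The static/dynamic split described above can be illustrated with a minimal sketch (a naive length-based grouping for illustration only, not the algorithm proposed in the paper):

```python
from collections import defaultdict

def discover_message_types(messages):
    """Naive message-type discovery: group messages by token count,
    then mark token positions whose values vary as dynamic ('<*>')."""
    groups = defaultdict(list)
    for msg in messages:
        groups[len(msg.split())].append(msg.split())

    patterns = []
    for token_lists in groups.values():
        columns = zip(*token_lists)  # compare the same position across messages
        patterns.append(" ".join(
            col[0] if len(set(col)) == 1 else "<*>" for col in columns))
    return patterns

logs = [
    "Connection from 10.0.0.1 closed",
    "Connection from 10.0.0.7 closed",
    "Disk usage at 91 percent on /var",
    "Disk usage at 87 percent on /home",
]
print(discover_message_types(logs))
# → ['Connection from <*> closed', 'Disk usage at <*> percent on <*>']
```

Real algorithms must cope with message types of equal length and with dynamic parts that span a variable number of tokens, which is where this toy grouping breaks down.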

Paper Nr: 74
Title:

From Confidential kNN Queries to Confidential Content-based Publish/Subscribe

Authors:

Emanuel Onica, Hugues Mercier and Etienne Rivière

Abstract: Content-based publish/subscribe (pub/sub) is an effective paradigm for information dissemination in distributed systems. In brief, publishers generate feeds of information, and subscriber clients register their interests with a pub/sub service tasked with delivering the published data to interested subscribers. Modern pub/sub services are often externalized to public clouds. This brings economic advantages that are unfortunately overshadowed by associated security risks, in particular related to the confidentiality of both the published data and the subscriptions. Guaranteeing confidentiality for content-based pub/sub in an efficient fashion is an active research area. A promising direction is to leverage specific cryptographic solutions that permit the execution of the pub/sub service over encrypted data. In this article we describe a simple and general methodology to derive new mechanisms for pub/sub confidentiality out of another category of data protection schemes: confidential kNN query mechanisms designed for encrypted databases. We exemplify this framework with a concrete use case. We believe that this initial step will lead to more secure and efficient adaptations of kNN solutions to the pub/sub domain.
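The plaintext intuition behind adapting kNN machinery to pub/sub matching can be sketched as follows (an illustration of the analogy only; the paper's contribution is making this work over encrypted data):

```python
# A range subscription [lo, hi] on a numeric attribute matches a published
# value v exactly when v lies within distance (hi - lo)/2 of the interval
# midpoint -- i.e., subscription matching reduces to a distance test of the
# kind confidential kNN query schemes already support.
def matches(subscription, value):
    lo, hi = subscription
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    return abs(value - center) <= radius

print(matches((20, 30), 25), matches((20, 30), 35))
# → True False
```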

Paper Nr: 7
Title:

An Augmented Reality Mirror Exergame using 2D Pose Estimation

Authors:

Fernando Losilla and Francisca Rosique

Abstract: Exergames have become very popular for fitness and rehabilitation purposes. They usually rely on RGB-D sensors to estimate human 3D body pose and, therefore, allow users to interact with virtual environments. Currently, a new generation of deep learning techniques enable the estimation of 2D body pose from video sequences. These video sequences could be augmented with the estimated pose and other virtual objects, resulting in augmented reality mirrors where players can see their reflection along with other visual cues that guide them through exercises. The main benefit of using this approach would be replacing RGB-D cameras with simpler and more widely available webcams. This approach is explored in this work with the development of the ExerCam exergame. This application relies on a single webcam and the OpenPose library to allow users to perform exercises where they have to reach virtual targets appearing on the screen. A preliminary study has been performed in order to explore the technical viability and usability of this application, with promising results.

Area 2 - Software Engineering and Systems Development

Full Papers
Paper Nr: 8
Title:

Test Input Partitioning for Automated Testing of Satellite On-board Image Processing Algorithms

Authors:

Ulrike Witteck, Denis Grießbach and Paula Herber

Abstract: On-board image processing technologies in the satellite domain are subject to extremely strict requirements with respect to reliability and accuracy in hard real-time. Due to their large input domain, it is infeasible to execute all possible test cases. To overcome this problem, we define a novel test approach that efficiently and systematically captures the input domain of satellite on-board image processing applications. To achieve this, we first present a dedicated partitioning into equivalence classes for each input parameter. Then, we define multidimensional coverage criteria to assess a given test suite for its coverage on the complete input domain. Finally, we present a test generation algorithm that automatically inserts missing test cases into a given test suite based on our multidimensional coverage criteria. This results in a reasonably small test suite that covers the whole input domain of satellite on-board image processing applications. We demonstrate the effectiveness of our approach with experimental results from the ESA medium-class mission PLATO.
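The combination of per-parameter equivalence classes and multidimensional coverage can be sketched in a few lines (parameter names and partitions here are hypothetical stand-ins, not the paper's actual partitioning):

```python
from itertools import product

# Hypothetical input parameters of an image-processing test, each
# partitioned into equivalence classes.
classes = {
    "brightness": ["low", "nominal", "high"],
    "position":   ["corner", "edge", "center"],
}

def coverage(test_suite):
    """Fraction of all class combinations hit by the suite."""
    covered = {tuple(t[p] for p in classes) for t in test_suite}
    return len(covered) / len(list(product(*classes.values())))

def complete(test_suite):
    """Insert a test case for every still-uncovered combination."""
    covered = {tuple(t[p] for p in classes) for t in test_suite}
    for combo in product(*classes.values()):
        if combo not in covered:
            test_suite.append(dict(zip(classes, combo)))
    return test_suite

suite = [{"brightness": "low", "position": "center"}]
print(coverage(suite))   # 1 of 9 combinations covered
suite = complete(suite)
print(coverage(suite))   # → 1.0
```

Enumerating every combination is only feasible for small partitionings; the value of the approach lies in how the equivalence classes shrink an otherwise intractable input domain.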

Paper Nr: 26
Title:

Designing Software Architecture to Support Continuous Delivery and DevOps: A Systematic Literature Review

Authors:

Robin Bolscher and Maya Daneva

Abstract: This paper presents a systematic literature review of software architecture approaches that support the implementation of Continuous Delivery (CD) and DevOps. Its goal is to provide an understanding of the state-of-the-art on the topic, which is informative for both researchers and practitioners. We found 17 characteristics of a software architecture that are beneficial for CD and DevOps adoption and identified ten potential software architecture obstacles in adopting CD and DevOps in the case of an existing software system. Moreover, our review indicated that micro-services are a dominant architectural style in this context. Our literature review has some implications: for researchers, it provides a map of the recent research efforts on software architecture in the CD and DevOps domain. For practitioners, it describes a set of software architecture principles that possibly can guide the process of creating or adapting software systems to fit in the CD and DevOps context.

Paper Nr: 30
Title:

Performance Analysis of Mobile Cross-platform Development Approaches based on Typical UI Interactions

Authors:

Stefan Huber and Lukas Demetz

Abstract: The market for mobile apps is projected to generate revenues of nearly $190 billion by 2020. Besides native development approaches, in which developers are required to maintain a unique code base for each mobile platform they want to support, mobile cross-platform development (MCPD) approaches can be used to develop mobile apps. MCPD approaches allow building and deploying mobile apps for several mobile platforms from one single code base. The goal of this paper is to analyze the performance of MCPD approaches based on UI interactions. For this, we developed three mobile apps, one native and two developed using MCPD approaches. Using an automated test, we measured the CPU usage and memory consumption of these apps when executing one selected UI interaction, that is, swiping through a virtual scrollable list. The results indicate that the CPU usage of the two apps developed using MCPD approaches is about twice as high as that of the native app, while memory consumption is substantially higher than in the native app. This paper confirms results of previous studies and extends the body of knowledge by testing UI interactions.

Paper Nr: 35
Title:

Quantitative Metrics for Mutation Testing

Authors:

Amani Ayad, Imen Marsit, JiMeng Loh, Mohamed N. Omri and Ali Mili

Abstract: Mutant generation is the process of generating several variations of a base program by applying elementary modifications to its source code. Mutants are useful only to the extent that they are semantically distinct from the base program; the problem of identifying and weeding out equivalent mutants is an enduring issue in mutation testing. In this paper we take a quantitative approach to this problem where we do not focus on identifying equivalent mutants, but rather on gathering quantitative information about them.
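Mutant generation by elementary source modification, and the equivalent-mutant problem the abstract refers to, can be illustrated with one classic mutation operator (a generic sketch, not the paper's tooling):

```python
import ast

class SwapAddSub(ast.NodeTransformer):
    """Elementary mutation operator: replace '+' with '-'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

base = "def f(x, y):\n    return x + y\n"
mutant_src = ast.unparse(SwapAddSub().visit(ast.parse(base)))

# This mutant is semantically distinct from the base program, so a test
# comparing f(5, 3) against 8 kills it; an *equivalent* mutant (e.g. one
# produced inside dead code) would survive every test.
scope = {}
exec(mutant_src, scope)
print(scope["f"](5, 3))   # → 2, rather than the base program's 8
```

(`ast.unparse` requires Python 3.9 or later.)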

Paper Nr: 36
Title:

Quality Aspects of Serverless Architecture: An Exploratory Study on Maintainability

Authors:

Louis Racicot, Nicolas Cloutier, Julien Abt and Fabio Petrillo

Abstract: Serverless architecture is becoming more and more popular as the tools become cheaper and more accessible. This way of designing an architecture presents many advantages, especially for compute-intensive and event-driven applications. Stateless functions are the foundation of these types of architectures, and this might impact the maintainability of the software. In this paper, we statically analyzed 25 open-source projects using serverless architecture to bring out metrics that apply to the different characteristics of software maintainability. We found that some characteristics are positively impacted while others seem to be negatively impacted. This paper thus provides findings on the current state of the maintainability of projects using serverless architecture.

Paper Nr: 50
Title:

RE4DIST: Model-based Elicitation of Functional Requirements for Distributed Systems

Authors:

Roman Wirtz and Maritta Heisel

Abstract: Nowadays, software-based systems are often decomposed into several distributed subsystems. The complexity of those systems and the decomposition into different subsystems requires a detailed analysis and documentation of functional requirements. Documenting and managing the functional requirements in a consistent manner is a challenge for software engineers. The requirements for each subsystem cannot be considered in isolation; it is necessary to state the relations between the functional requirements, too. In this paper, we propose a model-based method to elicit and document functional requirements for distributed systems. Our contribution is twofold: By providing a requirements model, we first enable consistent documentation of the requirements for the different subsystems and make relations between them explicit. Second, we propose a method to systematically elicit functional requirements of distributed systems. By using the proposed model, we document the results in a consistent manner. Our approach is tool-supported, which simplifies its application.

Paper Nr: 52
Title:

Fuzz Testing with Dynamic Taint Analysis based Tools for Faster Code Coverage

Authors:

Ciprian Paduraru, Marius-Constantin Melemciuc and Bogdan Ghimis

Abstract: This paper presents a novel method for creating and using generative models for testing software applications. At the core of our method is a tool performing binary tracing using dynamic taint analysis. Our open-source tool can learn a connection between code variables that affect the program’s execution flow and their content in a set of initial training examples, producing a generative testing model which can later be used to produce new tests. This work attempts to maximize code coverage metrics by focusing only on those parts of the input that affect the control flow of a program. The method can be used to automate test data generation for any x86 binary application. The evaluation section shows that it produces better code coverage on applications accepting binary input formats, especially when feedback from the test system is needed in a short time.
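A minimal coverage-guided fuzzing loop illustrates the feedback idea behind such tools (a toy that keeps any mutated input reaching new branches; the paper's tool instead uses dynamic taint analysis to focus mutations on control-flow-relevant input bytes):

```python
import random

def fuzz(run, seeds, iterations=500, rng=random.Random(0)):
    """Toy coverage-guided loop: mutate a corpus member, keep the
    mutant if it covers a branch not seen before."""
    corpus = list(seeds)
    seen = set()
    for s in corpus:
        seen |= run(s)
    for _ in range(iterations):
        data = bytearray(rng.choice(corpus))
        data[rng.randrange(len(data))] = rng.randrange(256)  # single byte flip
        cov = run(bytes(data))
        if cov - seen:          # new coverage -> add to corpus
            seen |= cov
            corpus.append(bytes(data))
    return corpus, seen

def target(data):
    """Hypothetical program under test; returns ids of executed branches."""
    cov = {"entry"}
    if data[:1] == b"A":
        cov.add("branch_A")
        if data[1:2] == b"B":
            cov.add("branch_AB")
    return cov

corpus, covered = fuzz(target, [b"xx"])
print(sorted(covered))
```

Random byte flips discover the nested `"AB"` branch only by luck; taint-based approaches shortcut this search by learning which input bytes the branch conditions actually read.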

Paper Nr: 56
Title:

Evaluating Software Metrics for Sorting Software Modules in Order of Defect Count

Authors:

Xiaoxing Yang

Abstract: Sorting software modules in order of defect count can help testers to focus on software modules with more defects. Many approaches have been proposed to accomplish this. In order to compare approaches more fairly, researchers have provided publicly available data sets. In this paper, we provide a new metric selection approach and evaluate the usefulness of the software metrics of eleven publicly available data sets, in order to investigate the quality of these data sets and identify the software metrics that are most effective for sorting modules in order of defect count. Unexpectedly, experimental results show that only one metric works well over most of these data sets, which implies that more effective metrics should be introduced. We also obtain other findings from these data sets, which may help guide the introduction of new metrics for sorting software modules in order of defect count.
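One simple way to score how well a metric sorts modules by defect count is the share of defects captured in the top-ranked modules (an illustrative score on hypothetical data, not the paper's evaluation protocol):

```python
def defects_in_top(modules, metric, fraction=0.2):
    """Share of all defects contained in the top `fraction` of modules
    when sorted by `metric` descending."""
    ranked = sorted(modules, key=lambda m: m[metric], reverse=True)
    k = max(1, int(len(ranked) * fraction))
    total = sum(m["defects"] for m in modules)
    return sum(m["defects"] for m in ranked[:k]) / total

# Hypothetical module data: lines of code vs. known defect counts.
modules = [
    {"name": "core", "loc": 5000, "defects": 12},
    {"name": "ui",   "loc": 3000, "defects": 2},
    {"name": "util", "loc": 800,  "defects": 1},
    {"name": "io",   "loc": 2500, "defects": 5},
    {"name": "net",  "loc": 1200, "defects": 0},
]
print(defects_in_top(modules, "loc"))
# → 0.6  (the single top module by LOC holds 12 of 20 defects)
```

Comparing such scores across metrics and data sets is what reveals whether any single metric generalizes, which is the question the abstract raises.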

Paper Nr: 58
Title:

A Validation Study of a Requirements Engineering Artefact Model for Big Data Software Development Projects

Authors:

Darlan Arruda, Nazim H. Madhavji and Ibtehal Noorwali

Abstract: The elicitation, specification, analysis, prioritisation and management of system requirements for large projects are known to be challenging. It involves a number of diverse issues, such as: different types of stakeholders and their needs, relevant application domains, knowing about product and process technologies, regulatory issues, and applicable standards. The advent of “Big Data” and, in turn, the need for software applications involving Big Data, has further complicated requirements engineering (RE). In part, this is due to the lack of clarity in the RE literature and practices on how to treat Big Data and the “V” characteristics in the development of Big Data applications. Traditionally, researchers in the RE field have created domain models that help in understanding the context of the problem, and in supporting communication and analysis in a project. Analogously, for the emerging field of software applications involving Big Data, we propose an empirically derived RE artefact model. It has been validated for qualities such as: accuracy, completeness, usefulness, and generalisability by ten practitioners from Big Data software development projects in industry. The validation results indicate that the model captures the key RE elements and relationships involved in the development of Big Data software applications. The resultant artefact model is anticipated to help in such activities as: requirements elicitation and specification; definition of specific RE processes; customising and creating a common vision in Big Data RE projects; and creating traceability tools linking the artefacts.

Paper Nr: 63
Title:

Systematic Comparison of Six Open-source Java Call Graph Construction Tools

Authors:

Judit Jász, István Siket, Edit Pengő, Zoltán Ságodi and Rudolf Ferenc

Abstract: Call graphs provide the groundwork for numerous analysis algorithms and tools. However, in practice, their construction may have several ambiguities, especially for object-oriented programming languages like Java. The characteristics of the call graphs – which are influenced by building requirements such as scalability, efficiency, completeness, and precision – can greatly affect the output of the algorithms utilizing them. Therefore, it is important for developers to know a well-defined set of criteria based on which they can choose the most appropriate call graph builder tool for their static analysis applications. In this paper, we studied and compared six static call graph creator tools for Java. Our aim was to identify linguistic and technical properties that might induce differences in the generated call graphs besides the obvious differences caused by the various call graph construction algorithms. We evaluated the tools on multiple real-life open-source Java systems and performed a quantitative and qualitative assessment of the resulting graphs. We have shown how different outputs could be generated by the different tools. By manually analyzing the differences found on larger programs, we also found differences that we did not expect based on our preliminary assumptions.

Paper Nr: 105
Title:

New Methodology for Backward Analysis of Reconfigurable Event Control Systems using R-TNCESs

Authors:

Yousra Hafidi, Laid Kahloul and Mohamed Khalgui

Abstract: This paper deals with reconfigurable discrete event control systems (RDECSs). We model RDECSs using the reconfigurable timed net condition/event systems (R-TNCESs) formalism, an extension of Petri nets that deals with reconfiguration properties. Model-based diagnosis algorithms are widely used in academia and industry to detect faulty components and ensure system safety, but applying these methods to reconfigurable systems is difficult due to their special behavior. The flexibility of reconfigurable systems like RDECSs allows them to meet recent market requirements; however, this flexibility and their complicated behavior make their verification complex and sometimes infeasible. We address this problem by proposing a new methodology based on the backward reachability of RDECSs using the R-TNCES formalism, including improvement methods, thereby making model-based diagnosis of reconfigurable systems possible. The proposed methodology reduces redundant computations as much as possible and provides a package that can be used in model-based diagnosis algorithms. The paper’s contribution is applied to a benchmark modular production system. Finally, a performance evaluation is carried out for different problem sizes to study the benefits and limits of the proposed methodology for large-scale systems.
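The core of backward reachability, independent of the R-TNCES formalism, is a predecessor fixed point: starting from the target (e.g., faulty) states, repeatedly add every state with a transition into the set. A minimal sketch over a plain transition relation (not an R-TNCES model):

```python
def backward_reachable(transitions, targets):
    """States from which some target state is reachable, computed by
    fixed-point iteration over predecessors."""
    reach = set(targets)
    changed = True
    while changed:
        changed = False
        for src, dst in transitions:
            if dst in reach and src not in reach:
                reach.add(src)
                changed = True
    return reach

# Toy transition system: s0 -> s1 -> s2, and s3 -> s1.
T = [("s0", "s1"), ("s1", "s2"), ("s3", "s1")]
print(sorted(backward_reachable(T, {"s2"})))
# → ['s0', 's1', 's2', 's3']
```

The methodological challenge the paper tackles is avoiding recomputation of such fixed points across the many configurations a reconfigurable system can switch between.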

Paper Nr: 114
Title:

Optimization of Software Estimation Models

Authors:

Chris Kopetschny, Morgan Ericsson, Welf Löwe and Anna Wingkvist

Abstract: In software engineering, estimations are frequently used to determine expected but yet unknown properties of software development processes or the developed systems, such as costs, time, number of developers, efforts, sizes, and complexities. Plenty of estimation models exist, but it is hard to compare and improve them as software technologies evolve quickly. We suggest an approach to estimation model design and automated optimization that allows for model comparison and improvement based on commonly collected data points. This way, the approach simplifies model optimization and selection. It contributes to a convergence of existing estimation models to meet contemporary software technology practices and provides a way to select the most appropriate ones.
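Fitting an estimation model's parameters to collected data points is the kind of optimization the abstract describes. A minimal sketch fits a COCOMO-style power law, effort = a · size^b, by least squares in log-space (the data points are hypothetical and the paper's framework is more general):

```python
import math

# Hypothetical project history: (size in KLOC, effort in person-months).
data = [(10, 24), (50, 145), (120, 400), (300, 1100)]

# Least-squares fit of log(effort) = log(a) + b * log(size).
xs = [math.log(s) for s, _ in data]
ys = [math.log(e) for _, e in data]
n = len(data)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
a = math.exp((sum(ys) - b * sum(xs)) / n)

def estimate(size_kloc):
    return a * size_kloc ** b

print(round(b, 2), round(estimate(100), 1))
```

With a shared data format, competing model shapes can be fitted the same way and compared on held-out projects, which is the comparison-and-selection step the approach automates.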

Short Papers
Paper Nr: 5
Title:

An Experimental Evaluation of a Teaching Approach for Statistical Process Control for Software Engineers: An Experimental Study

Authors:

Julio C. Furtado and Sandro B. Oliveira

Abstract: The purpose of this research study is to identify Statistical Process Control methods and their importance for the current software industry. Statistical Process Control is a set of techniques employed for achieving a stable and repeatable process. In organizations that are seeking a high degree of maturity, it is necessary to achieve statistical control of software processes and to know their behavior and operational performance. However, the use of these techniques by software development organizations has proved to be complex. The teaching approach adopted involves reading articles and experience reports, practical cases, discussion, the use of games and simulators, practical projects and reflection by students on the knowledge learned and activities carried out. In this context, it is expected that the students will feel a greater degree of motivation and reach more advanced levels of learning by being involved in a more playful and practical class atmosphere that simulates the environments encountered in industry. The experiment was conducted with undergraduates enrolled in a Computer Science Bachelor's degree programme, who were divided into a “control group” and an “experimental group”. At the end, the two groups carried out a practical project to evaluate the level of learning reached by the students. The results of the study suggest that there was a difference in the effectiveness of learning between the teaching approach and traditional instruction. We observed a mean gain of 30.06% in the experimental group, which is evidence of this rise in learning effectiveness.

Paper Nr: 9
Title:

The Use of Game Elements and Scenarios for Teaching and Learning the Function Point Analysis Technique: An Experimental Study

Authors:

Estêvão D. Santos and Sandro B. Oliveira

Abstract: Currently, new technologies are being developed all the time, and tied to this is increased competition between organizations. Based on this principle, it is essential that they seek quality in the development of their applications. One of the essential tools for this is Function Point Analysis (FPA). In view of this scenario, it is indispensable that students come into contact with this technique as soon as possible. Thus, this study aims to use game concepts and elements to support the teaching of this software estimation technique and to engage student motivation in learning it; the technique was taught in a postgraduate course in computer science at a Brazilian federal university. To this end, classes for teaching the FPA technique were designed using game elements as motivation for the students. This research resulted in an enrichment of the students' knowledge in the practice of estimation, which is commonly recommended in software quality models. This work aims to contribute to the teaching of the FPA technique, preparing students better for the software development market. It was also verified that the use of gamification elements and learning scenarios for teaching this estimation technique was effective, since the participating students were more dedicated to the tasks and participated in all the different types of classes.

Paper Nr: 14
Title:

An Analysis System for Mobile Applications MVC Software Architectures

Authors:

Dragoş Dobrean and Laura Dioşan

Abstract: Mobile applications are software systems that are highly used by all modern people; a vast majority of those are intricate systems. Due to their increase in complexity, the architectural pattern used plays a significant role in their lifecycle. Architectural patterns cannot be enforced on a codebase without the aid of an external tool; with this idea in mind, the current paper describes a novel technique for the automatic analysis of Model View Controller mobile application codebases from an architectural point of view. The analysis takes into account the constraints imposed by this layered architecture and offers insightful metrics regarding the architectural health of the codebase, while also highlighting the architectural issues. Both open source and private codebases have been analysed by the proposed approach and the results indicate an average accuracy of 89.6% of the evaluation process.
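Checking MVC layering constraints over a dependency list can be sketched in a few lines (a naive naming-convention heuristic for illustration; the paper's analyser works on real codebases and is considerably more sophisticated):

```python
# Toy MVC rule: Views may depend on Controllers, Controllers on Models,
# but Views must not reference Models directly.
ALLOWED = {"View": {"Controller"}, "Controller": {"Model"}, "Model": set()}

def layer(name):
    """Classify a type by naming convention (check 'Controller' first)."""
    for l in ("Controller", "View", "Model"):
        if name.endswith(l):
            return l
    return None

def violations(dependencies):
    bad = []
    for src, dst in dependencies:
        ls, ld = layer(src), layer(dst)
        if ls and ld and ls != ld and ld not in ALLOWED[ls]:
            bad.append((src, dst))
    return bad

deps = [("LoginView", "LoginController"),
        ("LoginController", "UserModel"),
        ("ProfileView", "UserModel")]      # violation: View -> Model
print(violations(deps))
# → [('ProfileView', 'UserModel')]
```

Real codebases rarely follow naming conventions this cleanly, which is why automatic layer classification is itself part of the research problem.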

Paper Nr: 38
Title:

Test Suite Minimization of Evolving Software Systems: A Case Study

Authors:

Amit Goyal, R. K. Shyamasundar, Raoul Jetley, Devina Mohan and Srini Ramaswamy

Abstract: Test suite minimization ensures that an optimal set of test cases is selected to provide maximum coverage of requirements. In this paper, we discuss and evaluate techniques for test suite minimization of evolving software systems. As a case study, we have used an industrial tool, the Static Code Analysis (SCAN) tool for Electronic Device Description Language (EDDL), as the System Under Test (SUT). We have used standard approaches including Greedy, Greedy Essential (GE) and Greedy Redundant Essential (GRE) for minimization of the test suite for a given set of requirements of the SUT. Further, we have proposed and implemented k-coverage variants of these approaches. The resulting minimized test suite reduces testing effort and time during regression testing. The paper also addresses the need for choosing an appropriate level of granularity of requirements to efficiently cover all requirements. The paper demonstrates how fine-grained requirements help in finding an optimal test suite to completely address the requirements and also help in detecting bugs in each version of the software. Finally, the results from different analyses have been presented and compared, and it has been observed that the GE heuristic performs best in terms of run time under certain conditions.
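The plain Greedy approach mentioned above is the classic greedy set-cover heuristic; a minimal sketch (the requirement/test data is hypothetical, and the GE/GRE and k-coverage variants refine this basic loop):

```python
def greedy_minimize(requirements, tests):
    """Greedy set-cover: repeatedly pick the test covering the most
    still-uncovered requirements."""
    uncovered = set(requirements)
    selected = []
    while uncovered:
        best = max(tests, key=lambda t: len(tests[t] & uncovered))
        if not tests[best] & uncovered:
            break  # remaining requirements are not coverable by any test
        selected.append(best)
        uncovered -= tests[best]
    return selected

# Hypothetical mapping from test case to the requirements it covers.
tests = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3", "r4"},
    "t3": {"r4"},
}
print(greedy_minimize(["r1", "r2", "r3", "r4"], tests))
# → ['t2', 't1']  (t3 is redundant: r4 is already covered by t2)
```

The granularity point in the abstract matters here: the coarser the requirements, the fewer distinguishable sets there are for the heuristic to choose among.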

Paper Nr: 39
Title:

Effort Prediction in Agile Software Development with Bayesian Networks

Authors:

Laura-Diana Radu

Abstract: The success rate of software projects has increased since agile methodologies were adopted by many companies. Due to their flexibility and continuous communication with clients, the main reason for failure has shifted from the formulation and understanding of requirements to inaccurate effort estimation. In recent years, several researchers and practitioners have proposed different estimation techniques. However, some projects still fail because the budget and/or schedule is not accurately estimated, since there are still numerous uncertain variables in the software development process. Previous team collaborations, the expertise and experience of team members, and the frequency of changing requirements or priorities are just a few examples. To improve the accuracy of effort estimation, this research proposes a model for effort prediction in agile software development projects using Bayesian networks. Based on a literature review and practitioners’ knowledge, we identified two major categories of factors that influence the effort needed: teamwork quality and user story characteristics. We identified the sub-factors for each category and the inter-dependencies between them. In our model, these factors are the nodes of a directed acyclic graph. The model can help agile teams obtain better software effort estimates.
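The mechanics of such a network can be shown on a deliberately tiny example: two parent factors feeding one outcome node, marginalized by enumeration (the structure and all probabilities below are made up for illustration and are not the paper's model):

```python
# Toy Bayesian network: TeamworkQuality and StoryClarity -> EffortOverrun.
p_teamwork_good = 0.7
p_story_clear = 0.6

# Conditional probability table P(overrun | teamwork_good, story_clear).
p_overrun = {
    (True,  True):  0.05,
    (True,  False): 0.30,
    (False, True):  0.25,
    (False, False): 0.60,
}

def prob_overrun():
    """Marginal P(EffortOverrun) by enumerating the parent states."""
    total = 0.0
    for tw in (True, False):
        for cl in (True, False):
            p_parents = (p_teamwork_good if tw else 1 - p_teamwork_good) * \
                        (p_story_clear if cl else 1 - p_story_clear)
            total += p_parents * p_overrun[(tw, cl)]
    return total

print(round(prob_overrun(), 3))
# → 0.222
```

In a full model the parents would themselves have parents (the sub-factors the abstract mentions), and inference would condition on whatever evidence the team has observed.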

Paper Nr: 68
Title:

Conceptual Modelling of the Dynamic Goal-oriented Safety Management for Safety Critical Systems

Authors:

Sana Debbech, Philippe Bon and Simon Collart-Dutilleul

Abstract: In the context of Safety Critical Systems (SCSs), safety measures derived from the dysfunctional analysis are generally expressed in an informal way. However, in an early phase of SCSs design, there is a need to link these safety measures to Goal-Oriented Requirements Engineering (GORE) concepts. Moreover, the current practice of safety measures development is not based on a specific goal-oriented control model. Since there are different knowledge domains, there is a lack of a common vocabulary aiming to avoid the semantic heterogeneity between them. Consequently, a common model for unambiguous knowledge sharing and full semantic interoperability assurance is missing. In this paper, we propose the Goal-Oriented Safety Management Ontology (GOSMO), a domain ontology, which is grounded in the Unified Foundational Ontology (UFO) and provides a conceptualization and a real-world semantic interpretation of the knowledge matching for SCSs. Furthermore, the proposed safety measures development process is performed using a reinterpretation, from the safety point of view, of Organization-Based Access Control (Or-BAC), which was initially developed for Information Systems (IS) security. The GOSMO aims to capture the alignment between the considered domains’ concepts through the reuse of reference models and the proposed taxonomy based on standards definitions. The proposed ontology is evaluated by the formalization of two case studies from the railway domain, since it is the target application domain. Finally, the evaluation results show that GOSMO covers and analyses several real critical situations and fulfils its intended purpose.

Paper Nr: 81
Title:

Software Modularity Coupling Resolution by the Laplacian of a Bipartite Dependency Graph

Authors:

Iaakov Exman and Netanel Ohayon

Abstract: Software modularity, achieved by pinpointing and then resolving the remaining coupling problems, is often assumed to be a general approach to optimize any software system design. However, software coupling types with differing specific characteristics seemingly pose serious impediments to any generic coupling resolution approach. Despite this diversity of types, this work proposes a generic approach to resolve any coupling type in three steps: (a) obtain the dependency graph for the coupled modules; (b) convert the dependency graph into a bipartite graph; (c) generate the Laplacian matrix from the bipartite graph. Coupling problems to be resolved are then located using Laplacian eigenvectors, in particular the Fiedler eigenvector. The generic approach is justified, explained in detail, and illustrated by a few case studies.
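To make the three steps concrete, here is a small self-contained sketch (our own illustration with made-up module names, not the paper's tooling): a dependency list becomes a bipartite graph in which each module may appear once as a client and once as a provider, and the Laplacian L = D − A is assembled from it. The Fiedler eigenvector the authors use is the eigenvector of the second-smallest eigenvalue of this matrix.

```python
# Hypothetical module dependencies: client module -> modules it uses.
deps = {"Billing": {"Db", "Log"}, "Orders": {"Db"}, "Ui": {"Orders", "Log"}}

clients = sorted(deps)
providers = sorted({p for ps in deps.values() for p in ps})
# A module may occur on both sides (e.g. "Orders"), so tag each copy.
nodes = [("client", c) for c in clients] + [("provider", p) for p in providers]
idx = {v: i for i, v in enumerate(nodes)}

n = len(nodes)
A = [[0] * n for _ in range(n)]              # undirected bipartite adjacency
for c, ps in deps.items():
    for p in ps:
        i, j = idx[("client", c)], idx[("provider", p)]
        A[i][j] = A[j][i] = 1

# Laplacian: vertex degree on the diagonal, minus adjacency elsewhere.
L = [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(n)]
     for i in range(n)]

print([L[i][i] for i in range(n)])           # degrees: [2, 1, 2, 2, 2, 1]
```

Every Laplacian row sums to zero and the matrix is symmetric, which is what makes the eigenvector analysis in the paper applicable.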

Paper Nr: 87
Title:

Towards Lakosian Multilingual Software Design Principles

Authors:

Damian M. Lyons, Saba B. Zahra and Thomas M. Marshall

Abstract: Large software systems often comprise programs written in different programming languages. When cross-language interoperability is accomplished with a Foreign Function Interface (FFI), such as pybind11, Boost.Python, Emscripten, PyV8, or JNI, among many others, common software engineering tools, such as call-graph analysis, are obstructed by the opacity of the FFI. This complicates debugging and fosters potential inefficiency and security problems. One contributing issue is that there is little rigorous software design advice for multilingual software. In this paper, we present our progress towards a more rigorous design approach for multilingual software. The approach builds on the existing approach to the design of large-scale C++ systems developed by Lakos. The FFI is an aspect of physical rather than logical architecture, and the Lakosian approach is one of the few design methodologies to address physical design rather than just logical design. Using the MLSA toolkit developed in prior work for the analysis of multilingual software, we focus on one FFI, pybind11. We propose an extension to the Lakosian C++ design rules to address multilingual software that uses pybind11. Using a sample of 50 public GitHub repositories that use pybind11, we measure how many repositories currently satisfy these rules. We conclude with a proposed generalization of the pybind11-based rules to any multilingual software using an FFI.

Paper Nr: 101
Title:

Towards a Lawful Authorized Access: A Preliminary GDPR-based Authorized Access

Authors:

Cesare Bartolini, Said Daoudagh, Gabriele Lenzini and Eda Marchetti

Abstract: The sixth principle of the General Data Protection Regulation (GDPR), Integrity and Confidentiality, dictates that personal data must be protected from unauthorised or unlawful processing. To this aim, we propose a systematic approach for authoring access control policies that are aligned by design with the provisions of the GDPR, and we exemplify it with realistic use cases.
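As a loose illustration of what a GDPR-aligned access control check could look like (a hypothetical sketch of ours, not the authors' policy language), a request is granted only when its declared purpose is one the data subject consented to for that data category:

```python
# Consent records: (data subject, data category) -> purposes consented to.
# All subjects, categories, and purposes here are invented for illustration.
CONSENTS = {
    ("alice", "email"):  {"billing"},
    ("alice", "health"): set(),
}

def is_authorized(subject, category, purpose):
    """Grant access only for a purpose covered by the subject's consent."""
    return purpose in CONSENTS.get((subject, category), set())

print(is_authorized("alice", "email", "billing"))    # True
print(is_authorized("alice", "email", "marketing"))  # False
```

A real GDPR-aligned policy would also cover the other lawful bases of Article 6 (contract, legal obligation, etc.); consent is only the simplest one to sketch.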

Paper Nr: 11
Title:

A Formal Requirements Modeling Approach: Application to Rail Communication

Authors:

Steve T. Fotso, Régine Laleau, Hector R. Barradas, Marc Frappier and Amel Mammar

Abstract: This paper presents the formal specification of the requirements of a rail communication protocol called Saturn, proposed by ClearSy systems engineering, a French company specialised in safety critical systems. The protocol was developed and implemented within a widely used rail product without its requirements being modeled, verified, or even documented. This paper outlines the formal specification, verification and validation of Saturn’s requirements in order to guarantee its correct behavior and to allow the definition of slightly different product lines. The specification is performed according to SysML/KAOS, a formal requirements engineering method developed in the ANR FORMOSE project for critical and complex systems. System requirements, captured with a goal modeling language, give rise to the behavioral part of a B System specification. In addition, an ontology modeling language allows the specification of domain entities and properties. The domain models thus obtained are used to derive the structural part of the B System specification obtained from the system requirements. The B System model, once completed with the body of events, can then be verified and validated using the whole range of tools that support the B method. Five refinement levels of the rail communication protocol were constructed. The method has proven useful; however, several missing features were identified, and this paper also provides a formally defined extension of the modeling languages to fill these shortcomings.

Paper Nr: 19
Title:

Automatic Algorithmic Complexity Determination Using Dynamic Program Analysis

Authors:

Istvan G. Czibula, Zsuzsanna Oneţ-Marian and Robert-Francisc Vida

Abstract: Algorithm complexity is an important concept in computer science concerned with the efficiency of algorithms. Understanding and improving the performance of a software system is a major concern throughout the lifetime of the system, especially in the maintenance and evolution phase. Identifying certain performance-related issues before they actually affect the deployed system is desirable, and it is possible if developers know the algorithmic complexity of the methods of the software system. In many software projects, information related to algorithmic complexity is missing, so it is hard for a developer to reason about the performance of the system for different input data sizes. The goal of this paper is to propose a novel method for automatically determining algorithmic complexity based on runtime measurements. We evaluate the proposed approach on synthetic data and on actual runtime measurements of several algorithms in order to assess its potential and weaknesses.
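The core idea can be sketched in a few lines (our own simplified illustration, not the paper's method): fit each candidate complexity class t ≈ c·f(n) to the (input size, runtime) measurements by least squares and keep the class with the smallest residual.

```python
import math

# Candidate complexity classes to test against the measurements.
CLASSES = {
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log(n),
    "O(n^2)":     lambda n: n * n,
}

def best_class(samples):
    """samples: list of (n, runtime) pairs. Returns the best-fitting class."""
    best, best_err = None, float("inf")
    for name, f in CLASSES.items():
        xs = [f(n) for n, _ in samples]
        ts = [t for _, t in samples]
        # Least-squares scale factor c minimizing sum((t - c*x)^2).
        c = sum(x * t for x, t in zip(xs, ts)) / sum(x * x for x in xs)
        err = sum((t - c * x) ** 2 for x, t in zip(xs, ts))
        if err < best_err:
            best, best_err = name, err
    return best

# Synthetic measurements from a quadratic algorithm (plus a small constant).
samples = [(n, 0.5 * n * n + 3) for n in (10, 20, 40, 80, 160)]
print(best_class(samples))   # → O(n^2)
```

Real measurements are noisy and constants matter, which is precisely the kind of weakness the paper sets out to assess.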

Paper Nr: 20
Title:

Application of Open Coding using the Grounded Theory Method to Identify the Profile of Information and Communication Technology Companies in the State of Pará from Brazil

Authors:

Elziane M. Soares and Sandro B. Oliveira

Abstract: The use of methods from the context of Experimental Software Engineering, such as experimental studies, case studies, opinion surveys and controlled experiments, has intensified in recent years. In this context, this study presents the application of Open Coding, part of the experimental method of Grounded Theory (GT), in order to contribute to the definition of the profile of Information and Communication Technology (ICT) companies in the State of Pará, Brazil; the results generated in this work support the development of the next steps of the GT method. The explored context is framed by the perspective of the MOSE model, more specifically its Customer and Market (CM) dimension, since the investigation was based on information related to the concepts present in this dimension, aiming to understand how the CM competence develops in the daily routine of each company participating in this research.

Paper Nr: 21
Title:

Gamification and Evaluation of the Knowledge Management Application in a Software Quality Lab: An Experimental Study

Authors:

Antonilson S. Alcantara and Sandro B. Oliveira

Abstract: This paper presents the application of the Gamification approach proposed by Alcantara and Oliveira (2018) to the teaching and learning of Knowledge Management in a software quality lab. The proposal is briefly presented, followed by the description of its application in the laboratory. Finally, we present the results of a quantitative evaluation based on the data collected during the experiment, a qualitative evaluation based on a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis from the perspective of the participants, and the final considerations.

Paper Nr: 23
Title:

Deriving Programs by Reliability Enhancement

Authors:

Marwa Benabdelali and Lamia L. Jilani

Abstract: This paper explores an approach to formal program derivation that contrasts with the traditional approach, which begins with a formal specification and derives successive refinements of that specification until the final correct program code is generated. We use a rigorous theoretical framework based on the concept of relative correctness: the property of a program to be more-correct than another program with respect to a specification. A derivation process based on relative correctness has several advantages, such as the derivation of reliable software: for most software products, as for products in general, perfect correctness is not necessary, and very often an adequate reliability threshold is sufficient. Our aim is to continue experimenting with the discipline of reliable program derivation by correctness enhancement, by conducting an analytical and empirical study of this approach as a proof of concept; we then analyze the results, compare them to what is predicted by the analytical approach, and decide on the usability of the approach and/or adjust and complete it. Finally, we propose a mechanism that guides the developer in the program derivation process using relative correctness.

Paper Nr: 32
Title:

A Software Cost Estimation Taxonomy for Global Software Development Projects

Authors:

Manal El Bajta and Ali Idri

Abstract: Nowadays, software cost estimation plays an important role in the management and development of distributed projects. The state of the art and the cost estimation practice for Global Software Development (GSD) have recently been identified, but this knowledge has not yet been structured. The objective of this paper is to structure the knowledge about cost estimation for GSD. We used a design method to organize the identified knowledge as a cost estimation taxonomy for GSD. The proposed taxonomy offers a classification scheme for the cost estimation of distributed projects. It consists of four dimensions: cost estimation context, estimation technique, cost estimate and cost estimators, and each dimension in turn has multiple facets. The taxonomy can then be used as a tool for developing a repository of cost estimation knowledge.

Paper Nr: 41
Title:

Code Reuse between Java and Android Applications

Authors:

Yoonsik Cheon, Carlos V. Chavez and Ubaldo Castro

Abstract: Java and Android applications can be written in the same programming language. Thus, it is natural to ask how much code can be shared between them. In this paper we perform a case study to measure quantitatively the amount of code that can be shared and reused by a multiplatform application running on the Java platform and the Android platform. We first configure a multiplatform development environment consisting of platform-specific tools. We then propose a general architecture for a multiplatform application under the guiding design principle of having clearly defined interfaces and employing loose coupling to accommodate platform differences and variations. Specifically, we separate our application into a platform-independent part (PIP) and a platform-dependent part (PDP), and share the PIP between platform-specific versions. Our finding is that 37%–40% of the code can be shared and reused between the Java and Android versions of our application. Interestingly, the Android version requires 8% more code than the Java version due to platform-specific constraints and concerns. We also learned that the quality of an application can be improved dramatically through multiplatform development.

Paper Nr: 44
Title:

Applications of Automated Model’s Extraction in Enterprise Systems

Authors:

Cristina Marinescu

Abstract: As enterprise software systems become more and more complex, the need for automated approaches to their understanding and quality assessment increases. Usually the automated approaches make use of a meta-model whose information is mainly extracted from the source code; when considering enterprise systems, however, the meta-model should contain information from two different paradigms (e.g., object-oriented and relational), which cannot be loaded from the source code alone. In this paper, based on a specific meta-model for enterprise systems, we present a set of applications (approaches) which help us understand and assess the quality of the design, as part of the maintenance process.

Paper Nr: 46
Title:

Ripple Effect Analysis of Data Flow Requirements

Authors:

Bui T. Hung, Takayuki Omori and Atsushi Ohnishi

Abstract: The ripple effect of modifications to software requirements should be properly analyzed, since it may cause errors in the requirements. We have already proposed a ripple effect analysis method for the deletion or update of data flow requirements. In this paper, we enhance our method to cover ripple effect analysis when new data flow requirements are added. The method is illustrated with examples.

Paper Nr: 62
Title:

Investigating Fault Localization Techniques from Other Disciplines for Software Engineering

Authors:

Árpád Beszédes

Abstract: In many different engineering fields, fault localization means narrowing down the cause of a failure to a small number of suspicious components of the system. This activity is an important concern in many areas, and a large number of techniques have been proposed to aid it. Some of the basic ideas are common to different fields, but generally quite diverse approaches are applied. Our long-term goal with the presented research is to identify potential techniques from non-software domains that have not yet been fully leveraged for software faults, and to investigate their applicability and adaptation to our field. We performed an analysis of the related literature, not limiting the search to any specific engineering field, with the aim of finding solutions in non-software areas that could be most successfully adapted to software fault localization. We found that a few areas have significant literature on the topic and are good candidates for adaptation (computer networks, for instance), and that although some classes of methods are less suitable, there are useful ideas in almost all fields that could potentially be reused. As examples of potential novel techniques for software fault localization, we present three concrete techniques from other fields and discuss how they could be adapted.

Paper Nr: 64
Title:

Event-B Decomposition Analysis for Systems Behavior Modeling

Authors:

Kenza Kraibi, Rahma Ben Ayed, Joris Rehm, Simon Collart-Dutilleul, Philippe Bon and Dorian Petit

Abstract: The application of formal methods to critical systems such as railway systems has been studied by several research works, with the ultimate goal of increasing confidence in and ensuring the behavioral correctness of these systems. In this paper, we propose to use the Event-B formal method. Refinement, a central concept in Event-B, is used to progressively introduce the details of system requirements, but in most cases it leads to voluminous and complex models. This paper therefore focuses on decomposition techniques to manage the complexity issue in Event-B modeling. It presents a state of the art and an analysis of existing decomposition techniques, and then proposes an approach based on this analysis.

Paper Nr: 98
Title:

On Improving Parallel Rebuilding of R-TNCESs

Authors:

Mohamed Ramdani, Laid Kahloul and Mohamed Khalgui

Abstract: This study presents an improved parallel rebuilding of reconfigurable timed net condition/event systems (R-TNCESs) modeling reconfigurable discrete-event systems (RDESs). Computation tree logic (CTL) model repair is one of the existing approaches that extends formal verification by model checking with an automated debugging phase that directly updates the model to match the desired behavior. Our proposition generalizes simple rebuilding with one CTL-based functional property to parallel rebuilding, which allows both verification and modification of a model according to a set of unverified functional properties simultaneously. A couple of transformation algorithms are proposed to preserve the coherence of the model, and a property classification method is proposed to frame the parallel execution order. To demonstrate the paper’s contribution, a FESTO MPS platform is used as a case study.

Paper Nr: 99
Title:

Problem of Incompleteness in Textual Requirements Specification

Authors:

David Šenkýř and Petr Kroha

Abstract: In this contribution, we investigate the incompleteness problem in textual requirements specifications. Incompleteness is a typical problem that arises when stakeholders (e.g., domain experts) consider some information to be generally known and do not mention it to the analyst. A model based on incomplete requirements suffers from missing objects, properties, or relationships, as we show in an illustrative example. Our methods are based on grammatical inspection, semantic networks (ConceptNet and BabelNet), and pre-configured data from on-line dictionaries. Additionally, we show how a domain model has to be used to reveal some of its missing parts. Our experiments have shown that the precision of our methods is about 60–82%.

Paper Nr: 106
Title:

Exploration and Mining of Source Code Level Traceability Links on Stack Overflow

Authors:

András Kicsi, Márk Rákóczi and László Vidács

Abstract: Test-to-code traceability is a real problem in software engineering that arises naturally in the development of larger software systems. Traceability links can be uncovered through various techniques, including information retrieval. The immense amount of data shared daily on Stack Overflow behaves similarly in many aspects. In the current work, we endeavor to discover test-to-code connections in the shared code and propose some applications of the findings. Semantic connections can also be explored between different software systems, and information retrieval can be used both in cross-post and in cross-system scenarios. The information can also be used to discover new testing possibilities and ideas, and has the potential to contribute to the development and testing of new systems as well.

Paper Nr: 113
Title:

A Bird’s Eye View on Social Network Sites and Requirements Engineering

Authors:

Nazakat Ali and Jang-Eui Hong

Abstract: Social network sites have become popular, and their popularity is growing exponentially every day. From the requirements engineering point of view, social network sites have provided unprecedented opportunities for software development organizations to understand the requirements of unknown end-users. Using social network sites, end-users express their experiences, needs, or concerns about a particular system or product. Such information can be useful for software developers to address the concerns of users quickly. To get an overview of how social network sites are helping requirements engineering and of new research trends in this area, we have surveyed a large number of research papers. We found that social network sites can be a major source for requirements elicitation, requirements prioritization, and negotiation. We also found that the research in this domain is at its beginning stage, but it is growing rapidly over time.

Area 3 - Software Systems and Applications

Full Papers
Paper Nr: 25
Title:

Genetic Algorithm to Detect Different Sizes’ Communities from Protein-Protein Interaction Networks

Authors:

Marwa Ben M’Barek, Amel Borgi, Sana Ben Hmida and Marta Rukoz

Abstract: Community detection in large networks is an important problem in many scientific fields, ranging from Biology to Sociology and Computer Science. In this paper, we are interested in the detection of communities in protein-protein or gene-gene interaction (PPI) networks. These networks represent protein-protein or gene-gene interactions, which correspond to sets of proteins or genes that collaborate in the same cellular function. The goal is to identify such communities from gene annotation sources such as the Gene Ontology. We propose a Genetic Algorithm based approach to detect communities of different sizes in PPI networks. For this purpose, we use a fitness function based on a similarity measure and the interaction value between proteins or genes. Moreover, a specific solution representation for a community and a specific mutation operator are introduced. In the computational tests carried out in this work, the introduced algorithm achieved excellent results in detecting existing, and even new, communities in protein-protein or gene-gene interaction networks.
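The two ingredients named in the abstract, a fitness function combining similarity with interaction values and a size-changing mutation operator, can be sketched as follows (our own minimal illustration with invented weights, not the authors' implementation):

```python
import random

# Invented pairwise weights between genes g1..g4.
INTERACTION = {frozenset(p): w for p, w in [
    (("g1", "g2"), 0.9), (("g1", "g3"), 0.8), (("g2", "g3"), 0.7),
    (("g3", "g4"), 0.1),
]}
SIMILARITY = {frozenset(p): w for p, w in [
    (("g1", "g2"), 0.8), (("g1", "g3"), 0.9), (("g2", "g3"), 0.6),
    (("g3", "g4"), 0.2),
]}
GENES = ["g1", "g2", "g3", "g4"]

def fitness(community):
    """Average of interaction * similarity over all pairs in the community."""
    pairs = [frozenset((a, b)) for i, a in enumerate(community)
             for b in community[i + 1:]]
    if not pairs:
        return 0.0
    return sum(INTERACTION.get(p, 0) * SIMILARITY.get(p, 0)
               for p in pairs) / len(pairs)

def mutate(community, rng):
    """Randomly add a missing gene or remove one (size-changing mutation)."""
    out = list(community)
    missing = [g for g in GENES if g not in out]
    if missing and (len(out) <= 2 or rng.random() < 0.5):
        out.append(rng.choice(missing))
    else:
        out.remove(rng.choice(out))
    return out

print(fitness(["g1", "g2", "g3"]))   # tightly connected triple scores high
print(fitness(GENES))                # adding the loosely linked g4 lowers it
```

Because the fitness averages over pairs, communities of different sizes can be compared, which is what lets the GA grow or shrink candidate communities via the mutation operator.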

Paper Nr: 34
Title:

A Framework for Evaluating Business Process Performance

Authors:

Wiem Khlif, Mariem Kchaou and Faiez Gargouri

Abstract: Measuring the performance of business processes is an essential task that enables an organization to achieve effective and efficient results. It is by measuring processes that data on their performance is obtained, thus showing the evolution of the organization in terms of its strategic objectives. To be efficient in such a task, organizations need a set of measures enabling them to support planning, exercise control, and diagnose the current situation. Indeed, several researchers have defined specific measures for assessing business process (BP) performance, and our approach proposes new temporal and cost measures to assess the performance of business process models. The aim of this paper is to classify the performance measures proposed so far within a framework defined in terms of characteristics, design and temporal perspectives, and to evaluate the performance of business process models. This framework uses business and social contexts to improve particular measures. It helps the designer to select a subset of measures corresponding to each perspective and to calculate and interpret their values in order to improve the performance of their model.

Paper Nr: 42
Title:

Simulating the Impact of Annotation Guidelines and Annotated Data on Extracting App Features from App Reviews

Authors:

Faiz A. Shah, Kairit Sirts and Dietmar Pfahl

Abstract: The quality of automatic app feature extraction from app reviews depends on various aspects, e.g. the feature extraction method, the training and evaluation datasets, and the evaluation method. The annotation guidelines used to guide the annotation of training and evaluation datasets can have a considerable impact on the quality of the whole system, but this aspect is often overlooked. We conducted a study in which we explore the effects of annotation guidelines on the quality of app feature extraction. We propose several changes to the existing annotation guidelines with the goal of making the extracted app features more useful to app developers. We test the proposed changes by simulating the application of the new annotation guidelines and evaluating the performance of supervised machine learning models trained on datasets annotated with the initial and the simulated annotation guidelines. While the overall performance of automatic app feature extraction remains the same compared to the model trained on the dataset with the initial annotations, the features extracted by the model trained on the dataset with the simulated new annotations are less noisy and more informative to app developers.

Paper Nr: 47
Title:

Verifying Complex Software Control Systems from Test Objectives: Application to the ETCS System

Authors:

Rabea Ameur-Boulifa, Ana Cavalli and Stephane Maag

Abstract: Ensuring the correctness of complex distributed software systems is a challenging task, and building frameworks for developing such safe and correct systems remains difficult. Where test coverage is unsatisfactory, formal analysis offers much higher potential to discover bugs during the development phase. This paper presents a framework for the formal verification of complex systems based on standardized test objectives. The framework integrates a transformation of test objectives into formal properties that are verified on the system by model checking. The overall approach is evaluated by applying it to the standard European Train Control System (ETCS). Some critical safety properties have been proved on the model, ensuring that the model is correct and reliable.

Paper Nr: 53
Title:

A Methodology for Enterprise Resource Planning Automation Testing: Application to the Open Source ERP ODOO

Authors:

Thierno B. Sambe, Stephane Maag and Ana Cavalli

Abstract: This paper presents an approach for the automated testing of Enterprise Resource Planning (ERP) systems. ERPs are complex software systems that provide chain management covering all corporate activities. Testing these systems is a complex task given the close interrelation between the functionalities of system modules, and test automation is an important issue for reducing testing time and cost. To this end, our approach to the automation of ERP system testing is based on modeling the system as a set of business processes, in order to meet business needs and reduce the risk of errors. The methodology combines system modeling with tools that manage the automation of ERP tests. Following this approach, a number of test purposes are defined from the basic requirements, and test cases are generated using a test generation tool and then run on the ERP system. To illustrate the application of the proposed method, test experiments have been performed on a real case study, the ERP ODOO.

Paper Nr: 71
Title:

Detecting, Opening and Navigating through Doors: A Unified Framework for Human Service Robots

Authors:

Francesco Savarese, Antonio Tejero-de-Pablos, Stefano Quer and Tatsuya Harada

Abstract: For an autonomous robotic system, detecting, opening, and navigating through doors remains a very challenging problem. It involves several hard-to-solve sub-tasks such as recognizing the door, grasping the handle, discriminating between pulling or pushing the door, and detecting locked doors. Previous works tackle individual sub-problems, assuming that the robot is already facing the door handle or that it knows in advance the exact location of the door. However, ignoring the navigation through the door, using specialized robots, or targeting specific types of doors reduces the applicability of existing approaches. In this paper, we present a unified framework for the door opening problem, taking a navigation scenario as a reference. We implement specific algorithms to solve each sub-task, and describe the hierarchical automaton which integrates the control of the robot during the entire process. Moreover, we implement error recovery mechanisms to add robustness and to guarantee a high success rate. We carry out experiments in a realistic scenario using a standard service robot, the Toyota Human Support Robot. We show that our framework can successfully detect, open, and navigate through doors in a reliable way, with low error rates, and without adapting the environment to the robot. Our experiments demonstrate the high applicability of our framework.

Paper Nr: 89
Title:

Cooperative Energy Management Software for Networked Microgrids

Authors:

Ilyes Naidji, Olfa Mosbahi, Mohamed Khalgui and Abdelmalik Bachir

Abstract: Smart distribution systems are critical cyber-physical energy systems that consist of multiple networked microgrids (MGs) with a distributed architecture. The main problem in these cyber-physical energy systems is how to manage energy sources to achieve an efficient and economic energy supply. This paper proposes cooperative energy management software (EMS) for networked MGs by explicitly modeling their cooperative behavior. The network of MGs autonomously self-organizes into multiple stable coalitions to achieve an efficient and economic energy exchange. A coalition consists of several MGs that exchange energy at competitive prices to maximize their utility. We formulate the energy management problem in networked MGs as a coalition formation game between MGs. We develop a merge-and-split-based coalition formation (MSCF) algorithm to ensure the stability of the formed coalitions and maximize the profits of the MGs. We then design an intra-coalition energy transfer (ICET) algorithm for transferring energy between MGs within the same coalition so as to minimize power loss. The simulation results demonstrate a satisfactory performance, with profit maximization exceeding 21% and power loss reduction exceeding 51%, thanks to the proposed cooperative energy management software.

Short Papers
Paper Nr: 13
Title:

A MapReduce based Approach for Circle Detection

Authors:

Mateus A. Coelho, Dylan N. Sugimoto, Gabriel A. Melo, Vitor V. Curtis and Juliana M. Bezerra

Abstract: Circle detection algorithms applied to images are used in different contexts and areas, such as bacteria identification in medicine and ball identification in humanoid robot soccer competitions. Specialization and processing time are critical issues in existing algorithms and implementations, as good detection results across different situations usually come at the cost of execution time. To deal with this trade-off between specialization and performance, this paper proposes a parallel algorithm for circle detection using the Hough Transform and the MapReduce paradigm. We also compare its performance with its serial implementation and with the one provided by the OpenCV library. The proposed approach is useful for maintaining an acceptable execution time while increasing the quality of the results; moreover, it is general in terms of usability, which aids the identification of circles in different circumstances and changing environments.
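The combination of Hough voting with map and reduce phases can be sketched as follows (our own simplified illustration with a fixed, known radius; the paper's parallel implementation is more general): each edge point "maps" to votes for candidate centers, the "reduce" step merges the vote tables, and the most-voted cell is taken as the center.

```python
import math
from collections import Counter
from functools import reduce

RADIUS = 5  # assumed known here; a full Hough transform also sweeps radii

def map_votes(point):
    """One edge point votes for every center at distance RADIUS from it."""
    x, y = point
    votes = Counter()
    for deg in range(0, 360, 10):
        a = math.radians(deg)
        votes[(round(x - RADIUS * math.cos(a)),
               round(y - RADIUS * math.sin(a)))] += 1
    return votes

def detect_center(edge_points):
    # Map phase: per-point vote tables; reduce phase: merge them all.
    tallies = reduce(lambda a, b: a + b, map(map_votes, edge_points))
    return tallies.most_common(1)[0][0]

# Synthetic edge points sampled from a circle centered at (20, 30).
points = [(round(20 + RADIUS * math.cos(math.radians(d))),
           round(30 + RADIUS * math.sin(math.radians(d))))
          for d in range(0, 360, 15)]
print(detect_center(points))
```

Because the per-point vote tables are independent, the map phase parallelizes trivially, which is the property the MapReduce formulation exploits.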

Paper Nr: 22
Title:

Exploring DDoS Mechanisms

Authors:

Bruno S. Neves, Filipe M. Leite, Lucas S. Jorge, Rahyan G. Paiva, Juliana M. Bezerra and Vitor V. Curtis

Abstract: Concern about cyber security has attracted the attention of organizations and public services over the last few years due to the importance of confidentiality, integrity, and availability of services and sensitive data. Many recent episodes of high-impact cyber attacks have been performed using extremely simple techniques that take advantage of the hidden weaknesses of current systems. The main purpose of this article is to motivate academia to mitigate unexpected potential threats by taking the attacker’s perspective and exposing the latent weaknesses of current systems, since the majority of such papers focus only on the defense perspective. We analyze the potential impact of Denial of Service (DoS), a very popular cyber attack that affects the availability of a victim server, through different mechanisms and topologies. Specifically, we simulate and analyze the impact of Distributed DoS (DDoS) using the List and Binary Tree topologies along with Continuous or Pulsating streams of requests. In order to improve the potential of the analyzed DDoS methods, we also introduce a new technique to protect them against mitigation by security systems.
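A back-of-the-envelope model (our own simplification, not the article's simulation) already shows why the topology comparison matters: the number of forwarding hops an attack command needs to reach all n bots differs drastically between the two structures.

```python
import math

def list_depth(n):
    """Chain topology: each bot forwards the command to the next one."""
    return n - 1

def tree_depth(n):
    """Complete binary tree: the command fans out, depth grows as log2(n)."""
    return math.ceil(math.log2(n + 1)) - 1

for n in (7, 1023):
    print(n, list_depth(n), tree_depth(n))
```

With 1023 bots, a chain needs 1022 sequential hops while a complete binary tree needs only 9, so a tree-shaped botnet can synchronize pulsating bursts far more tightly.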

Paper Nr: 65
Title:

The APOGEE Software Platform for Construction of Rich Maze Video Games for Education

Authors:

Boyan Bontchev, Dessislava Vassileva and Yavor Dankov

Abstract: Nowadays, the integration of serious video games into educational and training processes is becoming more and more popular. This paper outlines the software architecture of an innovative online platform for the automated construction of educational video games, which will allow non-IT professionals such as teachers, pedagogues, and educationalists to design, automatically generate and personalize educational video games based on a formal descriptive game model. The games are rich educational video mazes providing didactic multimedia content personalized to various characteristics of the player. The construction process includes three stages: game design, game validation, and game generation. The integration of analytics tools into the platform will monitor all of the platform's data and processes, and will thus help platform users to make more adaptive, effective, and efficient video maze games for education.

Paper Nr: 76
Title:

AES: Automated Evaluation Systems for Computer Programming Course

Authors:

Shivam, Nilanjana Goswami, Veeky Baths and Soumyadip Bandyopadhyay

Abstract: Automatically evaluating descriptive-type questions is an important problem. In this paper, we concentrate on the programming language course, a core subject in the CS discipline, and present a tool prototype that evaluates descriptive-type questions in a computer programming course using the notion of program equivalence.
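
As a hint of what evaluation via program equivalence could mean in practice, here is a deliberately naive Java sketch (our illustration, not the paper's tool) that approximates equivalence of two submissions by comparing their outputs on a set of test inputs; true program equivalence is undecidable in general, so the paper's notion is necessarily richer than this:

```java
import java.util.function.IntUnaryOperator;

// Naive approximation of program equivalence by testing: two programs
// (here modeled as int -> int functions) are considered equivalent on a
// given set of inputs if they agree on every one of them.
public class EquivCheck {
    static boolean equivalentOn(int[] inputs, IntUnaryOperator p1, IntUnaryOperator p2) {
        for (int x : inputs) {
            if (p1.applyAsInt(x) != p2.applyAsInt(x)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        IntUnaryOperator reference = x -> x * x; // instructor's solution
        IntUnaryOperator student   = x -> x * x; // student's submission
        int[] tests = {-3, 0, 1, 7, 100};
        System.out.println(equivalentOn(tests, reference, student)); // true
    }
}
```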

Paper Nr: 86
Title:

Ethics by Agreement in Multi-agent Software Systems

Authors:

Vivek Nallur and Rem Collier

Abstract: Most attempts at inserting ethical behaviour into autonomous machines adopt the ‘designer’ approach, i.e., the ethical principles/behaviour to be implemented are known in advance. Typical approaches include rule-based evaluation of moral choices, reinforcement learning, and logic-based approaches. All of these approaches assume a single moral agent interacting with a moral recipient. This paper argues that there will be more frequent cases where the moral responsibility for a situation lies among multiple actors, and hence a designed approach will not suffice. We posit that an emergence-based approach offers a better alternative to designed approaches. Further, we outline one possible mechanism by which such an emergent morality might be introduced into autonomous agents.

Paper Nr: 91
Title:

Towards Extracting the Role and Behavior of Contributors in Open-source Projects

Authors:

Michail D. Papamichail, Themistoklis Diamantopoulos, Vasileios Matsoukas, Christos Athanasiadis and Andreas L. Symeonidis

Abstract: Lately, the popular open source paradigm and the adoption of agile methodologies have changed the way software is developed. Effective collaboration within software teams has become crucial for building successful products. In this context, harnessing the data available in online code hosting facilities can help towards understanding how teams work and optimizing the development process. Although there are several approaches that mine contribution data, they usually view contributors as a uniform body of engineers and focus mainly on the aspect of productivity while neglecting the quality of the work performed. In this work, we design a methodology for identifying engineer roles in development teams and for determining the behaviors that prevail for each role. Using a dataset of GitHub projects, we perform clustering along the DevOps axis, thus identifying three roles: developers who are mainly preoccupied with code commits, operations engineers who focus on task assignment and acceptance testing, and the lately popular role of DevOps engineers, who are a mix of both. Our analysis further extracts behavioral patterns for each role, thus assisting team leaders in knowing their team and effectively directing responsibilities to achieve optimal workload balancing and task allocation.

Paper Nr: 94
Title:

Towards using Data to Inform Decisions in Agile Software Development: Views of Available Data

Authors:

Christoph Matthies and Guenter Hesse

Abstract: Software development comprises complex tasks which are performed by humans. It involves problem solving, domain understanding and communication skills as well as knowledge of a broad variety of technologies, architectures, and solution approaches. As such, software development projects include many situations where crucial decisions must be made. Making the appropriate organizational or technical choices for a given software team building a product can make the difference between project success or failure. Software development methods have introduced frameworks and sets of best practices for certain contexts, providing practitioners with established guidelines for these important choices. Current Agile methods employed in modern software development have highlighted the importance of the human factors in software development. These methods rely on short feedback loops and the self-organization of teams to enable collaborative decision making. While Agile methods stress the importance of empirical process control, i.e. relying on data to make decisions, they do not prescribe in detail how this goal should be achieved. In this paper, we describe the types and abstraction levels of data and decisions within modern software development teams and identify the benefits that usage of this data enables. We argue that the principles of data-driven decision making are highly applicable, yet underused, in modern Agile software development.

Paper Nr: 102
Title:

QCOF: New RPL Extension for QoS and Congestion-Aware in Low Power and Lossy Network

Authors:

Yousra Ben Aissa, Hanen Grichi, Mohamed Khalgui, Anis Koubaa and Abdelmalik Bachir

Abstract: Low power and lossy networks (LLNs) require a routing protocol that operates under real-time and energy constraints while being congestion-aware and respecting packet priorities. The Routing Protocol for Low power and lossy networks (RPL) is recommended by the Internet Engineering Task Force (IETF) for LLN applications. In RPL, nodes select their optimal paths towards their preferred parents according to routing metrics that are injected into the objective function (OF). However, RPL does not impose any routing metric and leaves it open to implementation. In this paper, we propose QCOF, a new RPL objective function based on quality of service (QoS) and congestion awareness. In case paths fail, we define new RPL control messages that enrich the network by adding more routing nodes. Extensive simulations show that QCOF achieves significant improvement over the existing objective functions and appropriately satisfies real-time applications under QoS and network congestion constraints.

Paper Nr: 104
Title:

Towards Integrated Failure Recovery for Web Service Composition

Authors:

Paul Diac and Emanuel Onica

Abstract: Web Service Composition (WSC) aims at facilitating the use of multiple Web Services for solving a given task. Automatic WSC is particularly focused on automating the composition process following the specifications of the involved actors. There are many initiatives developed in this direction, but most of them stop at describing the process of building the composition, while some also study the coordination of the execution. However, research on specific solutions for cases of service failure is typically not integrated with composition algorithms and mostly relies on costly additional measures. Fault tolerance is important for developers relying on the compositions, since their applications are rarely intended for transient usage and require high availability over time. Naively re-computing the compositions from scratch can be a waste of resources for the composition engine. In this paper, we propose addressing web service faults by preparing backup compositions as part of a fallback mechanism, which efficiently integrates with the base composition algorithm.

Paper Nr: 115
Title:

Knowledge Hubs in Competence Analytics: With a Case Study in Recruitment and Selection

Authors:

Emiel Caron and Saša Batistic

Abstract: There is a lack of consensus on the usefulness of Human Resource (HR) analytics for achieving better business results. The authors suggest this is due to the lack of empirical evidence demonstrating how the use of data in the HR field makes a positive impact on performance, to the detachment of the HR function from accessible data, and to the typically poor IT infrastructure in place. We provide an in-depth case study of strategic competence analytics, an important part of HR analytics, in a large multinational company, labelled ABC, which yields two important contributions. First, we contribute to the HR analytics literature by providing a data-driven competency model to improve the recruitment and selection process. This model is used by the organization to search more effectively for talent in its knowledge networks. Second, we further develop a model for data-driven competence analytics, thus also contributing to the information systems literature, by developing specialized analytics for HR and by finding appropriate forms of computerized network analysis for identifying and analysing knowledge hubs. Overall, our approach shows how internal and external data triangulation and better IT integration make a difference for the recruitment and selection process. We conclude by discussing our model's implications for future research and practice.

Paper Nr: 118
Title:

Efficiently Finding Optimal Solutions to Easy Problems in Design Space Exploration: A* Tie-breaking

Authors:

Thomas Rathfux, Hermann Kaindl, Ralph Hoch and Franz Lukasch

Abstract: Using design space exploration (DSE), certain real-world problems can be made solvable through (heuristic) search. A meta-model of the domain and transformation rules defined on top of it specify the search space. For example, we previously (meta-)modeled specific problems in the context of reusing hardware/software interfaces (HSIs) in automotive systems, and defined transformation rules that lead from a model of one specific HSI to another one. Based on that, a minimal number of adaptation steps can be found using best-first A* searches that lead from a given HSI to another one fulfilling new requirements. A closer look revealed that these problems involved a few reconfigurations, but often no reconfiguration is necessary at all. For such trivial problem instances, no real search should be necessary, but in general, it is. After it became clear that a good tie-breaker for the many nodes with the same minimal value of the evaluation function of A* was the key to success, we performed experiments with the tie-breakers recently found best in the literature, all based on a last-in-first-out (LIFO) strategy in contrast to the previous belief that using minimal h-values would be the best tie-breakers. However, our experiments provide empirical evidence in our real-world domain that tie-breakers based on minimal h-values can indeed be (statistically significantly) better than a tie-breaker based on LIFO. In addition, they show that the best tie-breakers for the more difficult problems are also best for the trivial problem instances without reconfigurations, where they really make a difference.
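
To make the role of tie-breaking concrete, the following Java sketch (our illustration; names and values are hypothetical, not from the paper) orders an A* open list by f = g + h and breaks ties by the minimal h-value, the strategy the abstract reports as performing best:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Illustrative A* open-list ordering with explicit tie-breaking:
// nodes are compared by f = g + h; among equal f-values, the smaller h
// wins, which prefers nodes closer to a goal along a promising path.
public class TieBreak {
    record Node(String id, int g, int h) { int f() { return g + h; } }

    static Comparator<Node> byFThenMinH() {
        return Comparator.comparingInt(Node::f).thenComparingInt(Node::h);
    }

    public static void main(String[] args) {
        PriorityQueue<Node> open = new PriorityQueue<>(byFThenMinH());
        open.add(new Node("a", 2, 3)); // f = 5, h = 3
        open.add(new Node("b", 4, 1)); // f = 5, h = 1 -> expanded first
        open.add(new Node("c", 1, 5)); // f = 6
        System.out.println(open.poll().id()); // prints b
    }
}
```

A LIFO tie-breaker, by contrast, would compare equal-f nodes by recency of insertion instead of by h.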

Paper Nr: 120
Title:

Formalizing a Policy-based Compliance Control Solution with Event-B

Authors:

Laura González and Raúl Ruggia

Abstract: Compliance management is gaining increasing interest in inter-organizational service-oriented systems, which are usually supported by integration platforms. Due to their mediation role and capabilities, these platforms constitute a convenient infrastructure for controlling compliance requirements affecting message exchanges between organizations. This paper proposes a formal model for a policy-based compliance control solution introduced in our previous work for such platforms. The model, which was developed using Event-B, provides unambiguous specifications and enables formal proofs as well as the verification of the solution operation.

Paper Nr: 6
Title:

Business Processes and Chains of Kazakhstan: How Do Organisations Start Blockchain Projects and Evolve Throughout?

Authors:

Bolatzhan Kumalakov, Yassynzhan Shakan and Moldir Nakibayeva

Abstract: The blockchain is an emerging technology which promises to revolutionise the way trust in the digital world is perceived. Nonetheless, we still know little about how and when to apply it in a modern business setting; in particular, how does one identify the business process under the greatest threat, or the one that would benefit most from being blockchained? This paper presents our first attempt to understand the reasoning behind the introduction of the blockchain in numerous organisations. We were especially interested in how decisions were made, what the perception of the technology was at the time, and which changes had to be implemented to facilitate the introduction. The research method was qualitative - a set of semi-structured interviews - as there is no academic source information that we could identify. The obtained results show that there is a worrying difference in technology perception between technical and managerial staff, and that decisions on technology adoption have a clearly hype-driven and top-down nature. Eventually, we conclude that there is a clear need for further study using quantitative tools and a wider selection of respondents to develop recommendations or come up with a systematic approach.

Paper Nr: 12
Title:

A Blockchain Approach to Support Digital Contracts

Authors:

Victor H. Breder, Laurival C. Neto, Thiago F. Medeiros, Ivan M. Padalko, Guilherme S. Oliveira, Vitor V. Curtis and Juliana M. Bezerra

Abstract: Distributed computing that works with policies that flirt with democracy, in which parties can interact without an intermediary, has gained strength. In this context, an attractive idea is to support digital contracts by removing the third party and allowing the group to create and store contracts in a reliable and secure way, where contracts are immutable and easily retrievable. We propose a Blockchain approach to aid the management of digital contracts. The proposal considers contract encryption, digital signatures, and protocols for chaining blocks with data related to digital contracts. We developed a prototype to demonstrate the viability of our proposal, and we present a use case to illustrate its usage. We argue that the proposal and implementation together create an appropriate environment for research and education purposes.
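
To illustrate the chaining of blocks the abstract mentions, here is a minimal, self-contained Java sketch (our illustration, not the authors' protocol, and deliberately omitting the encryption and digital-signature aspects) in which each block stores a contract payload and the hash of the previous block, so that any tampering breaks the chain:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Minimal hash-chained store for contract payloads (illustrative only):
// each block records the previous block's SHA-256 hash, so modifying any
// earlier contract invalidates every later block's linkage.
public class ContractChain {
    static String sha256(String data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(data.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    record Block(String contract, String prevHash, String hash) {}

    // Append a new block; a null predecessor denotes the genesis block.
    static Block append(Block prev, String contract) throws Exception {
        String prevHash = prev == null ? "0" : prev.hash();
        return new Block(contract, prevHash, sha256(prevHash + contract));
    }

    // Check that a block correctly links to its predecessor.
    static boolean verify(Block prev, Block b) throws Exception {
        return b.prevHash().equals(prev == null ? "0" : prev.hash())
            && b.hash().equals(sha256(b.prevHash() + b.contract()));
    }

    public static void main(String[] args) throws Exception {
        Block genesis = append(null, "contract-A");
        Block second = append(genesis, "contract-B");
        System.out.println(verify(genesis, second)); // true
    }
}
```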

Paper Nr: 24
Title:

Proactive Model for Handling Conflicts in Sensor Data Fusion Applied to Robotic Systems

Authors:

Gilles Neyens and Denis Zampunieris

Abstract: Robots have to be able to function in a multitude of different situations and environments. To help them achieve this, they are usually equipped with a large set of sensors whose data will be used in order to make decisions. However, the sensors can malfunction, be influenced by noise or simply be imprecise. Existing sensor fusion techniques can be used in order to overcome some of these problems, but we believe that data can be improved further by computing context information and using a proactive rule-based system to detect potentially conflicting data coming from different sensors. In this paper we will present the architecture and scenarios for a generic model taking context into account.

Paper Nr: 29
Title:

Cloud Decision Support System for Risk Management in Railway Transportation

Authors:

Wojciech Górka, Jacek Bagiński, Michał Socha, Tomasz Steclik, Dawid Leśniak, Marek Wojtas, Barbara Flisiuk and Marcin Michalak

Abstract: The paper features the development of a decision support system for railway transportation based on risk management. The implementation of risk management in this area is required by EU and national regulations. At present, there are no dedicated systems that would provide complex support of this process and, at the same time, enable the exchange of experience and the development of a common knowledge base, not only on the level of a particular railway company but also across the whole industry. The solution presented in this paper assists the personnel responsible for risk management, allows them to use a knowledge base and expertise, and supports data exchange between particular organizations on the railway market.

Paper Nr: 49
Title:

The Saga Pattern in a Reactive Microservices Environment

Authors:

Martin Štefanko, Ondřej Chaloupka and Bruno Rossi

Abstract: Transaction processing is a critical aspect of modern software systems. Such criticality increased over the years with the emergence of microservices, calling for appropriate management of transactions across separated application domains, ensuring the whole system can recover and operate in a possible degraded state. The Saga pattern emerged as a way to define compensating actions in the context of long-lived transactions. In this work, we discuss the relation between traditional transaction processing models and the Saga pattern targeting specifically the distributed environment of reactive microservices applications. In this context, we provide a comparison of the current state of transaction support in four Java-based enterprise application frameworks for microservices support: Axon, Eventuate Event Sourcing (ES), Eventuate Tram, and MicroProfile Long Running Actions (LRA).
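
A framework-agnostic sketch of the Saga pattern's core idea (our illustration, independent of the Axon, Eventuate, and MicroProfile LRA APIs compared in the paper): each successful local step registers a compensating action, and a failure triggers the registered compensations in reverse order:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the Saga pattern: every completed local transaction pushes a
// compensating action; when a later step fails, the compensations run in
// LIFO order, undoing the saga's earlier effects.
public class SagaSketch {
    interface Step { void run() throws Exception; }

    private final Deque<Runnable> compensations = new ArrayDeque<>();

    // Run one step; remember its compensation on success, roll back on failure.
    public boolean execute(Step step, Runnable compensation) {
        try {
            step.run();
            compensations.push(compensation);
            return true;
        } catch (Exception e) {
            rollback();
            return false;
        }
    }

    private void rollback() {
        while (!compensations.isEmpty()) {
            compensations.pop().run(); // undo completed steps in reverse order
        }
    }

    public static void main(String[] args) {
        SagaSketch saga = new SagaSketch();
        StringBuilder log = new StringBuilder();
        saga.execute(() -> log.append("reserve;"), () -> log.append("cancelReserve;"));
        saga.execute(() -> { throw new Exception("payment failed"); },
                     () -> log.append("refund;"));
        System.out.println(log); // reserve;cancelReserve;
    }
}
```

The frameworks discussed in the paper add what this sketch omits: persistence of the saga state, message-driven coordination across services, and recovery after crashes.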

Paper Nr: 75
Title:

Teaching Design-by-Contract for the Modeling and Implementation of Software Systems

Authors:

Mert Ozkaya

Abstract: Defensive programming is considered a software design approach that promotes reliable software development by considering different cases for the software modules. Design-by-Contract (DbC) applies defensive programming systematically in terms of contracts: pairs of pre-conditions on the module input and post-conditions on the module output. In this paper, a DbC-based teaching methodology is proposed, which aims to teach undergraduate students how to use contracts for the modeling and implementation of software systems. The teaching methodology consists of three consecutive steps. Firstly, the students will learn how to model software architectures in terms of components and their communication links. The component behaviours are specified as contracts, which are attached to the messages that the components exchange. In the second step, the students will learn how to implement the contractual software architectures in Java using Java's assertion mechanism, in a way that the contractual design decisions are all preserved in the code. Lastly, the students will learn the Java Modeling Language for combining the contractual modeling and the Java implementation in a single model, in order to avoid inconsistencies between the model and the implementation and to automatically verify the correctness of the implementation against the modeled behaviours.
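
The second step of the methodology, expressing contracts with Java's assertion mechanism, can be sketched as follows (the class and method names are our own illustration, not taken from the paper; note that assertions must be enabled with `java -ea`):

```java
// Design-by-Contract sketch using Java's assert statement: pre-conditions
// guard the inputs, post-conditions check the outputs, and a class
// invariant is re-checked after every state change.
public class Account {
    private long balance; // in cents; invariant: balance >= 0

    public Account(long initialBalance) {
        assert initialBalance >= 0 : "pre: initial balance must be non-negative";
        this.balance = initialBalance;
        assert invariant() : "invariant violated after construction";
    }

    // Pre-condition: amount > 0; post-condition: balance grows by amount.
    public void deposit(long amount) {
        assert amount > 0 : "pre: deposit amount must be positive";
        long old = balance;
        balance += amount;
        assert balance == old + amount : "post: balance must grow by amount";
        assert invariant() : "invariant violated after deposit";
    }

    private boolean invariant() { return balance >= 0; }

    public long balance() { return balance; }

    public static void main(String[] args) {
        Account a = new Account(1000);
        a.deposit(250);
        System.out.println(a.balance()); // prints 1250
    }
}
```

The Java Modeling Language used in the third step moves such conditions into `//@ requires` and `//@ ensures` annotations, so they become part of a checkable specification rather than runtime-only checks.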

Paper Nr: 82
Title:

Approach for Variability Management of Legal Rights in Human Resources Software Product Lines

Authors:

M. Derras, L. Deruelle, J.-M. Douin, N. Levy, F. Losavio, R. O. Mahamane and V. Reiner

Abstract: This work concerns software product lines (SPL); it stems from the experience gained collaborating with Berger-Levrault, a French company that is a leader in Human Resources systems and serves many French and European territorial communities. The company had a variability problem associated with the differences in applicable legal rights across countries or territories, and handling this variability manually came at a high cost. On the other hand, functionalities were common and mandatory and did not vary much. The crucial issue in SPL development and practice is to manage the correct selection of variants. However, no standard methods have been developed yet, and industry builds SPLs using on-the-market or in-house techniques and methods, aware of the benefits a product line can provide; nevertheless, this development must return the investment, and this is not always the case. In this work, an approach to variability management in the case of legal rights applicable to different entities is proposed. This architecture-centric and quality-based approach uses a reference architecture that has been built with a bottom-up strategy. Variability is incorporated into the reference architecture at an abstract level, considering non-functional properties. A “production plan” to reduce the gap between the abstraction and implementation levels is defined.

Paper Nr: 83
Title:

Evaluating Gantt Project, Orange Scrum, and ProjeQtOr Open Source Project Management Tools using QSOS

Authors:

Catarina R. Proença and Jorge Bernardino

Abstract: Managing a software project is an extremely complex job, drawing on many personal, team, and organizational resources. Many project management tools are being developed every day to help managers automate the administration of individual projects or groups of projects during their life-cycle. It is therefore important to identify the functionalities of each tool and compare them with the intended requirements of the company, in order to select the software that best meets expectations. This paper presents a comparison between three of the most popular open source project management tools: Gantt Project, OrangeScrum, and ProjeQtOr. To assess these project management tools, the Qualification and Selection of Open Source software (QSOS) methodology is used.

Paper Nr: 93
Title:

npm Packages as Ingredients: A Recipe-based Approach

Authors:

Kyriakos C. Chatzidimitriou, Michail D. Papamichail, Themistoklis Diamantopoulos, Napoleon-Christos Oikonomou and Andreas L. Symeonidis

Abstract: The sharing and growth of open source software packages in the npm JavaScript (JS) ecosystem has been exponential, not only in numbers but also in terms of interconnectivity, to the extent that the size of dependencies has often become larger than the size of the written code. This reuse-oriented paradigm, often attributed to the lack of a standard library in node and/or to the micropackaging culture of the ecosystem, yields interesting insights into the way developers build their packages. In this work we view the dependency network of the npm ecosystem from a “culinary” perspective: we assume that dependencies are the ingredients of a recipe, which corresponds to the produced software package. We employ network analysis and information retrieval techniques in order to capture the dependencies that tend to co-occur in the development of npm packages and to identify the communities that have evolved as the main drivers of npm's exponential growth.

Paper Nr: 103
Title:

Java Web Services: A Performance Analysis

Authors:

Pedro E. Silva and Jorge Bernardino

Abstract: Service-oriented architecture (SOA) is being increasingly used by developers in both web and mobile applications. Within web services there are two main implementations: the SOAP communication protocol and REST. This work presents a comparative study of the performance of these two types of web services, SOAP versus REST, and analyses factors that may affect the efficiency of applications based on this architecture. In this experimental evaluation we used an application deployed on a Wildfly server and the JMeter test tool to launch requests with different numbers of threads and calls. Contrary to the common idea that REST web services are significantly faster than SOAP, our results show that REST web services are only 1% faster than SOAP. As this programming paradigm is used in a growing number of client and server applications, we conclude that the REST implementation is more efficient for systems that have to respond to fewer calls but handle more requests per connection.