ICSOFT 2017 Abstracts


Area 1 - Software Systems and Applications

Full Papers
Paper Nr: 8
Title:

Multi-agent Deep Reinforcement Learning for Task Allocation in Dynamic Environment

Authors:

Dhouha Ben Noureddine, Atef Gharbi and Samir Ben Ahmed

Abstract: The task allocation problem in a distributed environment is one of the most challenging problems in a multi-agent system. We propose a new task allocation process using deep reinforcement learning that allows cooperating agents to act autonomously and learn how to communicate with neighboring agents in order to allocate tasks and share resources. Through these learning capabilities, agents are able to reason effectively, generate an appropriate policy and make good decisions. Our experiments show that it is possible to allocate tasks using deep Q-learning and, more importantly, demonstrate the performance of our distributed task allocation approach.
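The abstract does not spell out the learning rule; as an illustrative sketch only, here is the tabular Q-learning update that deep Q-learning generalises with a neural network, applied to a hypothetical two-state allocation scenario (the states, tasks and rewards below are invented for illustration):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    Deep Q-learning replaces the table Q with a neural network approximator."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Hypothetical two-state example: an idle agent chooses which task to take.
Q = {"idle": {"task_a": 0.0, "task_b": 0.0},
     "busy": {"task_a": 0.0, "task_b": 0.0}}
q_update(Q, "idle", "task_a", reward=1.0, next_state="busy")
print(round(Q["idle"]["task_a"], 3))  # 0.1
```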

Paper Nr: 13
Title:

An Improved Approach for Class Test Ordering Optimization using Genetic Algorithms

Authors:

Istvan Gergely Czibula, Gabriela Czibula and Zsuzsanna Marian

Abstract: Identifying the order in which the application classes have to be tested during the integration testing of object-oriented software systems is essential for reducing the testing effort. The Class Integration Test Order (CITO) problem refers to determining the test class order that minimizes the stub creation cost, and consequently the testing effort. The goal of this paper is to propose an efficient approach to class integration test order optimization using a genetic algorithm with stochastic acceptance, the main aim being to minimize the stubbing effort needed during class-based integration testing. In our proposal, the complexity of creating a stub is estimated by assigning weights to the different types of dependencies in the software system’s Object Relation Diagram. The experimental evaluation is performed on two synthetic examples and five software systems often used in the literature on class integration test ordering. The results obtained using our approach are better than those of the existing related works that provide experimental results on the case studies considered in this paper.
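The abstract names a genetic algorithm with stochastic acceptance without detailing it; a minimal sketch of the stochastic-acceptance selection step (in the style of Lipowski and Lipowska's roulette wheel), with an invented toy fitness over candidate class test orders, might look like:

```python
import random

def select_stochastic_acceptance(population, fitness, rng=random):
    """Roulette-wheel selection via stochastic acceptance: repeatedly pick a
    random individual and accept it with probability fitness / max_fitness.
    Expected O(1) time per selection, unlike the O(n) cumulative-sum wheel."""
    f_max = max(fitness(ind) for ind in population)
    while True:
        ind = rng.choice(population)
        if rng.random() < fitness(ind) / f_max:
            return ind

# Hypothetical fitness: candidate class test orders scored so that orders
# needing fewer (weighted) stubs get higher values. All numbers are invented.
orders = [("A", "B", "C"), ("B", "A", "C"), ("C", "B", "A")]
scores = {("A", "B", "C"): 3.0, ("B", "A", "C"): 2.0, ("C", "B", "A"): 1.0}
picked = select_stochastic_acceptance(orders, scores.get)
print(picked in orders)  # True
```

Fitter orders are returned proportionally more often, which is the selection pressure the GA relies on.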

Paper Nr: 24
Title:

A Fuzzy Logic-based Approach for Assessing the Quality of Business Process Models

Authors:

Fadwa Yahya, Khouloud Boukadi, Hanêne Ben-Abdallah and Zakaria Maamar

Abstract: Similar to software products, the quality of a Business Process (BP) model is vital to the success of all the phases of its lifecycle. Indeed, a high-quality BP model paves the way to the successful implementation, execution and performance of the business process. In the literature, the quality of a BP model has been assessed through either the application of formal verification or, most often, the evaluation of quality metrics calculated on the static and/or simulated model. Each of these assessment means addresses different quality characteristics and meets particular analysis needs. In this paper, we adopt metrics-based assessment to evaluate the quality of business process models, modeled with Business Process Model and Notation (BPMN), in terms of their comprehensibility and modifiability. We propose a fuzzy logic-based approach that uses existing quality metrics to assess the attainment level of these two quality characteristics. By analyzing the static model, the proposed approach is easy and fast to apply. In addition, it overcomes the threshold determination problem by mining a repository of BPMN models. Furthermore, by relying on fuzzy logic, it resembles human reasoning during the evaluation of the quality of business process models. We illustrate the approach through a case study and its tool support system developed under the Eclipse framework. A preliminary experimental evaluation of the proposed system shows encouraging results.
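The paper's actual metrics, membership functions and rules are not given in the abstract; the following toy sketch only illustrates the general fuzzy-logic pattern it relies on (triangular memberships, a Mamdani-style rule pair, weighted-average defuzzification) on a single invented metric with made-up thresholds:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 for x = b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzification of one BPMN metric (number of activities) into
# 'low' / 'high' complexity; the thresholds are illustrative, not the paper's.
def comprehensibility(n_activities):
    low = tri(n_activities, -1, 0, 30)     # few activities -> easy to read
    high = tri(n_activities, 20, 50, 100)  # many activities -> hard to read
    # Rule pair: IF complexity low THEN comprehensibility good (score 1.0),
    # IF complexity high THEN poor (score 0.0); weighted-average defuzzification.
    return (low * 1.0 + high * 0.0) / (low + high) if (low + high) else 0.5

print(comprehensibility(10) > comprehensibility(40))  # True
```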

Paper Nr: 28
Title:

From Document Warehouse to Column-Oriented NoSQL Document Warehouse

Authors:

Ines Ben Messaoud, Refka Ben Ali and Jamel Feki

Abstract: NoSQL (Not only SQL) gathers recent solutions that differ from the SQL model by a different logic of data representation. It is characterized by its performance and its ability to handle large amounts of data. Due to the absence of a clear approach to implementing a Document Warehouse (DocW) under the NoSQL model, we propose, in this paper, a set of rules to transform the multidimensional galaxy model of the DocW into the column-oriented NoSQL model. We suggest two types of transformations, namely Simple and Hierarchical. In order to validate our proposed transformation rules, we have used Cassandra as a column-oriented NoSQL system to implement a DocW for each type of transformation. We used the Talend Data Integration tool to load data into the implemented DocWs. We evaluate these two DocWs with two metrics, WRL (Write Request Latency) and RRL (Read Request Latency), using a medical collection.

Paper Nr: 32
Title:

A Wise Object Framework for Distributed Intelligent Adaptive Systems

Authors:

Ilham Alloui and Flavien Vernier

Abstract: Designing Intelligent Adaptive Distributed Systems is an open research issue addressing nowadays technologies such as Communicating Objects (COT) and the Internet of Things (IoT) that increasingly contribute to our daily life (mobile phones, computers, home automation, etc.). The complexity and sophistication of those systems make them hard to understand and to master by human users, in particular end-users and developers, who are very often involved in learning processes that capture all their attention while being of little interest to them. To alleviate human interaction with such systems and to help developers produce them, we propose WOF, an object-oriented framework founded on the concept of Wise Object (WO). A WO is a software-based entity that is able to learn about itself and about others (e.g. its environment). Wisdom refers to the experience such an object acquires on its own during its life, about its own behavior and the usage made of it. In this paper, we present the WOF conceptual architecture and the Java implementation we built from it. Requirements and design principles of wise systems are presented. To provide application (e.g. home automation system) developers with relevant support, we designed WOF with minimal intrusion into the application source code. The adaptiveness, intelligence and distribution mechanisms defined in WOF are inherited by application classes. In our Java implementation of WOF, object classes produced by a developer inherit the behavior of the Wise Object (WO) class. An instantiated system is then a Wise Object System (WOS) composed of wise objects that interact through an event bus according to the publish-subscribe design pattern.
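The WOF event bus API is not described in the abstract; as an illustrative sketch of the publish-subscribe pattern it relies on (the class, topic and payload names below are invented, not the framework's actual interface):

```python
from collections import defaultdict

class EventBus:
    """Minimal publish-subscribe bus of the kind wise objects could interact
    through: subscribers register handlers per topic; publishers never need
    to know who is listening."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

# A hypothetical wise object announcing an observation about its own usage.
bus = EventBus()
log = []
bus.subscribe("usage", log.append)
bus.publish("usage", {"object": "lamp", "event": "switched_on"})
print(log)  # [{'object': 'lamp', 'event': 'switched_on'}]
```

The decoupling is the point: a wise object can learn from "usage" events without any static reference to the objects emitting them.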

Paper Nr: 47
Title:

A Min-Max Tchebycheff Based Local Search Approach for MOMKP

Authors:

Imen Ben Mansour, Ines Alaya and Moncef Tagina

Abstract: The multi-objective multidimensional knapsack problem (MOMKP), one of the hardest multi-objective combinatorial optimization problems, presents a formal model for many real-world problems. Its main goal consists in selecting a subset of items in order to maximize m objective functions with respect to q resource constraints. For that purpose, we present in this paper a resolution approach based on a Min-Max Tchebycheff iterated local search algorithm called Min-Max TLS. In this approach, we propose designing a neighborhood structure employing a permutation process to exploit the most promising regions of the search space while considering the diversity of the population. Therefore, Min-Max TLS uses Min-Max N(s) as a neighborhood structure, combining a Min-Extraction-Item algorithm and a Max-Insertion-Item algorithm. Moreover, two Tchebycheff functions, used as a selection process, are studied in Min-Max TLS: the weighted Tchebycheff (WT) and the augmented weighted Tchebycheff (AugWT). Experiments are carried out on nine well-known benchmark instances of MOMKP. Results have shown the efficiency of the proposed approach in comparison to other approaches.
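The two scalarizing functions mentioned are standard; a minimal sketch of the weighted Tchebycheff and its augmented variant, evaluated on an invented bi-objective example with a hypothetical ideal point:

```python
def weighted_tchebycheff(f, z_star, w):
    """Weighted Tchebycheff scalarization: max_i w_i * |z*_i - f_i|.
    Smaller values mean the solution is closer to the ideal point z*."""
    return max(wi * abs(zi - fi) for fi, zi, wi in zip(f, z_star, w))

def augmented_weighted_tchebycheff(f, z_star, w, rho=0.01):
    """Augmented variant: adds rho * sum_i |z*_i - f_i| so that weakly
    Pareto-optimal solutions are no longer tied with Pareto-optimal ones."""
    return (weighted_tchebycheff(f, z_star, w)
            + rho * sum(abs(zi - fi) for fi, zi in zip(f, z_star)))

# Illustrative bi-objective profit vectors with an invented ideal point.
z_star = (100.0, 80.0)
w = (0.5, 0.5)
a, b = (90.0, 70.0), (95.0, 60.0)
print(weighted_tchebycheff(a, z_star, w) < weighted_tchebycheff(b, z_star, w))  # True
```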

Paper Nr: 53
Title:

Big Data Analytic Approaches Classification

Authors:

Yudith Cardinale, Sonia Guehis and Marta Rukoz

Abstract: Analytical data management applications, affected by the explosion of the amount of generated data in the context of Big Data, are shifting away their analytical databases towards a vast landscape of architectural solutions combining storage techniques, programming models, languages, and tools. To support users in the hard task of deciding which Big Data solution is the most appropriate according to their specific requirements, we propose a generic architecture to classify analytical approaches. We also establish a classification of the existing query languages, based on the facilities provided to access the big data architectures. Moreover, to evaluate different solutions, we propose a set of criteria of comparison, such as OLAP support, scalability, and fault tolerance support. We classify different existing Big Data analytics solutions according to our proposed generic architecture and qualitatively evaluate them in terms of the criteria of comparison. We illustrate how our proposed generic architecture can be used to decide which Big Data analytic approach is suitable in the context of several use cases.

Paper Nr: 54
Title:

Towards the Layered Evaluation of Interactive Adaptive Systems using ELECTRE TRI Method

Authors:

Amira Dhouib, Abdelwaheb Trabelsi, Christophe Kolski and Mahmoud Neji

Abstract: The layered evaluation of interactive adaptive systems has to consider many evaluation methods. The best evaluation method to use for an individual layer depends on many parameters, such as the evaluation criteria, the stage of the development cycle, and the characteristics of the layer under consideration. This paper presents a decision model for selecting the appropriate evaluation methods for individual layers of an interactive adaptive system. Our proposal is based on one multi-criteria method, namely the ELECTRE TRI method. The proposed decision model is applied to determine the suitable evaluation methods for an adaptive hypermedia system.

Paper Nr: 58
Title:

An Energy Aware Scheduling for Reconfigurable Heterogeneous Systems

Authors:

Ines Ghribi, Riadh Ben Abdallah and Mohamed Khalgui

Abstract: One of the major challenges of computer system design is the management and conservation of energy while satisfying QoS requirements. Recently, Dynamic Voltage and Frequency Scaling (DVFS) has been integrated into various embedded processors as a means to increase battery life without affecting the responsiveness of tasks. This paper proposes an enhancement of the I-codesign methodology [1] that optimizes the energy consumption of the designed system. We propose an energy-aware real-time scheduling algorithm that makes use of a deferrable server for the scheduling of aperiodic tasks along with DVFS. Simulation results demonstrate a decrease in the resulting energy consumption compared to previously published work.
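The abstract mentions DVFS without explaining the saving mechanism; as a back-of-envelope illustration only, dynamic CMOS energy scales roughly as E ≈ C·V²·cycles, so running a fixed workload at a lower frequency (which permits a lower supply voltage) cuts energy. All numbers and the unit capacitance below are invented:

```python
def dynamic_energy(cycles, volt, c_eff=1.0):
    """Rough dynamic-energy model E ~ C_eff * V^2 * cycles. The cycle count of
    a workload is fixed, so lowering the voltage lowers the energy; DVFS
    lowers frequency precisely to make the lower voltage feasible."""
    return c_eff * volt ** 2 * cycles

# Hypothetical operating points: 2 GHz needs 1.2 V, 1 GHz runs at 0.9 V.
full = dynamic_energy(1e9, 1.2)  # fast, high voltage
slow = dynamic_energy(1e9, 0.9)  # slow, low voltage: same work, less energy
print(slow < full)  # True
```

The scheduling problem the paper addresses is exactly the flip side: the slower run takes longer, so deadlines of periodic and aperiodic tasks constrain how far the frequency can drop.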

Paper Nr: 60
Title:

EcoLogic: IoT Platform for Control of Carbon Emissions

Authors:

Tsvetan Tsokov and Dessislava Petrova-Antonova

Abstract: Today, sensors and the Internet of Things (IoT) are naturally present in people’s lives. Billions of interactive devices exchange information about a variety of objects in the physical world. IoT technologies affect the business processes of all major industries such as transportation, manufacturing, healthcare, agriculture, etc. Besides its positive impact on both people and industry, the IoT also provides benefits for the environment: it is recognized as a powerful tool in the fight against climate change and, more specifically, has a significant potential for saving carbon emissions. Taking into account the promising areas of IoT application, this paper proposes a solution for real-time monitoring of vehicles and detection of rising levels of carbon emissions, called EcoLogic. EcoLogic consists of a hardware module that collects sensor data related to vehicles’ carbon emissions and cloud-based applications for data processing, analysis and visualisation. Its primary purpose is to control carbon emissions through smart notifications and vehicle power limitations.

Short Papers
Paper Nr: 6
Title:

Lightweight Multilingual Software Analysis

Authors:

Damian M. Lyons, Anne Marie Bogar and David Baird

Abstract: Large software systems can often be multilingual – that is, written in more than one language. However, many popular software engineering tools are monolingual by nature. Nonetheless, companies are faced with the need to manage their large, multilingual codebases to address issues with security, efficiency, and quality metrics. This paper presents a novel lightweight approach to multilingual software analysis – MLSA. The approach is modular and focused on efficient static analysis computation for large codebases. One topic is addressed in detail – the generation of multilingual call graphs to identify language boundary problems in multilingual code. The algorithm for extracting multilingual call graphs from C/Python codebases is described, and an example is presented. Finally, the state of current testing on a database of programs downloaded from the internet is detailed and the implications for future work are discussed.

Paper Nr: 12
Title:

SAND - A Dashboard for Analyzing Employees’ Social Actions over Google Hangouts

Authors:

Edvan Soares, Marcos Eduardo, Vanilson Burégio, Emir Ugljanin and Zakaria Maamar

Abstract: This paper presents SAND, which stands for Social ActioN Dashboard. It reports, from different perspectives, the social actions that employees in an enterprise execute over social media, with a focus on Google Hangouts. This execution might violate the enterprise’s use policies for social actions, forcing decision makers to take corrective measures. SAND is implemented using different technologies such as Spring Boot and AngularJS.

Paper Nr: 18
Title:

DESTINY: a moDel-driven procESs-aware requiremenTs engineerINg methodologY

Authors:

Mona Ibrahim Jammal, Nouchène Elleuch Ben Ayed and Hanêne Ben-Abdallah

Abstract: Today’s enterprises, independently of their size, depend on the successful development of an automated Information System (IS); this moves them to the software development world. The success of this move is often hindered by the difficulty in collecting the IS requirements to produce software that is aligned with the business logic of the enterprise. This paper proposes an MDA-compliant method that helps the system analyst to generate the IS requirements – modelled through a use case diagram – from the enterprise’s business process modelled in the standard notation BPMN. The proposed method has the merit of producing IS requirements that are aligned with the business process, accounting for the structural and semantic perspectives of the business process, and respecting "best practice" quality guidelines. Compared to existing requirements engineering methods, the proposed method respects the best-practice granularity level of a use case and allows the derivation of requirements relationships as well as requirements refinement.

Paper Nr: 29
Title:

Agile Product Line Engineering: The AgiFPL Method

Authors:

Hassan Haidar, Manuel Kolp and Yves Wautelet

Abstract: Integrating Agile Software Development (ASD) with Software Product Line Engineering (PLE) has resulted in the proposal of Agile Product Line Engineering (APLE). The goal of combining both approaches is to overcome each other's weaknesses while maximizing their benefits. However, combining them represents a big challenge in software engineering. Several methods have been proposed to provide a practical process for applying APLE in organizations, but none covers all the required APLE features. This paper proposes an APLE methodology called AgiFPL, an agile framework for managing evolving product lines, with the intention of addressing the deficiencies identified in current methods while making use of their advantages.

Paper Nr: 33
Title:

A Selection of Development Processes, Tools, and Methods for Organizations that Share a Software Framework between Internal Projects

Authors:

Ciprian I. Paduraru

Abstract: One of the ways organizations save development costs nowadays is to share code between internal projects. Shared frameworks with highly reusable components are usually desired, but their development and maintenance processes usually generate important challenges. This paper describes development processes and methodologies that can be used to reduce the costs of developing and maintaining this kind of shared framework inside an organization, considering distributed development and collaboration between teams with limited resources. Technical aspects for providing extensibility and component reuse, and tools that assist the processes of integration, release, and development, are also presented. The work is supported by experiments and best practices taken from the development of such a shared framework inside a real organization.

Paper Nr: 37
Title:

A Motivating Social Robot to Help Achieve Cognitive Consonance During STEM Learning

Authors:

Khaoula Youssef, Walid Boukadida and Michio Okada

Abstract: In this paper, we show that cognitive consonance can be measured using the perceived cognitive consonance questionnaire that we present, or using three different constructs: prospect, anxiety and learned helplessness. We used different motivating agents and verified whether the student's motivation increased as well. In the second study, we measured cognitive consonance using both the questionnaire and the three constructs that the first study showed to be valid measures of it. This method, called triangulation, helps us make sure whether cognitive consonance has truly increased when we manipulate the motivation construct. Finally, since cognitive consonance increased when a motivating agent was used, we investigated which of three agents (a teacher, a tablet or a robot) leads to the best motivation outcome and may thus help the student strive to answer and focus on difficult scientific questions. Results show that a robot is the best solution for increasing the student's motivation and helping him/her adopt a positive, long-term attitude change, as the student starts to concentrate on the difficult questions rather than jumping to the easy ones.

Paper Nr: 42
Title:

REHLib: New Optimal Implementation of Reconfigurable Energy Harvesting Multiprocessor Systems

Authors:

Wiem Housseyni, Olfa Mosbahi and Mohamed Khalgui

Abstract: The designs of reconfigurable embedded real-time energy harvesting multiprocessor systems are evolving towards higher energy efficiency, high performance and flexible computing. Energy management has long been a limiting factor in real-time embedded systems. A reconfiguration is defined as a dynamic operation that gives the system the capability to adjust and adapt its behavior (e.g., its scheduling policy or power consumption) or to modify its applicative functions (e.g., add, remove or update software tasks) according to the environment and the fluctuating behavior of the renewable source. This paper provides an implementation of reconfigurable multiprocessor energy harvesting systems. The objective of this work is to develop software components for the design of real-time operating systems. We propose a novel adaptive approach to address the limitations of energy harvesting systems, and we develop a reconfigurable real-time energy harvesting system based on a POSIX implementation. The proposed approach is assessed from two aspects, energy management and real-time scheduling. Experimental results show the effectiveness of the proposed approach compared with state-of-the-art techniques.

Paper Nr: 57
Title:

WebRTC Testing: State of the Art

Authors:

Boni García, Micael Gallego, Francisco Gortázar and Eduardo Jiménez

Abstract: WebRTC is the umbrella term for a number of emerging technologies that extend the web browsing model to exchange real-time media (Voice over IP, VoIP) with other browsers. Mechanisms to provide quality assurance for WebRTC are key to releasing this kind of application to production environments. Nevertheless, testing WebRTC-based applications in a consistent, automated fashion is a challenging problem. The aim of this piece of research is to provide a comprehensive summary of current trends in the domain of WebRTC testing. For the sake of completeness, we have carried out this survey by aggregating results from three different sources of information: i) scientific and academic research papers; ii) WebRTC testing tools (both commercial and open source); iii) "grey literature", that is, materials produced by organizations outside the traditional commercial or academic publishing and distribution channels.

Paper Nr: 71
Title:

Towards a Middleware and Policy-based Approach to Compliance Management for Collaborative Organizations Interactions

Authors:

Laura González and Raúl Ruggia

Abstract: Organizations increasingly need to collaborate with each other in order to achieve their business goals, which requires the integration of systems running in different organizations. Such integration is usually supported by middleware-based integration platforms that enable different styles of interactions (e.g. service-oriented, message-based) between heterogeneous and distributed systems. In addition, these collaborative and integrated environments have to satisfy compliance requirements originating from different sources (e.g. laws, agreements) that may, in particular, apply to the interactions between organizations. This paper proposes a middleware and policy-based approach to compliance management for the interactions of collaborative organizations. The approach comprises design-time mechanisms (e.g. a domain-specific language, a policy language) and runtime mechanisms (e.g. a policy enforcement point, an obligations service) which extend a middleware-based integration platform. The proposal aims to promote the maintainability, flexibility, agility and reuse of compliance solutions in these contexts by providing the means to uniformly specify compliance requirements as well as to define how these requirements are to be managed within an integration platform.

Paper Nr: 77
Title:

Analyzing and Validating Virtual Network Requests

Authors:

Jorge López, Natalia Kushik, Nina Yevtushenko and Djamal Zeghlache

Abstract: In this paper, we address platforms developed to provide and configure virtual networks according to users' requests and needs. User requests are, however, not always accurate and can contain a number of inconsistencies, so they need to be thoroughly analyzed and verified before being applied to such platforms. We consequently identify some important properties for the verification and classify them into three groups: a) functional or logic issues, b) resource allocation/dependency issues, and c) security issues. For each group, we propose an effective way to check the request's consistency. The issues of the first group are checked with the use of scalable Boolean matrix operations. The properties of the second group can be verified through the use of an appropriate system of logic implications. When checking the issues of the third group, the corresponding string analysis can be utilized. All the techniques discussed in the paper are followed by a number of illustrating examples.
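The paper's exact Boolean-matrix checks are not given in the abstract; one standard example of a scalable Boolean matrix operation of this kind is Warshall's transitive closure, sketched here on an invented three-node request graph:

```python
def transitive_closure(adj):
    """Warshall's algorithm over a Boolean adjacency matrix: closure[i][j]
    becomes True iff node j is reachable from node i. Reachability questions
    (e.g. "is every requested node connected?") reduce to lookups in this
    matrix."""
    n = len(adj)
    closure = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            if closure[i][k]:
                for j in range(n):
                    closure[i][j] = closure[i][j] or closure[k][j]
    return closure

# Hypothetical 3-node virtual network request: links 0 -> 1 -> 2, nothing back.
adj = [[False, True, False],
       [False, False, True],
       [False, False, False]]
closure = transitive_closure(adj)
print(closure[0][2], closure[2][0])  # True False
```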

Paper Nr: 79
Title:

Multi-disciplinary Optimization with Standard Co-simulation Interfaces

Authors:

Marco Inzillo and Carlos Kavka

Abstract: Numerical simulations and optimization are at the base of the design process of modern complex engineering systems. Typically, individual components are simulated by using highly specialized software tools applicable to single or narrow domains (mechanical stress, fluid dynamics, thermodynamics, acoustics, etc.) and then combined in order to build complex systems to be co-simulated and optimized. This distributed engineering development process requires that model components be developed in such a way that they can easily be interchanged between different departments of the same company, which may be geographically distributed, or even between independent companies. This position paper provides a short discussion of the currently available standards and presents work in progress concerning the definition of new standards for the interconnection of complex engineering systems and their optimization as required in modern engineering design. The paper is complemented with a few examples which provide a basis for further discussion.

Paper Nr: 87
Title:

Automatic Derivation and Validation of a Cloud Dataset for Insider Threat Detection

Authors:

Pamela Carvallo, Ana R. Cavalli and Natalia Kushik

Abstract: The malicious insider threat is often listed as one of the most dangerous cloud threats. Considering this threat, the main difference between a cloud computing scenario and a traditional IT infrastructure is that, once perpetrated, it could damage other clients due to the cloud's multi-tenancy and virtual environment features. One of the related challenges is that this threat domain is highly dependent on human behavior characteristics, as opposed to the more purely technical domains of network data generation. In this paper, we focus on the derivation and validation of a dataset for cloud-based malicious insider threats. Accordingly, we outline the design of synthetic data while discussing cloud-based indicators and socio-technical human factors. As a proof of concept, we test our model on an airline scheduling application provided by a flight operator, and propose realistic threat scenarios for its future detection. The work is motivated by the complexity of the problem itself as well as by the absence of open, realistic cloud-based datasets.

Posters
Paper Nr: 23
Title:

Analyzing Functional Changes in BPMN Models using COSMIC

Authors:

Wiem Khlif, Mariem Haoues, Asma Sellami and Hanêne Ben-Abdallah

Abstract: When performing functional requirements analysis, software developers need to understand the application domain to fulfil organizational needs. This is essential for making trade-off decisions and achieving the success of the software development project. The application domain is dealt with in the modelling phase of the business process lifecycle. Assuming that functional changes are inevitable, we propose to use the COSMIC standard to evaluate these changes and provide indicators of change status in the business domain. Expressing functional changes in terms of COSMIC Function Point units can be helpful in identifying changes with a potential impact on the business process's functional size. In addition, we propose a top-down decomposition approach to specify requirements and analyse change impact on BPMN models at different abstraction levels.

Paper Nr: 35
Title:

An Evaluation of Cloud-based Platforms for Web-Application Development

Authors:

Jens Albrecht and Kai Wadlinger

Abstract: A wide variety of service models and options is being offered by cloud solution providers, ranging from simple infrastructure to complex business applications. While the use of cloud-based infrastructure and software services has become common in many enterprises, the Platform-as-a-Service model has yet to take off. Platform providers have invested heavily in their offerings in the last years. The result is a big toolbox consisting of cloud-based components for everything that is needed to implement, deploy and run custom software applications. The developer’s expectation is that these components just have to be configured and plugged together to get scalable multi-tiered applications. In our research, we practically evaluated major cloud development platforms on the basis of the requirements of a typical web-based business application.

Paper Nr: 75
Title:

Big Data Analytics: A Preliminary Study of Open Source Platforms

Authors:

Jorge Nereu, Ana Almeida and Jorge Bernardino

Abstract: Nowadays organizations look for Big Data as an opportunity to manage and explore their data with the objective to support decisions within its different operational areas. Therefore, it is necessary to analyse several concepts about Big Data Analytics, including definitions, features, advantages and disadvantages. By investigating today's big data platforms, current industrial practices and related trends in the research world, it is possible to understand the impact of Big Data Analytics on smaller organizations. This paper analyses the following five open source platforms for Big Data Analytics: Apache Hadoop, Cloudera, Spark, Hortonworks, and HPCC.

Paper Nr: 78
Title:

The Ability of Cloud Computing Performance Benchmarks to Measure Dependability

Authors:

Eduardo Carvalho, Raul Barbosa and Jorge Bernardino

Abstract: Current benchmarks focus on evaluating performance, with little effort made to evaluate the dependability characteristics of the cloud. Cloud computing has several advantages such as scalability, elasticity and cost reduction, which has led companies to move their applications to the cloud. The availability of applications, and consequently of businesses, then depends on the cloud's efforts to keep its services running. To guarantee reliability and trust, benchmarking dependability is a challenging task because the cloud's layered model makes it difficult to find the root of faults, as higher layers depend on lower layers. By integrating dependability into benchmarks as a metric, we can evaluate how well the cloud handles itself when faults occur, prevent those faults, and check not only raw performance but also trust. In this paper, we study the following cloud benchmarks: Spec IaaS 2016, TPCx-V, YCSB, Perfkit Benchmarker, CloudBench and DS-Bench/D-Cloud, and evaluate whether they are suitable for benchmarking dependability.

Paper Nr: 85
Title:

Model for Quality of Life Evaluation of European Union Countries using Rule-based Systems

Authors:

Martin Šanda and Jiří Křupka

Abstract: This paper deals with the quality of life (QL) evaluation of European Union (EU) countries and the progress of this evaluation over the years 2007, 2011 and 2015. The QL evaluation is based on the official Eurostat methodology for QL evaluation – the QL indicators for the EU; the data presented here come from several sources within the European Statistical System (ESS). The set of indicators is organised along the areas: Material living conditions, Productive or main activity, Health, Education, Economic and physical safety, Governance and basic rights, and Natural and living environment. QL is evaluated using rule-based system methods: the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), a fuzzy TOPSIS modification with a fuzzy inference system (FIS), and the Analytic Hierarchy Process (AHP). The aim of this paper is to create a model for QL evaluation using these methods, and to compare the results of these methods and their progression. The result of the model is a final recommendation on grant allocation for the countries or regional development.
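The paper's fuzzy TOPSIS variant is not specified in the abstract; as an illustrative sketch, here is the classic (crisp) TOPSIS ranking that such variants modify, on invented scores for two alternatives:

```python
def topsis(matrix, weights, benefit):
    """Classic TOPSIS: rank alternatives by relative closeness to the ideal
    solution. matrix[i][j] scores alternative i on criterion j; benefit[j] is
    True when larger is better. Returns one closeness score in [0, 1] per
    alternative (higher is better)."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [sum(matrix[i][j] ** 2 for i in range(n_alt)) ** 0.5 for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)] for i in range(n_alt)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = sum((x - i) ** 2 for x, i in zip(row, ideal)) ** 0.5
        d_neg = sum((x - a) ** 2 for x, a in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Two hypothetical countries scored on two benefit indicators (say, health
# and education), equally weighted; the dominating one must rank first.
scores = topsis([[0.9, 0.8], [0.5, 0.4]], [0.5, 0.5], [True, True])
print(scores[0] > scores[1])  # True
```

A fuzzy variant replaces the crisp scores with fuzzy numbers (or feeds the distances into an FIS), but the normalize / weight / distance-to-ideal skeleton stays the same.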

Area 2 - Software Engineering

Full Papers
Paper Nr: 15
Title:

Using Runtime State Analysis to Decide Applicability of Dynamic Software Updates

Authors:

Oleg Šelajev and Allan Gregersen

Abstract: Updating application code while it is running is a popular approach to the dynamic software update (DSU) problem. But in many cases the behavior of the updated application bears side effects of the update, in the form of runtime phenomena that break application state assumptions and lead to unwanted complications. We present a runtime state analysis system, Genrih, that enhances a dynamic software update solution and automatically decides whether the state transformation functions of a DSU solution are sufficient for a given update. Genrih analyzes the atomic changes in the updated code compared to the already running version and, based on these changes, automatically determines whether updating the system's runtime state will lead to observable runtime phenomena. The designed system does not break the update procedure, but observes the state and produces notifications for enhanced analysis and crash management. The practical evaluation shows that the designed system imposes acceptable overhead and can help the developer be aware of several kinds of runtime phenomena.

Paper Nr: 21
Title:

Game Bot Detection in Online Role Player Game through Behavioural Features

Authors:

Mario Luca Bernardi, Marta Cimitile, Fabio Martinelli and Francesco Mercaldo

Abstract: The market for online games has grown considerably in recent years, thanks to the availability of ever more effective gaming infrastructures and the increased quality of the developed games. The diffusion of online games also increases the use of game bots to automatically perform malicious tasks and obtain rewards (with a consequent economic advantage or popularity) in the game community with low effort. This causes disappointment in the game player community and has become a critical issue for game developers. For this reason, distinguishing between game bot and human behaviour has become essential in order to detect malicious tasks and consequently increase player satisfaction. In this paper the authors propose an approach to game bot detection in online role player games based on the adoption of machine learning techniques to discriminate between users and game bots on the basis of a set of user behavioural features. The approach is applied to a real-world dataset from a popular role player game, and the obtained results are encouraging.

Paper Nr: 25
Title:

Towards Modeling the User-perceived Quality of Source Code using Static Analysis Metrics

Authors:

Valasia Dimaridou, Alexandros-Charalampos Kyprianidis, Michail Papamichail, Themistoklis Diamantopoulos and Andreas Symeonidis

Abstract: Nowadays, software has to be designed and developed as fast as possible, while maintaining quality standards. In this context, developers tend to adopt a component-based software engineering approach, reusing their own implementations and/or resorting to third-party source code. This practice is in principle cost-effective; however, it may lead to low quality software products. Thus, measuring the quality of software components is of vital importance. Several approaches that use code metrics rely on the aid of experts for defining target quality scores and deriving metric thresholds, leading to results that are highly context-dependent and subjective. In this work, we build a mechanism that employs static analysis metrics extracted from GitHub projects and defines a target quality score based on repositories’ stars and forks, which indicate their adoption/acceptance by the developers’ community. Upon removing outliers with a one-class classifier, we employ Principal Feature Analysis and examine the semantics among metrics to provide an analysis on five axes for a source code component: complexity, coupling, size, degree of inheritance, and quality of documentation. Neural networks are used to estimate the final quality score given metrics from all of these axes. Preliminary evaluation indicates that our approach can effectively estimate software quality.

Paper Nr: 34
Title:

Specification Approach using GR-TNCES: Application to an Automotive Transport System

Authors:

Oussama Khlifi, Christian Siegwart, Olfa Mosbahi, Mohamed Khalgui and Georg Frey

Abstract: Probabilistic adaptive systems are characterized above all by uncertainty and reconfigurability. The structure of a part of the system may be totally or partially unknown at a particular time. Openness is also an inherent property, as agents may join or leave the system throughout its lifetime. This poses severe challenges for state-based specification. The languages in which probabilistic reconfigurable systems are specified should be clear and intuitive, and thus accessible to generation, inspection and modification by humans. This paper introduces a new approach for specifying adaptive probabilistic discrete event systems. We introduce the semantics of GR-TNCES to optimize the specification of unpredictable timed reconfiguration scenarios running under resource constraints. We also apply this approach to specify the requirements of an automotive transport system and evaluate its benefits.

Paper Nr: 38
Title:

An Intentional Perspective on Partial Agile Adoption

Authors:

Soreangsey Kiv, Samedi Heng, Manuel Kolp and Yves Wautelet

Abstract: Nowadays, the agile paradigm is one of the most important approaches used for software development, besides structured and traditional life cycles. To facilitate its adoption and minimize the risks, different meta-models have been proposed in an attempt to unify it. Yet, very few of them have focused on one fundamental question: how to partially adopt agile methods? Intuitively, the choice of which agile practices to adopt should be based on the goals most prioritized in the software development process. To address this issue, this paper proposes a model for partial agile method adoption based on intentional (i.e., goal) perspectives. Hence, adoption can be considered as defining the goals in the model that correspond to the intentions of the software team. Next, by mapping against our goal-based model, suitable practices for adoption can easily be found. Moreover, the relationship between roles and their dependencies in achieving a specific goal can also be visualized. This will help the software team to easily identify the vulnerabilities associated with each goal and, in turn, help to minimize risks.

Paper Nr: 39
Title:

A New Approach for Traceability between UML Models

Authors:

Dhikra Kchaou, Nadia Bouassida and Hanêne Ben-Abdallah

Abstract: Software systems are inevitably subject to continuous evolution, causing model changes introduced by new or modified requirements. To maintain the consistency of the various software models from requirements to code, a means of change impact analysis and management is necessary. Such a means identifies the effects of each change on both a particular model and all related models. This paper proposes an approach that analyzes and manages the impact of changes on software requirements and design modeled in UML. The proposed approach has the advantage of dealing with both structural and semantic traceability. It uses semantic relationships and an information retrieval technique to determine the traceability between the requirements and design models. In addition, it exploits intra- and inter-UML-diagram dependencies to assist developers in identifying the necessary changes that their diagrams must undergo after each requirement change. The quantitative evaluation of our approach shows that its combination of structural and semantic traceability lets it reach a precision of 84% and a recall of 91%.

Paper Nr: 76
Title:

Employing Linked Data in Building a Trace Links Taxonomy

Authors:

Nasser Mustafa and Yvan Labiche

Abstract: Software traceability provides a means for capturing the relationships between artifacts at all phases of software and systems development. The relationships between the artifacts generated during systems development can provide valuable information for software and systems engineers. They can be used for change impact analysis and systems verification and validation, among other things. However, there is no consensus among researchers about the syntax or semantics of trace links across multiple domains. Moreover, existing trace link classifications do not consider a unified method for combining all trace link types in one taxonomy that can be utilized in Requirements Engineering, Model Driven Engineering and Systems Engineering. This paper is one step towards solving this issue. We first present requirements that a trace links taxonomy should satisfy. Second, we present a technique to build a trace links taxonomy that has well-defined semantics. We implemented the taxonomy by employing Linked Data and the Resource Description Framework (RDF). The taxonomy can be configured with traceability models using Open Services for Lifecycle Collaboration (OSLC) in order to capture traceability information among different artifacts and at different levels of granularity. In addition, the taxonomy offers reasoning as well as quantitative and qualitative analysis of trace links. We present validation criteria for the taxonomy requirements and validate the solution through an example.

Short Papers
Paper Nr: 7
Title:

Estimating the Survival Rate of Mutants

Authors:

Imen Marsit, Mohamed Nazih Omri and Ali Mili

Abstract: Mutation testing is often used to assess the quality of a test suite by analyzing its ability to distinguish between a base program and its mutants. The main threat to the validity/reliability of this assessment approach is that many mutants may be syntactically distinct from the base, yet functionally equivalent to it. The problem of identifying equivalent mutants and excluding them from consideration is the focus of much recent research. In this paper we argue that it is not necessary to identify individual equivalent mutants and count them; rather it is sufficient to estimate their number. To do so, we consider the question: what makes a program prone to produce equivalent mutants? Our answer is: redundancy does. Consequently, we introduce a number of program metrics that capture various dimensions of redundancy in a program, and show empirically that they are statistically linked to the rate of equivalent mutants.

Paper Nr: 17
Title:

A Novel Method for Improving Productivity in Software Administration and Maintenance

Authors:

Andrei Panu

Abstract: After initial development and deployment, keeping software applications and their execution environments up to date comes with some challenges. System administrators have no insight into the internals of the applications running on their infrastructure; thus, if an update is available for the interpreter or for a separately packaged library on which an application depends, they do not know whether the new release will bring changes that break parts of the application. It is up to the development team to assess the changes and to support the new version. Such tasks take time to accomplish. In this paper we propose an approach consisting of automatic analysis of the applications and automatic verification of whether the changes in a new version of a software dependency affect them. With this solution, system administrators will gain insight and will not depend on the developers of the applications in such situations, and the latter will be able to find out faster what impact the new release has on their applications.

Paper Nr: 20
Title:

Investigating Differences and Commonalities of Software Metric Tools

Authors:

Lerina Aversano, Carmine Grasso, Pasquale Grasso and Maria Tortorella

Abstract: The availability of quality models and metrics that permit an objective evaluation of the quality level of a software product is a relevant aspect for supporting software engineers during their development tasks. In addition, the adoption of software analysis tools that facilitate the measurement of software metrics and the application of the quality models can ease the evaluation tasks. This paper proposes a preliminary investigation of the behaviour of existing software metric tools. Specifically, metric values have been computed using the different software analysis tools for three software systems of different sizes. Measurements show that, for the same software system and metrics, the software analysis tools provide different values. This could impact the overall software quality evaluation for the aspects based on the selected metrics.

Paper Nr: 22
Title:

Metamodeling Approach for Hazard Management Systems

Authors:

Anca Daniela Ionita and Mariana Mocanu

Abstract: The management of natural and human-caused hazards is performed by reuniting a large variety of stakeholders, non-homogeneous collections of data, and systems that may not have been conceived for interoperability. The interdependency between hazards and the need for a coordinated response also lead to the necessity of developing multi-hazard solutions, resulting in systems with high complexity. This paper presents a metamodeling approach for hazard management systems, and a specific modeling environment, which considers the hazard, emergency, and geospatial views. The use of the model editor is exemplified on a system for early warning in case of accidental water pollution.

Paper Nr: 36
Title:

KYPO Cyber Range: Design and Use Cases

Authors:

Jan Vykopal, Radek Oslejsek, Pavel Celeda, Martin Vizvary and Daniel Tovarnak

Abstract: The physical and cyber worlds are increasingly intertwined and exposed to cyber attacks. The KYPO cyber range provides complex cyber systems and networks in a virtualized, fully controlled and monitored environment. Time-efficient and cost-effective deployment is feasible using cloud resources instead of a dedicated hardware infrastructure. This paper describes the design decisions made during its development. We prepared a set of use cases to evaluate the proposed design decisions and to demonstrate the key features of the KYPO cyber range. Cyber training sessions and exercises with hundreds of participants, in particular, provided invaluable feedback for the development of the KYPO platform.

Paper Nr: 40
Title:

Model-based Tool Support for the Development of Visual Editors - A Systematic Mapping Study

Authors:

David Granada, Juan M. Vara, Francisco Pérez Blanco and Esperanza Marcos

Abstract: Visual Domain Specific Languages play a fundamental role in the development of model-driven software. The increase in this type of visual languages and the inherent complexity as regards the development of graphical editors for them has, in recent years, led to the emergence of several tools that provide technical support for this task. Most of these tools are based on the use of models and increase the level of automation of software development, which are the basic principles of Model Driven Engineering. This paper therefore reviews the main features, potential advantages and current limitations of the main tools that exist for the development of graphical editors for visual DSLs.

Paper Nr: 61
Title:

Improving Requirements Engineering through Goal-oriented Models and Tools: Feedback from a Large Industrial Deployment

Authors:

Christophe Ponsard and Robert Darimont

Abstract: Nowadays, mastering the requirements phase is still challenging for companies of any size and often impacts the quality, schedule or cost of the delivered software system. While smaller companies may suffer from maturity, resource or tooling problems, larger companies have to cope with the larger size, complexity and cross-dependencies of their projects. This paper reports on the work carried out over the past three years to address such challenges within Huawei, a very large Chinese company active worldwide in the high-tech and telecommunication sectors, with the help of experts from the requirements engineering community. We show how goal-oriented requirements engineering (GORE) can provide a strong foundation to support the evolution of requirements engineering practices, also in connection with related processes such as business analysis, technical specification and testing. We also report on our experience in developing adequate tool support to achieve successful industrial adoption and to address team-work, scalability and toolchain integration needs. Although anchored in a specific case, most of the reported issues are shared by many companies in many domains. To further abstract away from our case, we also formulate some "Chinese wisdom" learned, identify useful strategies for successful technology transfer and point out further research challenges.

Paper Nr: 62
Title:

Towards a Mechanism for Controlling Meta-model Extensibility

Authors:

Santiago P. Jácome-Guerrero and Juan de Lara

Abstract: Model-Driven Engineering (MDE) considers the systematic use of models in software development. A model must be specified through a well-defined modeling language with precise syntax and semantics. In MDE, this syntax is defined by a meta-model. There are several scenarios that require the extension or adaptation of existing meta-models. For example, OMG standards such as KDM or DD are based on the extension of base meta-models, according to certain norms. However, these norms are not "operational": they are described in natural language and are therefore not supported by tools. Although modeling is an activity regulated by meta-models, there are no commonly accepted mechanisms to regulate how meta-models can be extended. To solve this problem, we propose a mechanism that allows establishing norms of extensibility for meta-models, as well as a tool that makes it possible to extend the meta-models according to those norms. The tool is based on EMF, implemented as an Eclipse plugin, and has been validated by guiding the extension of OMG standard meta-models such as KDM and DD.

Paper Nr: 66
Title:

An End-to-end Formal Verifier for Parallel Programs

Authors:

Soumyadip Bandyopadhyay, Santonu Sarkar and Kunal Banerjee

Abstract: Among the various models of computation (MoCs) which have been used to model parallel programs, the Petri net has been one of the most widely adopted. The traditional Petri net model has been extended into the PRES+ model, which is specially equipped to precisely represent parallel programs running on heterogeneous and embedded systems. With the inclusion of multicore and multiprocessor systems in the domain of embedded systems, it has become important to validate the optimizing and parallelizing transformations which system specifications go through before deployment. Although PRES+ model based equivalence checkers for validating such transformations already exist, construction of the PRES+ models from the original and the translated programs was carried out manually in these equivalence checkers, thereby leaving scope for inaccurate representation of the programs due to human intervention. Furthermore, the PRES+ model tends to grow more rapidly with program size than other MoCs, such as the FSMD. To alleviate these drawbacks, we propose a method for automated construction of PRES+ models from high-level language programs and use an existing translation scheme to convert PRES+ models to FSMD models, validating the transformations with a state-of-the-art FSMD equivalence checker. Thus, we have composed an end-to-end, fully automated equivalence checker for validating optimizing and parallelizing transformations, as demonstrated by our experimental results.

Paper Nr: 67
Title:

Inference-based Detection of Architectural Violations in MVC2

Authors:

Shinpei Hayashi, Fumiki Minami and Motoshi Saeki

Abstract: Utilizing software architecture patterns is important for reducing maintenance costs. However, maintaining code according to the constraints defined by the architecture patterns is time-consuming work. Herein, we propose a technique to detect code fragments that are non-compliant with the architecture as fine-grained architectural violations. The inputs to this technique are the dependence graph among code fragments extracted from the source code and the inference rules according to the architecture. A set of candidate components to which a code fragment can be affiliated is attached to each node of the graph and is updated step by step. The inference rules express the components’ responsibilities and dependency constraints; they remove candidate components of each node that do not satisfy the constraints, based on the current estimated state of the surrounding code fragments. If the resulting candidate set does not include the component a fragment actually belongs to, the fragment is detected as a violation. By defining inference rules for the MVC2 architecture and applying the technique to web applications using the Play Framework, we obtained accurate detection results.

Paper Nr: 73
Title:

Automated Unit Testing in Model-based Embedded Software Development

Authors:

Christoph Luckeneder, Hermann Kaindl and Martin Korinek

Abstract: Automating software tests is generally desirable, especially for the software of safety-critical real-time systems such as automotive control systems. For such systems, conformance with the ISO 26262 standard for functional safety of road vehicles is also absolutely necessary. These are embedded systems, however, which pose additional challenges with regard to test automation. In particular, the questions arise on which hardware platform the tests should be performed and with which workflow and tools. This is especially relevant in terms of cost, while still ensuring conformance with ISO 26262. In this paper, we present a practical approach for automated unit testing in model-based embedded software development for a safety-critical automotive application. Our approach includes both a workflow and supporting tools for performing automated unit tests. In particular, we analyze an as-is workflow and propose changes to it for reducing the cost and time needed to perform such tests. In addition, we present an improved tool chain for supporting the test workflow. In effect, unit tests can be performed both in a simulation environment and in an open-loop test environment including the embedded target hardware, without manually implementing each test case twice.

Paper Nr: 84
Title:

Designing Situation Awareness - Addressing the Needs of Medical Emergency Response

Authors:

Julia Kantorovitch, Ilkka Niskanen, Jarmo Kalaoja and Toni Staykova

Abstract: The effective support of Situation Awareness (SA) is at the core of many applications. In this paper, we report progress on research that complements existing studies with new knowledge on the engineering of SA, in particular keeping in mind the complex multi-stakeholder context of existing and future knowledge-intensive intelligent environments. A medical emergency response use case is used as an instantiation example to evaluate our engineering ideas.

Posters
Paper Nr: 19
Title:

What Techniques Can Be Used for GUI Risk-based Testing?

Authors:

Behzad Nazarbakhsh and Dietmar Pfahl

Abstract: Risk-based testing (RBT) is an approach that uses metrics to find critical parts of software applications under test. In order to understand to what extent RBT has been applied to GUI testing, and to capture the lessons learned, we conducted a literature review. Based on the selected literature, we discuss the advantages that RBT may bring to the various activities involved in testing. Moreover, we analyze the rationale for applying the different variants of RBT presented in the selected literature. Finally, we discuss the RBT techniques which can be specifically used for GUI testing.

Paper Nr: 31
Title:

The Two-Hemisphere Modelling Approach to the Composition of Cyber-Physical Systems

Authors:

Oksana Nikiforova, Nisrine El Marzouki, Konstantins Gusarovs, Hans Vangheluwe, Tomas Bures, Rima Al-Ali, Mauro Iacono, Priscill Orue Esquivel and Florin Leon

Abstract: The Two-Hemisphere Model-Driven (2HMD) approach assumes the modelling and use of procedural and conceptual knowledge on an equal and related basis. This differentiates the 2HMD approach from purely procedural, purely conceptual, and object-oriented approaches. The approach may be applied in the context of modelling a particular business domain as well as in the context of modelling knowledge about the domain. Cyber-physical systems are heterogeneous systems, which require a multi-disciplinary approach to their modelling. Modelling cyber-physical systems with the 2HMD approach gives an opportunity to transparently compose and analyse the system components to be provided and the components actually provided and, thus, to identify and fill the gaps between the desired and actual system content.

Paper Nr: 41
Title:

Aligning Requirements-driven Software Processes with IT Governance

Authors:

Vu H. A. Nguyen, Manuel Kolp, Yves Wautelet and Samedi Heng

Abstract: Requirements Engineering is closely intertwined with Information Technology (IT) Governance. Aligning IT Governance principles with requirements-driven software processes then allows proposing governance and management rules for software development that cope with stakeholders’ requirements and expectations. Typically, the goal of IT Governance in software engineering is to ensure that the results of a software organization’s business processes meet the strategic requirements of the organization. Requirements-driven software processes, such as (I-)Tropos, are development processes using high-level social-oriented models to drive the software life cycle both in terms of project management and forward engineering techniques. To consolidate both perspectives, this paper proposes a process framework called GI-Tropos, including a meta-model formalization that extends I-Tropos and aligns requirements-driven software processes with IT governance.

Paper Nr: 48
Title:

New Verification Approach for Reconfigurable Distributed Systems

Authors:

Oussama Khlifi, Olfa Mosbahi, Mohamed Khalgui and Georg Frey

Abstract: Adaptive systems are able to modify their behaviors to cope with unpredictable, significant changes at run-time, such as component failures. These systems are critical for future projects and other intelligent systems. Reconfiguration is often a major undertaking for a system: it might make its functions unavailable for some time and cause potential harm to human life or large financial investments. Thus, updating a system with a new configuration requires the assurance that the new configuration will fully satisfy the expected requirements. Formal verification has been widely used to guarantee that a system specification satisfies a set of properties. However, applying verification techniques at run-time for any potential change can be very expensive and sometimes unfeasible. In this paper, we propose a new verification approach to deal with the formal verification of these reconfiguration scenarios. New reconfigurable CTL semantics is introduced to cover the verification of reconfigurable properties. The approach consists of two verification steps: design-time and run-time verification. A railway case study is also presented.

Paper Nr: 69
Title:

Program Understanding Models: An Historical Overview and a Classification

Authors:

Eric Harth and Philippe Dugerdil

Abstract: During the last three decades, several hundred papers have been published on the broad topic of “program comprehension”. The goal has always been the same: to develop models and tools to help developers with program understanding during program maintenance. However, few authors have targeted the more fundamental question “what is program understanding?” or, in other words, proposed a model of program understanding. We therefore reviewed the proposed program understanding models. We found the papers to be classifiable into three periods of time, corresponding to the following three subtopics: the process, the tools and the goals. Interestingly, the study of the fundamental goal came after the tools. We conclude by highlighting that it is necessary to go back to the fundamental question to have any chance of developing effective tools that help with program understanding, which is the most costly part of program maintenance.

Paper Nr: 72
Title:

A Harmonization between CERTICS and CMMI-DEV Assets - A Joint Implementation of Product and Process Models in the Business Management Competence Area

Authors:

Sandro Ronaldo Bezerra Oliveira, Fabrício Wickey da Silva Garcia and Clênio Figueiredo Salviano

Abstract: This paper presents a harmonization proposal between a product quality model, CERTICS (a national Brazilian model), and a software process model, CMMI-DEV (an international model), both used in industry. The harmonization focuses on the Business Management Competence Area of CERTICS, which addresses whether “the software leverages knowledge-based business and it is driven by these business”. The results of the harmonization, which also used CMMI-SVC model assets, are examined step by step, include a review of the harmonization, and were assisted by an expert on the CERTICS and CMMI-DEV models. Thus, this paper correlates the structures of the two models to reduce implementation time and costs, and to stimulate the execution of multi-model implementations in software development.

Paper Nr: 81
Title:

Conformance Checking in Integration Testing of Time-constrained Distributed Systems based on UML Sequence Diagrams

Authors:

Bruno Lima and João Faria

Abstract: The provisioning of a growing number of services depends on the proper interoperation of multiple products, forming a new distributed system, often subject to timing requirements. To ensure the interoperability and timely behavior of this new distributed system, it is important to conduct integration tests that verify the interactions with the environment and between the system components. Integration test scenarios for that purpose may be conveniently specified by means of UML sequence diagrams (SDs) enriched with time constraints. The automation of such integration tests requires that test components are also distributed, with a local tester deployed close to each system component, coordinated by a central tester. The distributed observation of execution events, combined with the impossibility to ensure clock synchronization in a distributed system, poses special challenges for checking the conformance of the observed execution traces against the specification, possibly yielding inconclusive verdicts. Hence, in this paper we investigate decision procedures and criteria to check the conformance of observed execution traces against a specification set by a UML SD enriched with time constraints. The procedures and criteria are specified in a formal language that allows executing and validating the specification. Examples are presented to illustrate the approach.