ICSOFT 2010 Abstracts


Area 1 - Enterprise Software Technology

Short Papers
Paper Nr: 95
Title:

A VIDEO SURVEILLANCE MODEL INTEGRATION IN SMALL AND MEDIUM ENTERPRISES

Authors:

Dan Benta and Stefan Ioan Nitchi

Abstract: The rapid evolution of the Internet inevitably forces companies to adapt to new technologies. To survive in increasingly fierce competition, companies must keep pace with new trends and developments, and software solutions should be adaptable without changing the structure of the company. A common case is the classical surveillance and monitoring system that switches from analogue to digital, and from a system accessed from inside the company to a system accessed via IP. Wireless technologies are used on a large scale; they are flexible, cheap and accessible, and they require no wiring or other disturbing elements. The evolution of wireless communications has taken place in close dependence on the development of communication networks. A video surveillance and monitoring system (VSaMS) is a good tool for securing targets or premises and for monitoring the activity of a perimeter. Access to the system allows real-time monitoring, recording, and access to recordings. The choice of standard depends on the location, the geographical area, and the equipment. The aim of this paper is to highlight existing standards and solutions and to propose a model for a VSaMS. Experimental results are also presented.

Paper Nr: 98
Title:

THE INFERENCE EFFICIENCY PROBLEM IN BUSINESS AND TECHNOLOGICAL RULES MANAGEMENT SYSTEMS

Authors:

Barbara Baster and Andrzej Macioł

Abstract: In this paper we present the results of our work on the development of a rule engine for the automated interpretation of business rules. Our experience and experimental results show that knowledge descriptions for this purpose may be stored in the form of relational databases. The aim of the experimental research presented in this paper was to determine the degree to which the organization of the knowledge base and the assumed inference strategy influence the efficiency of the inference process itself. The experiments proved that, owing to the application of mechanisms characteristic of relational databases, the knowledge base can easily be arranged so as to maximize the efficiency of inference. The efficiency of inference is strongly influenced by a preliminary transformation of the knowledge from a set of examples or random rules into an arranged form.
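
The abstract does not fix a schema, so as a minimal illustration only: rules held in a relational table and evaluated by repeatedly joining them against a fact table until a fixpoint. Table layout, rule format and data are hypothetical (real BRMS rules would have compound conditions).

```python
# Hypothetical sketch: a rule base stored relationally, with forward
# chaining done as a repeated SQL join (not the authors' engine).
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE facts (attr TEXT, value TEXT, UNIQUE(attr, value));
    -- One-condition rules: IF cond_attr = cond_value THEN concl_attr = concl_value
    CREATE TABLE rules (cond_attr TEXT, cond_value TEXT,
                        concl_attr TEXT, concl_value TEXT);
""")
db.executemany("INSERT INTO rules VALUES (?,?,?,?)", [
    ("customer", "gold", "discount", "10%"),
    ("discount", "10%",  "approval", "manager"),
])
db.execute("INSERT INTO facts VALUES ('customer', 'gold')")

while True:
    # Derive conclusions whose condition matches a known fact; the UNIQUE
    # constraint plus OR IGNORE keeps known facts from re-inserting.
    cur = db.execute("""
        INSERT OR IGNORE INTO facts
        SELECT r.concl_attr, r.concl_value
        FROM rules r JOIN facts f
          ON r.cond_attr = f.attr AND r.cond_value = f.value
    """)
    if cur.rowcount == 0:   # fixpoint reached: no new facts were derived
        break

print(db.execute("SELECT * FROM facts ORDER BY attr").fetchall())
```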

Paper Nr: 102
Title:

PARALLELISM, ADAPTATION AND FLEXIBLE DATA ACCESS IN ON-DEMAND ERP SYSTEMS

Authors:

Vadym Borovskiy, Wolfgang Koch and Alexander Zeier

Abstract: On-premise enterprise resource planning (ERP) systems are costly to maintain and adapt to specific needs. To lower the cost of ERP systems, an on-demand consumption model can be employed. This requires ERP systems to support multi-tenancy and multi-threading to enable the consolidation of multiple businesses onto the same operational system. To simplify the adaptation of ERP systems to customer-specific requirements, the systems must natively support extensions, meaning that customer-specific behavior must be factored out of the system and placed into an extension module. In this paper we propose the architecture of an ERP system that i) exploits parallelism and ii) is able to accommodate custom requirements by means of enterprise composite applications. We emphasize the importance of ERP data accessibility and contribute a concept of a business object query language that allows building fine-grained queries. All suggestions made in the paper have been prototyped.
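
The paper's query language itself is not reproduced here; as a loose illustration of what fine-grained queries over business objects can look like, this Python sketch composes filters and field projections fluently. All names are hypothetical.

```python
# Hypothetical fluent query builder over in-memory business objects,
# illustrating fine-grained selection (filter + projection); this is
# not the authors' business object query language.
class Query:
    def __init__(self, objects):
        self._objects = objects
        self._filters = []
        self._fields = None

    def where(self, predicate):
        self._filters.append(predicate)
        return self                    # chaining allows fine-grained composition

    def select(self, *fields):
        self._fields = fields          # project only the fields actually needed
        return self

    def run(self):
        rows = [o for o in self._objects if all(f(o) for f in self._filters)]
        return rows if self._fields is None else [
            {k: o[k] for k in self._fields} for o in rows]

orders = [{"id": 1, "tenant": "acme", "total": 420},
          {"id": 2, "tenant": "acme", "total": 80}]
# Per-tenant filtering mirrors the multi-tenancy concern described above.
print(Query(orders).where(lambda o: o["tenant"] == "acme")
                   .where(lambda o: o["total"] > 100)
                   .select("id", "total").run())
```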

Paper Nr: 149
Title:

TOWARDS THE AUTOMATIC IDENTIFICATION OF VIOLATIONS TO THE NORMALIZED SYSTEMS DESIGN THEOREMS

Authors:

Kris Ven, David Bellens, Philip Huysmans and Dieter Van Nuffel

Abstract: Contemporary organizations are operating in increasingly volatile environments and must be able to respond quickly to their environment. Given the importance of information technology within organizations, the evolvability of information systems will to a large degree determine how quickly organizations are able to react to changes in their environment. Unfortunately, current information systems struggle to provide the required levels of evolvability. Recently, the Normalized Systems approach has been proposed to address this issue. The Normalized Systems approach is based on the systems-theoretic concept of stability to ensure the evolvability of information systems. To this end, the Normalized Systems approach proposes four design theorems that act as constraints on the modular structure of software. In this paper, we explore the feasibility of building a tool that is able to automatically identify manifestations of violations of these Normalized Systems design theorems in the source code of information systems. This would help organizations identify limitations to the evolvability of their information systems. We describe how a prototype of such a tool was developed, and illustrate how it can help to analyze the source code of an existing application.

Paper Nr: 153
Title:

FEATURE ASSEMBLY MODELLING - A New Technique for Modelling Variable Software

Authors:

Lamia Abo Zaid, Frederic Kleinermann and Olga de Troyer

Abstract: For over two decades, feature modelling techniques have been used in the software research community for domain analysis and the modelling of variable software. However, feature modelling has not found its way into industry. In this paper we present a new feature modelling technique, developed in the context of a new approach called Feature Assembly, which overcomes some of the limitations of current feature modelling techniques. We use a multi-perspective approach to deal with the complexity of large systems, we provide a simpler and easier-to-use modelling language, and, last but not least, we separate the variability specifications from the feature specifications, which allows reusing features in different contexts.

Paper Nr: 180
Title:

TOWARDS MODELING LARGE SCALE PROJECT EXECUTION MONITORING - Project Status Model

Authors:

Ciprian-Leontin Stanciu, Dacian Tudor and Vladimir-Ioan Creţu

Abstract: Software projects are problematic given their high overruns in terms of execution time and budget. In large-scale projects, the monitoring activity is a very difficult task, due to the very complex relations between resources and constraints, and must be based on a well-established methodology. The outputs of the monitoring process refer mostly to the current status of the project, which must be reflected as accurately as possible. We propose a model for project status determination. It is a sub-model of a future monitoring model that is the subject of our current research. The project status model considers not only the perspective of the project manager, which defines the macro-universe of the project, but also the perspective of every worker involved in the project, who can be seen as the manager of his or her assigned tasks, which defines the micro-universe of the worker.

Paper Nr: 221
Title:

CONTEXT-AWARE SHARING CONTROL USING HYBRID ROLES IN INTER-ENTERPRISE COLLABORATION

Authors:

Ahmad Kamran Malik and Schahram Dustdar

Abstract: In enterprise-based collaborations, humans working in dynamic overlapping teams controlled by their respective enterprises share personal and team-related context to accomplish their activities. The privacy of personal context becomes vital in this scenario. Personal context contains information that a user may not want to share, for example, her current location and current activity. We propose a role-based dynamic sharing control model that is owner-centric and extends the role-based access control model. We provide privacy of the owner's personal context by separating it from team-related context through the use of owner-defined roles. The owner has full control of her personal data and is able to dynamically change her own access rules when facing any new situation. We describe a role-based dynamic sharing control architecture that makes use of enterprise-defined roles as well as owner-defined roles to separate user context from team context. We evaluate our approach by providing a real-world scenario, a running example, and an implementation as a sharing-control messenger using Web services in Java.
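
A minimal sketch of the separation just described, with hypothetical attribute and role names (the paper's model is richer, e.g. owners can change their rules dynamically):

```python
# Hypothetical sketch: team context is granted via enterprise-defined
# roles; personal context only via owner-defined roles.
PERSONAL = {"location", "current_activity"}            # owner's private context
TEAM = {"task_status", "deadline"}                     # team-related context

enterprise_roles = {"team_member": TEAM}               # defined by the enterprise
owner_roles = {"family": {"location"}, "boss": set()}  # defined by the owner

def visible_attributes(requester_roles):
    allowed = set()
    for role in requester_roles:
        allowed |= enterprise_roles.get(role, set())
        # Owner-defined roles can expose personal attributes only.
        allowed |= owner_roles.get(role, set()) & PERSONAL
    return allowed

print(visible_attributes({"team_member"}))            # team context only
print(visible_attributes({"team_member", "family"}))  # plus owner-granted location
```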

Paper Nr: 234
Title:

ONTOLOGY BASED INTEGRATION OF TRAINING SYSTEMS - The Electrical Power Production Operators Domain

Authors:

Ricardo Molina-González, Guillermo Rodríguez-Ortíz, Víctor-Hugo León-Sagahón and Jaime-Israel Paredes-Rivera

Abstract: An ontology-based approach to loosely integrate independent training management systems is presented. The three systems are: the traditional training management system, the labour skills management system, and the talent and innovation management system. The method first represents the data of each of the three independent systems using a simplified ontology structure; then the integration relationships among the systems are specified and implemented.

Paper Nr: 237
Title:

INFORMATION MICROSYSTEMS

Authors:

Jordi Pradel, Jose Raya and Xavier Franch

Abstract: Given their need to manage the information they have under control, organizations usually choose between two types of widely used IT solutions: 1) information systems based on databases (DBIS), which are powerful but expensive to develop and not very flexible; 2) spreadsheets, which threaten the integrity of the data and are limited in exploiting it. In this paper we propose a new type of IT solution, namely information microsystems (MicroIS), that aims at reconciling the best of these two worlds: the low development and maintenance costs, ease of use and flexibility of spreadsheets, with the structure, semantics and integrity of DBIS. The goal is not to replace either of the above two paradigms but to lie somewhere in between, depending on the changing needs of the organization. Of the various possible points of interest of this IT solution, the article focuses specifically on issues related to data management, introducing the conceptual model of MicroIS, the transformations and validations that can be performed around it, and the way in which the structure of the information is inferred from the data that users provide.

Paper Nr: 253
Title:

A METAMODEL INTEGRATION FOR METRICS AND PROCESSES CORRELATION

Authors:

Xabier Larrucea and Eider Iturbe

Abstract: Nowadays organizations need to improve their efficiency, mainly due to the current economic situation. Several organizations are involved in process improvement initiatives in order to become more competitive in the market. These initiatives require process definition and performance measurement activities. This paper briefly describes a metamodel integration between a metrics metamodel and software and business execution metamodels to support this kind of improvement initiative. In fact, this integration implies coherently managing the Software Metrics Metamodel for metrics, the Software Process Engineering Metamodel 2.0 for defining processes, and the jBPM Process Definition Language for executing processes. This approach is supported by a prototype.

Paper Nr: 269
Title:

CONSTRUCTING AND EVALUATING SUPPLY-CHAIN SYSTEMS IN A CLOUD-CONNECTED ENTERPRISE

Authors:

Donald F. Ferguson and Ethan Hadar

Abstract: An enterprise that obtains its IT services from the cloud, and optionally provides some of its services to its customers via the cloud, is what we define as a cloud-connected enterprise (CCE). Consumption from the cloud and provisioning to the cloud of IT services define an IT supply-chain environment. Considering conceptually similar offerings from different vendors is economically attractive, as specialization in services increases the quality and cost-effectiveness of the service. The overall value of a service is composed of characteristics that may be summarized as QARCC: Quality, Agility, Risk, Capability and Cost. Tradeoffs between implementing services internally and consuming services externally may depend on these characteristics and their sub-characteristics. Regardless of the origin of the services or sub-services, we propose that the construction or consumption of the solution should follow a dedicated cloud-oriented lifecycle for managing such services. The proposed incremental and iterative process fosters an agile approach of refactoring and optimization. It is based on the assumption that services change their QARCC characteristics over time due to emerging opportunities for replacing sub-components. It is designed to operate in internal clouds as well as external and hybrid ones.

Posters
Paper Nr: 89
Title:

MODELLING COLLABORATIVE SERVICES - The COSEMO Model

Authors:

Thanh Thoa Pham Thi, Thang Le Dinh, Markus Helfert and Michel Leonard

Abstract: Despite the dominance of the service sector in recent decades, there is still a need for a strong foundation for service design and innovation. Little attention has been paid to service modelling, particularly in the collaboration context. Collaboration is considered one of the solutions for surviving or sustaining a business in a highly competitive environment. Collaborative services require various service providers working together according to agreements between them, along with service consumers, in order to co-produce services. In this paper, we address crucial issues in collaborative services such as collaboration levels and the sharing of data and processes arising from business interdependencies between service stakeholders. Subsequently, we propose a model for collaborative service modelling, the COSEMO model, which is able to cover the identified issues. We also apply our proposed model to a travel services example in order to illustrate the relevance of our modelling approach to the matter at hand.

Paper Nr: 99
Title:

APPLICATION OF RULES ENGINES IN TECHNOLOGY MANAGEMENT

Authors:

Barbara Baster, Andrzej Macioł and Bogdan Rębiasz

Abstract: In this paper we present the initial results of our research aimed at developing a tool that benefits from the virtues of BRMS (business rules management systems) and supports technological decisions. Our task focused on preparing a set of use cases along with a precise description of the rules used for solving specific decision problems. For this purpose, two decision problems were analysed, covering issues such as the selection of feedstock and executive production planning. These problems were analysed in the context of a company producing cold-rolled strips in a wide dimension range and in diversified grades of steel. The general conclusion, which answers the question of whether it is possible to create a tool similar to a BRE (business rules engine) but capable of supporting technological decisions, is that it is necessary to combine two forms of knowledge representation: declarative and procedural. It is also necessary to ensure that this type of instrument can communicate with external data sources as well as with various types of IT tools supporting specific technological decisions.

Paper Nr: 205
Title:

COMMON SERVICES FRAMEWORK - An Application Development Framework

Authors:

Jeanette Bruno, Michael Kinstrey and Louis Hoebel

Abstract: The Common Services Framework (CSF) was developed by GE's Global Research Center (GRC) as a design pattern and framework for application development. The CSF comprises a set of service-oriented APIs and components that implement the design pattern. GE GRC supports a wide diversity of R&D for GE and external customers, so the motivation was a reusable, extensible, domain- and implementation-agnostic framework that could be applied across various research projects and production applications. The CSF has been developed for use in finance, diagnostics, logistics and healthcare. The design pattern is an extension of the Model-View-Controller pattern, and the reference implementation is in Java.

Paper Nr: 265
Title:

TOWARDS THE INTEGRATION OF BIOINFORMATICS DATA AND SERVICES USING SOA AND MASHUPS

Authors:

Elarbi Badidi and Larbi Esmahi

Abstract: Worldwide biological research activities are generating publicly available biological data at a phenomenal pace. The data are usually stored in different formats (FASTA, GenBank, EMBL, XML, etc.). Therefore, retrieving, analyzing, parsing, and integrating these heterogeneous data require substantial programming expertise and effort that most scientists do not have. Bioinformaticians have considered several approaches to integrating heterogeneous data and software applications, most of which require significant computer skills. Recently, a new technology, called mashups, has emerged to simplify this integration. In this paper, we discuss widely used approaches for integrating data and applications in bioinformatics and our ongoing effort to use mashups in conjunction with Service-Oriented Architecture (SOA) for integrating data and applications in the life sciences.

Area 2 - Software Engineering

Full Papers
Paper Nr: 17
Title:

WORK PRODUCT-DRIVEN SOFTWARE DEVELOPMENT METHODOLOGY IMPROVEMENT

Authors:

Paul Bogg, Graham Low, Brian Henderson-Sellers and Ghassan Beydoun

Abstract: A work product is a tangible artifact used during a software development project; for example, a requirements specification or a class model diagram. Towards a general approach for evaluating and potentially improving the quality of methodologies, this paper proposes a work product-based approach to method construction, known as the "work product pool" approach to situational method engineering, to accomplish this quality improvement. Starting from the final software application and identifying work product prerequisites by working backwards through the methodology process, work product inter-dependencies are revealed. Using method fragments from a specific methodology (here, MOBMAS), we use this backward-chaining approach to effectively recreate that methodology. Evaluation of the artificially recreated methodology allows the identification of missing and/or extraneous method elements and of process steps that could be improved.

Paper Nr: 29
Title:

A REQUIREMENTS METAMODEL FOR RICH INTERNET APPLICATIONS

Authors:

Esteban Robles Luna, María José Escalona and Gustavo Rossi

Abstract: The evolution of the Web has motivated the development of several Web design approaches to support the systematic building of Web software. Together with the constant technological advances, these methods must be continually improved to deal with a myriad of new feasible application features. In this paper we focus on the field of Rich Internet Applications (RIA); specifically, we aim to offer a solution for the treatment of Web requirements in RIA development. To this end we present WebRE+, a requirements metamodel that incorporates RIA features into the modelling repertoire. We illustrate our ideas with a meaningful example of a business intelligence application.

Paper Nr: 40
Title:

HOW DEVELOPERS TEST THEIR OPEN SOURCE SOFTWARE PRODUCTS - A Survey of Well-known OSS Projects

Authors:

Davide Tosi and Abbas Tahir

Abstract: Open Source Software (OSS) projects do not usually follow the traditional software engineering development paradigms found in textbooks, and this influences the way OSS developers test their products. In this paper, we explore a set of 33 well-known OSS projects to identify how software quality assurance is performed under the OSS model. The survey investigates the main characteristics of the projects and common testing issues to understand whether a correlation exists between the complexity of a project and the quality of its testing activity. We compare the results obtained in our survey with the data collected in a previous survey by L. Zhao and S. Elbaum. Our results confirm that OSS is usually not sufficiently validated and therefore its quality is not sufficiently demonstrated. To reverse this negative trend, the paper suggests the use of a testing framework that can support most of the phases of a well-planned testing activity, and describes the use of Aspect-Oriented Programming (AOP) to expose dynamic quality attributes of OSS projects.

Paper Nr: 93
Title:

USING AToM3 FOR THE VERIFICATION OF WORKFLOW APPLICATIONS

Authors:

Leila Jemni Ben Ayed, Ahlem Ben Younes and Amin Ben Brahim Achouri

Abstract: In this paper, we propose an approach for the verification of workflow applications using AToM3 and Event B. Workflow applications involve many actors that take part in and cooperate on the execution of operations. When composing those operations, problems such as deadlock and livelock might appear. In this context, we show how to build a meta-model for the UML activity diagram in AToM3. From this meta-model, AToM3 generates a visual tool to build and specify workflow applications, where syntactical verification is performed. We then define a graph grammar to generate textual code from the graphically specified workflow. This code maintains information about all the activities and their dependencies. Another role of the graph grammar is to generate an Event B machine used for the verification of the workflow. Structural errors such as deadlock and absence of synchronization can be detected in the resulting Event B model, and functional requirements are also verified using it.

Paper Nr: 125
Title:

A PROGRAMMING LANGUAGE TO FACILITATE THE TRANSITION FROM RAPID PROTOTYPING TO EFFICIENT SOFTWARE PRODUCTION

Authors:

Francisco Ortin, Daniel Zapico and Miguel Garcia

Abstract: Dynamic languages are becoming increasingly popular for developing different kinds of applications, rapid prototyping being one of the scenarios where they are widely used. The dynamism offered by dynamic languages is, however, counteracted by two main limitations: no early type error detection and fewer opportunities for compiler optimizations. To obtain the benefits of both dynamically and statically typed languages, we have designed the StaDyn programming language to provide both approaches. Our language implementation keeps gathering type information at compile time, even when dynamic references are used. This type information is used to offer compile-time type error detection, direct interoperation between static and dynamic code, and better runtime performance. Following the Separation of Concerns principle, dynamically typed references can easily be turned into statically typed ones without changing the application source code, facilitating the transition from rapid prototyping to efficient software production. This paper describes the key techniques used in the implementation of StaDyn to obtain these benefits.
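
StaDyn itself extends C# and its mechanism is not reproduced here; as a loose analogue of hardening dynamic references into static ones without rewriting program logic, consider gradual typing in Python, where only annotations change and a static checker such as mypy starts reporting type errors before execution:

```python
# Loose analogue only (StaDyn is C#-based): the function body is
# unchanged; only the reference types tighten.

def total_dynamic(prices):                         # prototyping: fully dynamic
    return sum(prices) * 1.2

def total_static(prices: list[float]) -> float:    # hardened: same body
    return sum(prices) * 1.2

print(total_dynamic([10, 20]))     # 36.0
print(total_static([10.0, 20.0]))  # 36.0
# total_static(["10"])  # a static checker (mypy) flags this before running;
#                       # the dynamic version would fail only at runtime
```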

Paper Nr: 140
Title:

APPLICATION OF SERVICE-ORIENTED COMPUTING AND MODEL-DRIVEN DEVELOPMENT PARADIGMS TO BUSINESS PROCESSES - A Systematic Review

Authors:

Andrea Delgado, Francisco Ruiz, Ignacio García-Rodríguez de Guzmán and Mario Piattini

Abstract: To achieve the defined value for their businesses, current organizations need to manage their business processes in an integrated manner, interconnecting the software systems that support these processes. Over the last few years, new paradigms have appeared in response to this and other organizational and software needs: Business Process Management (BPM) and Service-Oriented Computing (SOC), which are closely interconnected. Additionally, the Model-Driven Development (MDD) paradigm has been called upon to play an important role in supporting business process implementation by software services. BPM handles the management of business processes, including their modelling, deployment, execution, analysis and improvement. Service-Oriented Computing bases software development on services, which correspond to business concepts and are created in order to perform business processes. Model-Driven Development promotes software development based on models, which enable, among other things, transformations and the automatic generation of code for different platforms. With the aim of establishing the basis for research into the integration of these paradigms to support business process management in organizations, a systematic review was carried out, focusing on the current state of the literature concerning the application of the service-oriented and model-driven paradigms to business processes.

Paper Nr: 163
Title:

A MODEL-BASED NARRATIVE USE CASE SIMULATION ENVIRONMENT

Authors:

Veit Hoffmann and Horst Lichter

Abstract: Since their introduction, use cases have been one of the most widely used techniques for specifying functional requirements. Because low-quality use cases often cause serious problems in later phases of the development process, the simulation of use cases may be an important technique for assuring the quality of use case descriptions. In this paper we present a model-based simulation environment for narrative use cases. We first motivate the core requirements of a simulation environment and an underlying execution model. We then describe our model-based simulation approach and present some first experiences.

Paper Nr: 181
Title:

TRACE TRANSFORMATION REUSE TO GUIDE CO-EVOLUTION OF MODELS

Authors:

Bastien Amar, Hervé Leblanc, Bernard Coulette and Phillipe Dhaussy

Abstract: With the advent of languages and tools dedicated to model-driven engineering (e.g., ATL, Kermeta, EMF), as well as reference metamodels (MOF, Ecore), model-driven development processes can be used more easily. These processes are based on a large range of closely inter-related models and transformations covering the whole software development lifecycle. When a model is transformed, designers must re-implement the transformation choices for all the related models, which raises consistency problems. To prevent this, we propose trace transformation reuse to guide the co-evolution of models. The contribution of this paper is a conceptual framework in which repercussion transformations can easily be deployed. The maturity of a software engineering technology should be evaluated by its use of traceability practices.

Paper Nr: 186
Title:

CONSTRAINT REASONING IN FOCALTEST

Authors:

Matthieu Carlier, Catherine Dubois and Arnaud Gotlieb

Abstract: Property-based testing implies selecting test data satisfying coverage criteria on user-specified properties. However, current automatic test data generation techniques adopt direct generate-and-test approaches for this task. In FocalTest, a testing tool designed to generate test data for programs and properties written in the functional language Focal, test data are generated at random and rejected when they do not satisfy the selected coverage criteria. In this paper, we improve FocalTest with a constraint-based generation approach, through the use of constraint reasoning. A particular difficulty is the generation of test data satisfying MC/DC on the precondition of a property when it contains function calls with pattern matching and higher-order functions. Our experimental results show that a non-naive implementation of constraint reasoning on these constructions outperforms traditional generation techniques when used to find test data for testing properties.

Paper Nr: 188
Title:

TOWARDS A HACKER ATTACK REPRESENTATION METHOD

Authors:

Peter Karpati, Guttorm Sindre and Andreas L. Opdahl

Abstract: Security must be addressed at an early stage of information systems development, and one must learn from previous hacker attacks to avoid similar exploits in the future. Many security threats are hard to understand for stakeholders with a less technical background. To address this issue, we present a five-step method that represents hacker intrusions diagrammatically. It lifts specific intrusions to a more general level of modelling and distils them into threats that should be avoided by a new or modified IS design. It allows involving different stakeholder groups in the process, including non-technical people who prefer simple, informal representations. For this purpose, the method combines five different representation techniques that together provide an integrated view of security attacks and system architecture. The method is illustrated with a real intrusion from the literature, and its representation techniques are tied together as a set of extensions of the UML metamodel.

Paper Nr: 229
Title:

MODEL-DRIVEN DEPLOYMENT OF DISTRIBUTED COMPONENTS-BASED SOFTWARE

Authors:

Mariam Dibo and Noureddine Belkhatir

Abstract: Software deployment encompasses all post-development activities that make an application operational. The development of component-based systems has made it possible to better highlight this piece of the global software lifecycle, as illustrated by numerous industrial and academic studies. However, these deployment systems are generally developed ad hoc and are, consequently, platform-dependent. Deployment systems supported by middleware environments (CCM, .NET and EJB) develop mechanisms and tools tied to pre-specified deployment strategies. For distributed component-based software applications, our goal is to define what a unified meta-modeling architecture for deployment could be. To illustrate the feasibility of the approach, we introduce a tool called UDeploy (Unified Deployment architecture) which manages, firstly, the planning process from meta-information related to the application, the infrastructure and the deployment strategies; secondly, the generation of specific deployment descriptors related to the application and the environment (i.e. the machines connected to a network where a software system is deployed); and finally, the execution of a plan produced by means of the deployment strategies used to elaborate a deployment plan.

Short Papers
Paper Nr: 15
Title:

TOWARDS A META-MODEL FOR WEB SERVICES’ PREFERENCES

Authors:

Zakaria Maamar, Ghazi Al-Khatib, Said Elnaffar and Youcef Baghdadi

Abstract: This paper presents a meta-model for describing the preferences of Web services. Two types of preferences are examined, namely privacy and membership. Privacy restricts the data that Web services exchange, and membership restricts the peers that Web services interact with. Both types have lately risen in importance in response to the open and dynamic nature of the Internet. While most research work on Web services has been driven by the concerns of users, this paper stresses the concerns of the providers of Web services. Different meta-classes, such as Web service, functionality, and WSDL, are included in the meta-model for Web services' preferences. To guarantee the satisfaction of these preferences, policies are developed in this paper.

Paper Nr: 38
Title:

X-FEE - An Extensible Framework for Providing Feedback in the Internet of Services

Authors:

Anja Strunk

Abstract: In the Internet of Services, there is a great demand for feedback about services and their attributes. For example, on service marketplaces, feedback is used by service discovery to help a service user find the right service, and at runtime feedback is employed to detect and compensate for errors. Thus, the research community has suggested a large number of techniques for making feedback available. However, there is a lack of adequate feedback frameworks with which to implement these techniques. In this paper we present the feedback framework X-Fee, which is highly extensible, flexible and interoperable, making it easy to realize feedback components and integrate them into arbitrary infrastructures in the Internet of Services.

Paper Nr: 48
Title:

WHAT IS WRONG WITH AOP?

Authors:

Adam Przybylek

Abstract: Modularity is a key concept that programmers wield in their struggle against the complexity of software systems. The implementation of crosscutting concerns in a traditional programming language (e.g. C, C#, Java) results in software that is difficult to maintain and reuse. Although modules have taken many forms over the years, from functions and procedures to classes, no form has been capable of expressing a crosscutting concern in a modular way. The latest decomposition unit intended to overcome this problem is the aspect, promoted by aspect-oriented programming (AOP). The aim of this paper is to review AOP within the context of software modularity.

Paper Nr: 62
Title:

DESIGN PATTERNS WITH ASPECTJ, GENERICS, AND REFLECTIVE PROGRAMMING

Authors:

Adam Przybylek

Abstract: Over the past decade, there has been a lot of interest in aspect-oriented programming (AOP). Hannemann and Kiczales developed AspectJ implementations of the Gang-of-Four (GoF) design patterns. Their study was continued by Hachani, Bardou, Borella, and others. However, no one has tried to improve these implementations by using generics or reflective programming. This research addresses that issue. As a result, highly reusable implementations of Decorator, Proxy, and Prototype are presented.

Paper Nr: 82
Title:

QuEF - An Environment for Quality Evaluation on Model-driven Web Engineering Approaches

Authors:

F. J. Domínguez-Mayo, M. Mejías, M. J. Escalona and A. H. Torres

Abstract: Due to the high number and wide variety of methodologies that currently exist in the field of Model-Driven Web Engineering (MDWE), it has become necessary to evaluate the quality of the existing methodologies to provide helpful information for developers. Since proposals are constantly appearing, the need may arise not only to evaluate quality but also to find out how it can be improved. This article presents work being carried out in this field and describes the tasks involved in defining QuEF (Quality Evaluation Framework), an environment for evaluating, using objective measures, the quality of Model-Driven Web Engineering methodologies.

Paper Nr: 96
Title:

COEVOLUTIVE META-EXECUTION SUPPORT - Towards a Design and Execution Continuum

Authors:

Gilles Dodinet, Michel Zam and Geneviève Jomier

Abstract: Despite its promises, the lack of support for the consistent coevolution of models with their meta-models and instances prevents a broader adoption of MDE. This article presents coevolution support for reflective meta-models and their instances, tightly integrated into an execution platform. The platform allows stakeholders, developers and end users to define, update and run models and their instances concurrently. Design changes are reflected immediately in the running applications hosted by the platform. Both instances and models are stored in a shared multi-version database that provides persistence, consistency and traceability support. A web-based implementation of the platform validates the approach and lays the foundations for a collaborative integrated development environment that evolves continuously.

Paper Nr: 128
Title:

WEB SERVICES FOR HIGHER INTEGRITY INSTRUMENT CONTROL

Authors:

Phillip R. Huffman and Susan A. Mengel

Abstract: This paper relates experience in using a modified lifecycle development process, proposed herein, that applies integrity planning to web services as reusable software components in order to enhance the web services' reliability, safety, and security in an instrument control environment. Using the integrity-enhanced lifecycle, a test bed instrument control system is developed using .NET web services. A commercial web service is also included in the test bed system for comparison. Both systems are monitored over a one-year period and failure data are collected. For a further comparison, a similar instrument control system is developed to a high quality pedigree but without the focus on integrity and reusable components. Most of the instrumentation is the same between the two systems; however, the comparative system uses a more traditional approach with a single, integrated software control package. As with the test bed system, this comparative system is monitored over a one-year period. The data for the two systems are compared, and the results demonstrate a significant increase in integrity for the web service-based test bed system: its failure rate is approximately 1 in 8100, compared to 1 in 1600 for the comparison system.

Paper Nr: 144
Title:

SLR-TOOL - A Tool for Performing Systematic Literature Reviews

Authors:

Ana M. Fernández-Sáez, Marcela Genero Bocco and Francisco P. Romero

Abstract: Systematic literature reviews (SLRs) have been gaining a significant amount of attention from software engineering researchers since 2004. SLRs are considered a new research methodology in software engineering which allows evidence to be gathered on the usefulness or effectiveness of technology proposed in software engineering for the development and maintenance of software products. This is demonstrated by the growing number of publications related to SLRs that have appeared in recent years. While some tools exist that can support some or all of the activities of the SLR process defined in (Kitchenham & Charters, 2007), they are not free. The objective of this paper is to present SLR-Tool, a free tool available at http://alarcosj.esi.uclm.es/SLRTool/, to be used by researchers from any discipline, not only software engineering. SLR-Tool not only supports the process of performing SLRs proposed in (Kitchenham & Charters, 2007), but also provides additional functionality such as: refining searches within the documents by applying text mining techniques; defining a classification schema to facilitate data synthesis; exporting the results obtained to tables and charts; and exporting the references of the primary studies to the formats used in bibliographic packages such as EndNote, BibTeX or RIS. To date, the tool has been used by members of the Alarcos Research Group and PhD students, whose perception is that it is both highly necessary and useful. Our purpose now is to promote the use of SLR-Tool throughout the entire research community in order to obtain feedback from other users.

Paper Nr: 159
Title:

FRAMEWORK AS SOFTWARE SERVICE (FASS) - An Agile e-Toolkit to Support Agile Method Tailoring

Authors:

Asif Qumer and Brian Henderson-Sellers

Abstract: In a real software application development environment, a pre-defined or fixed methodology, whether plan-based or agile, is unlikely to be successfully adopted "off-the-shelf". The agile methods community has recognised that a method should be tailored to each situation. The purpose of this paper is to present an agile e-toolkit software service to facilitate the tailoring of agile processes in the overall context of agile method adoption and improvement. The agile e-toolkit is a web-based tool for storing and managing agile practices extracted from various agile methods and frameworks. The core component of the e-toolkit is the agile knowledge base or repository, which contains agile process fragments. Agile consultants or teams can then use these fragments to tailor situation-specific agile processes using a situational method engineering approach. The e-toolkit software service has been implemented using a service-oriented cloud computing technology platform (Software as a Service, SaaS). The e-toolkit specifications and software application details are summarized in this paper.

Paper Nr: 162
Title:

MODEL CHECKING IS REFINEMENT - From Computation Tree Logic to Failure Trace Testing

Authors:

Stefan D. Bruda and Zhiyu Zhang

Abstract: Two major systems of formal conformance testing are model checking and algebraic model-based testing. Model checking is based on some form of temporal logic. One powerful and realistic logic being used is computation tree logic (CTL), which is capable of expressing most interesting properties of processes such as liveness and safety. Model-based testing is based on some operational semantics of processes (such as traces, failures, or both) and associated preorders. The most fine-grained preorder beside bisimulation (mostly of theoretical importance) is based on failure traces. We show that these two powerful variants are equivalent, in the sense that for any CTL formula there exists a set of failure trace tests that are equivalent to it. Combined with previous results, this shows that CTL and failure trace tests are equivalent.
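
Stated compactly (our paraphrase of the claimed correspondence, not the paper's notation):

```latex
% For every CTL formula there is an equivalent set of failure trace tests:
\forall \varphi \in \mathrm{CTL}\;\; \exists\, T_{\varphi}
  \text{ (a set of failure trace tests)} :\quad
\forall p :\; p \models \varphi \iff p \text{ passes every test in } T_{\varphi}.
```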

Paper Nr: 172
Title:

AUTOMATIC GENERATION OF DATA MERGING PROGRAM CODES

Authors:

Hyeonsook Kim, Samia Oussena, Ying Zhang and Tony Clark

Abstract: Data merging is an essential part of the ETL (Extract-Transform-Load) processes used to build a data warehouse system. To avoid reinventing merging techniques, we propose a Data Merging Meta-model (DMM) and its transformation into executable program code in the manner of model-driven engineering. DMM allows the relationships of different model entities and their merging types to be defined at the conceptual level, and our formalized transformation, described using ATL (ATLAS Transformation Language), enables the automatic generation of PL/SQL packages that execute data merging in commercial ETL tools. With this approach, data warehouse engineers can be relieved of the burden of repetitive, complex script coding and the pain of maintaining consistency between design and implementation.
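
Neither the DMM nor the ATL rules are reproduced here; as a toy illustration of the generation step alone, this sketch turns a small, hypothetical merge specification into a PL/SQL MERGE statement:

```python
# Hypothetical: a conceptual merge spec (target, source, key, columns)
# rendered as a PL/SQL MERGE; the real pipeline derives this from the
# DMM via ATL.
spec = {
    "target": "dw_customer",
    "source": "stg_customer",
    "key": ["customer_id"],
    "columns": ["name", "email", "segment"],
}

def generate_merge(s):
    on = " AND ".join(f"t.{k} = s.{k}" for k in s["key"])
    sets = ", ".join(f"t.{c} = s.{c}" for c in s["columns"])
    cols = ", ".join(s["key"] + s["columns"])
    vals = ", ".join(f"s.{c}" for c in s["key"] + s["columns"])
    return (f"MERGE INTO {s['target']} t\n"
            f"USING {s['source']} s\n"
            f"ON ({on})\n"
            f"WHEN MATCHED THEN UPDATE SET {sets}\n"
            f"WHEN NOT MATCHED THEN INSERT ({cols}) VALUES ({vals});")

print(generate_merge(spec))
```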

Paper Nr: 174
Title:

TOWARDS AN INTEGRATED SUPPORT FOR TRACEABILITY OF QUALITY REQUIREMENTS USING SOFTWARE SPECTRUM ANALYSIS

Authors:

Haruhiko Kaiya, Kasuhisa Amemiya, Yuutarou Shimizu and Kenji Kaijiri

Abstract: In actual software development, software engineering artifacts such as requirements documents, design diagrams and source code can be updated and changed separately and simultaneously, yet they should remain consistent with each other. However, maintaining such consistency is a difficult problem, especially for software quality features such as usability, reliability, efficiency and so on. Managing traceability among such artifacts is one solution, and several types of traceability techniques have already been proposed; however, there is no silver bullet for solving the problem. In this paper, we categorize current techniques for managing traceability into three types: traceability links, central model and projection traceability. We then discuss how these types of techniques cope with managing traceability for software quality features. Because projection traceability seems suitable for quality features and there are few implementations of it, we implement a method based on projection traceability using spectrum analysis for software quality. We also apply the method to an example to confirm the usefulness of projection traceability alongside traceability links and the central model.

Paper Nr: 183
Title:

A TOOL FOR USER-GUIDED DATABASE APPLICATION DEVELOPMENT - Automatic Design of XML Models using CBD

Authors:

Carlos Rossi, Antonio Guevara, Manuel Enciso, José Luis Caro, Angel Mora and Pablo Cordero

Abstract: Beyond the database normalization process, much work has been done on the use of functional dependencies (FDs): their discovery using mining techniques, their use in query optimization, the design of algorithms dealing with the implication problem, etc. Nevertheless, although much research expounds the benefits of using functional dependencies, only a few modeling tools actually use them. In this work we present CBD, a new software development tool which allows end users to specify their requirements. CBD allows the user to design his/her own GUI for the application using forms and interface elements, and it builds a meta-data dictionary with information on functional dependencies. This data dictionary is then used to generate the unified data model and a behavior model.
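
For readers unfamiliar with the implication problem mentioned above: in practice it is decided with the standard attribute-closure algorithm, sketched here with hypothetical attributes (F implies X → Y iff Y ⊆ closure(X, F)):

```python
# Standard attribute-closure algorithm; fds is a list of (lhs, rhs) pairs
# of attribute sets. Attribute names are hypothetical.
def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs          # the FD fires and adds attributes
                changed = True
    return result

fds = [({"form_id"}, {"customer"}), ({"customer"}, {"segment"})]
print(closure({"form_id"}, fds))       # {'form_id', 'customer', 'segment'}
```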

Paper Nr: 185
Title:

EXTENDING UML TO REPRESENT INTERACTION ROLES AND VARIANTS OF DESIGN PATTERN

Authors:

Keen Ngee Loo and Sai Peck Lee

Abstract: A design pattern may admit various descriptions, structures and behaviours for the solution to a design problem. However, there is little visual aid for the internal workings of a design pattern in visual design modeling tools. Currently, it is difficult to determine the pattern roles and variants of the interaction groups of a design pattern, as this information is not represented in the UML interaction diagram. There is a need for a consistent way to define the pattern roles participating in a design pattern interaction and whether there is a variant in each interaction group. This paper proposes extending the UML sequence diagram via a UML profile to allow designers to define and visualise the pattern roles and the different types of interaction groups of a design pattern. The proposed extensions are able to capture the two kinds of design pattern interaction variants in sequence diagrams. The approach is then applied to the Observer design pattern as an example. The extension enables tool support for cataloguing and retrieving design patterns' structural and behavioural information, as well as variants, in a visual design modeling tool.

Paper Nr: 189
Title:

UNIFYING SOFTWARE AND DATA REVERSE ENGINEERING - A Pattern based Approach

Authors:

Francesca Arcelli, Gianluigi Viscusi and Marco Zanoni

Abstract: At the state of the art, object-oriented applications use data structured in relational databases by exploiting patterns such as Domain Model and Data Mapper. These approaches aim to represent data in the OO way, using objects to represent data entities. Furthermore, we point out that the identification of these patterns can reveal the link between the object model and the conceptual entities, exploiting their associations with the physical data objects. The aim of this paper is to present a unified perspective for the definition of an integrated approach to software and data reverse engineering. The discussion is carried out by means of a sample application and a comparison with results from current tools.
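
For orientation, the Data Mapper pattern named above (in Fowler's sense) keeps domain objects free of persistence code; it is exactly this object-to-table link that the unified reverse engineering approach tries to recover. A minimal sketch with hypothetical names:

```python
# Minimal Data Mapper sketch: the domain object knows nothing about SQL;
# all persistence logic lives in the mapper.
import sqlite3

class Person:                          # pure domain object
    def __init__(self, pid, name):
        self.pid, self.name = pid, name

class PersonMapper:                    # all SQL lives here
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS person "
                     "(pid INTEGER PRIMARY KEY, name TEXT)")

    def insert(self, person):
        self.conn.execute("INSERT INTO person VALUES (?, ?)",
                          (person.pid, person.name))

    def find(self, pid):
        row = self.conn.execute("SELECT pid, name FROM person WHERE pid = ?",
                                (pid,)).fetchone()
        return Person(*row) if row else None

conn = sqlite3.connect(":memory:")
mapper = PersonMapper(conn)
mapper.insert(Person(1, "Ada"))
print(mapper.find(1).name)             # Ada
```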

Paper Nr: 193
Title:

COGNITIVE INFLUENCES IN PRIORITIZING SOFTWARE REQUIREMENTS

Authors:

Nadina Martínez Carod and Alejandra Cechich

Abstract: In software development, the elicitation process, and particularly the acquisition of software requirements, is a critical success factor. Elicitation is about learning the needs of users and communicating those needs to system builders. Prioritizing requirements involves negotiation as an important issue, which becomes extremely difficult, as clients often do not know exactly what they need. To overcome this situation, and aiming at improving stakeholders' negotiation, we propose reducing the gap of misunderstanding between them through the use of cognitive science. In particular, we suggest using cognitive styles to characterize people by the way they process information. In this paper, we introduce a case study showing that cognitive profiles may affect requirement understanding and prioritization. Our controlled experiment shows that considering cognitive profiles when performing elicitation might increase stakeholders' satisfaction and prioritization accuracy.

Paper Nr: 195
Title:

SOFTWARE RELEASES MANAGEMENT IN THE TRIGGER AND DATA ACQUISITION OF ATLAS EXPERIMENT - Integration, Building, Deployment, Patching

Authors:

Andrei Kazarov, Mihai Caprini, Igor Soloviev and Reiner Hauser

Abstract: ATLAS is a general-purpose experiment in high-energy physics at the Large Hadron Collider at CERN. The ATLAS Trigger and Data Acquisition (TDAQ) system is a distributed computing system which is responsible for transferring and filtering the physics data from the experiment to mass storage. The TDAQ software has been developed since 1998 by a team of a few dozen developers. It is used for the integration of all ATLAS subsystems participating in data-taking, providing the framework and APIs for building the software pieces of the TDAQ system. It is currently composed of more than 200 software packages which are available to ATLAS users in the form of regular software releases. The software is available for development on a shared filesystem and on test beds, and it is deployed to the ATLAS pit where it is used for data-taking. The paper describes the working model, the policies and the tools which are used by software developers and software librarians to develop, release, deploy and maintain the TDAQ software over the long period of development, commissioning and running of the TDAQ system. In particular, the patching and distribution model based on RPM packaging is discussed, which is important for software that is maintained for a long period on a running production system.

Paper Nr: 226
Title:

AN ASPECT-BASED APPROACH FOR CONCURRENT PROGRAMMING USING CSP FEATURES

Authors:

José Elias Araújo, Henrique Rebêlo, Ricardo Lima, Alexandre Mota, Fernando Castor, Tiago Lima, Juliana Lucena and Filipe Lima

Abstract: The construction of large-scale parallel and concurrent applications is one of the greatest challenges faced by software engineers nowadays. The programming models for concurrency implemented by mainstream programming languages, such as Java, C, and C++, are too low-level and difficult for the average programmer to use. At the same time, the use of libraries implementing high-level concurrency abstractions, such as JCSP, requires additional learning effort and produces programs where application logic is tangled with library-specific code. In this paper we propose separating the concurrency concerns (CSP code) from the development of sequential Java processes. We explore aspect-oriented programming to implement this separation of concerns. A compiler generates AspectJ code, which instruments the sequential Java program with JCSP concurrency constructs. We have conducted an experiment to evaluate the benefits of the proposed framework, employing metrics for attributes such as separation of concerns, coupling, and size to compare our approach against the JCSP framework and thread-based approaches.
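
JCSP itself is a Java library and is not reproduced here; as a rough analogue of the CSP style the paper builds on, this Python sketch keeps process bodies sequential while channels (modelled with bounded queues) carry all the concurrency:

```python
# Rough CSP-style analogue (not JCSP): sequential process bodies
# communicating over channels modelled as bounded queues.
import threading, queue

def producer(out):                     # sequential logic; channel passed in
    for i in range(3):
        out.put(i)                     # channel write
    out.put(None)                      # end-of-stream marker

def consumer(inp):
    while (item := inp.get()) is not None:   # channel read
        print("got", item)

chan = queue.Queue(maxsize=1)  # bounded queue approximates a synchronous CSP channel
threads = [threading.Thread(target=producer, args=(chan,)),
           threading.Thread(target=consumer, args=(chan,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```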

Paper Nr: 249
Title:

A LEGACY SYSTEMS USE CASE RECOVERY METHOD

Authors:

Philippe Dugerdil and Sebastien Jossi

Abstract: During the development of a legacy system reverse engineering method, we developed a technique to help recover the system's use cases. Our reverse engineering method starts with the re-documentation of the system's use cases by observing its actual users. But these use cases are never complete and accurate; in particular, the many alternative flows are often overlooked by the users. This paper presents our use case recovery methodology as well as the techniques we implemented to identify all the flows of a legacy system's use case. Starting from an initial use case based on the observation of the users, we gather the corresponding execution trace by running the system according to this use case. The analysis of this execution trace, coupled with a static analysis of the source code, lets us find the possible alternative execution paths of the system. The execution conditions for these paths are analyzed to establish the link to the use case level. This lets us synthesize alternative flows for the use case. Next, we run the system again following these alternative flows to uncover possible new alternative paths, until we converge to a stable use case model.

Paper Nr: 252
Title:

LESSONS FROM ENGINEERING - Can Software Benefit from Product based Evidence of Reliability?

Authors:

Neal Snooke

Abstract: This paper argues that software engineering should not overlook the lessons learned by other engineering disciplines with longer-established histories. As software engineering evolves, it should focus not only on application functionality but also on mature engineering concepts such as reliability, dependability, safety, failure mode analysis, and maintenance. Software is rapidly approaching the level of maturity that other disciplines have already encountered, where it is not merely enough to be able to make it work (sometimes); we must be able to objectively assess quality, determine how and when it can fail, and mitigate risk as necessary. The tools to support these tasks are in general not integrated into the design and implementation stages as they are in other engineering disciplines, although recent techniques in software development have the potential to allow new types of analysis to be developed and integrated so that software can justify its claim to be engineered. Currently, software development relies primarily on development processes and testing to achieve these aims, but neither of these provides the hard design and product analysis that engineers find essential in other disciplines. This paper considers how software can learn from other engineering analyses and investigates failure modes and effects analysis as an example.

Paper Nr: 259
Title:

EVOLVABILITY IN SERVICE ORIENTED SYSTEMS

Authors:

Anca Daniela Ionita and Marin Litoiu

Abstract: The paper investigates the evolution and maintenance of service-oriented systems deployed in SOA and cloud infrastructures. It analyzes the challenges entailed by frequent modifications of business environments, discussing their causes, identifying the evolution points in service architectures, studying classifications of the human actors involved across the whole life cycle, and pointing out possible risks and difficulties encountered in the process of change. Based on the lessons learned in our study, four pillars for improving service evolvability are identified: orientation towards the users, raising the level of abstraction, supporting automation, and enabling adaptivity through feedback loops.

Paper Nr: 263
Title:

A FRAMEWORK FOR PROACTIVE SLA NEGOTIATION

Authors:

Khaled Mahbub and George Spanoudakis

Abstract: In this position paper we propose a framework for proactive SLA negotiation that integrates this process with dynamic service discovery and can hence provide integrated runtime support for both of these key activities, which are necessary in order to achieve the runtime operation of service-based systems with minimised interruptions. More specifically, our framework discovers candidate constituent services for a composite service and establishes an agreed but not enforced SLA, together with a period during which this pre-agreement can be activated should this become necessary.

Posters
Paper Nr: 20
Title:

SLICING OF UML MODELS

Authors:

K. Lano and S. Kolahdouz-Rahimi

Abstract: This paper defines techniques for the slicing of UML models, that is, for the restriction of models to those parts which specify the properties of a subset of the elements within them. The purpose of this restriction is to produce a smaller model which permits more effective analysis and comprehension than the complete model, and also to form a step in factoring of a model. We consider class diagrams, individual state machines, and communicating sets of state machines.

Paper Nr: 27
Title:

TOWARDS A ‘UNIVERSAL’ SOFTWARE METRICS TOOL - Motivation, Process and a Prototype

Authors:

Gordana Rakić, Zoran Budimac and Klaus Bothe

Abstract: In this paper we investigate the main limitations of current software metrics techniques and tools, propose a unified intermediate representation for the calculation of software metrics, and describe a promising prototype of a new metrics tool. The motivation is the evident lack of wider utilization of software metrics in raising the quality of software products.

Paper Nr: 31
Title:

HEAP GARBAGE COLLECTION WITH REFERENCE COUNTING

Authors:

Wuu Yang, Huei-Ru Tseng and Rong-Hong Jan

Abstract: In algorithms based on reference counting, a garbage-collection decision has to be made whenever a pointer x → y is about to be destroyed. At this time, the node y may become dead even if y's reference count is not zero, because y may belong to a piece of cyclic garbage. Some aggressive collection algorithms will put y on the list of potential garbage regardless of y's reference count; later, a trace procedure starting from y will be initiated. Other algorithms, less aggressive, will put y on the list of potential garbage only if y's reference count falls below a threshold, such as 3. The former approach may waste time on tracing live nodes, and the latter may leave cyclic garbage uncollected indefinitely. The problem with the above two approaches (and with reference counting in general) is that it is difficult to decide whether y is dead when the pointer x → y is destroyed. We propose a new garbage-collection algorithm in which each node maintains two, rather than one, reference counters: gcount and hcount. Gcount is the number of references from the global variables and from the run-time stack. Hcount is the number of references from the heap. Our algorithm puts node y on the list of potential garbage if and only if y's gcount becomes 0. The better prediction made by our algorithm results in more efficient garbage collectors.
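
A minimal sketch of the described bookkeeping (the trace over the potential-garbage list is elided; names are illustrative):

```python
# Two reference counters per node, as described above: gcount for root
# (global/stack) references, hcount for heap references. A node becomes
# a trace candidate only when gcount drops to zero.
class Node:
    def __init__(self, name):
        self.name = name
        self.gcount = 0     # references from globals / run-time stack
        self.hcount = 0     # references from other heap nodes

potential_garbage = []

def add_ref(node, from_heap):
    if from_heap:
        node.hcount += 1
    else:
        node.gcount += 1

def drop_ref(node, from_heap):
    if from_heap:
        node.hcount -= 1
    else:
        node.gcount -= 1
    if node.gcount == 0:
        # No root references remain; any remaining hcount may be cyclic
        # garbage, so queue the node for an (elided) trace.
        potential_garbage.append(node)

a = Node("a"); add_ref(a, from_heap=False)   # stack reference to a
b = Node("b"); add_ref(b, from_heap=True)    # heap reference a -> b
drop_ref(a, from_heap=False)                 # stack frame popped
print([n.name for n in potential_garbage])   # ['a']
```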

Paper Nr: 44
Title:

TRAINING GLOBAL SOFTWARE DEVELOPMENT SKILLS THROUGH A SIMULATED ENVIRONMENT

Authors:

Miguel J. Monasor, Aurora Vizcaíno and Mario Piattini

Abstract: Training and Education in Global Software Development (GSD) is a challenge that has recently emerged for companies and universities, and it entails tackling certain drawbacks caused by distance, such as communication problems, cultural and language differences, or the inappropriate use of groupware tools. We have carried out a Systematic Literature Review on the teaching of GSD which has shown that educators should provide learners with a wide set of realistic and practical experiences, since the skills required are best learned by doing. However, this is difficult, as companies are not willing to incorporate students into their projects. In this paper we present an alternative: an environment that will simulate typical GSD problems and will allow students and practitioners to develop skills by interacting with virtual agents from different cultures, thus avoiding the risks of involving non-qualified people in real settings.

Paper Nr: 60
Title:

COLOR IMAGE ENCRYPTION SOLUTION BASED ON THE CHAOTIC SYSTEM OF LOGISTIC AND HENON

Authors:

Zhang Yunpeng, Sun Peng, Xie Jing and Huang Yunting

Abstract: The security of color images has become an important field of network information security research. To meet the security requirements of color images, and according to the characteristics of image coding and chaotic systems, this paper presents a color image encryption solution based on chaotic systems. With the help of the Logistic system, the solution generates a chaotic sequence, which determines the parameters and the number of iterations of the Henon system. The color image is then encrypted by iterating the Henon system multiple times. Finally, we analyse and validate the solution in theory and experiment. The results show that the encrypted image has a uniform distribution of pixel values and good diffusion, can effectively resist phase-space reconstruction attacks, and offers good security and reliability.
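
By way of illustration only (our sketch, not the authors' exact scheme), the coupling described above can be mimicked in a few lines: a Logistic map derives the Henon parameters and iteration count, and the Henon orbit is quantized into a keystream XORed with the pixel bytes. Since XOR is symmetric, decryption re-runs the same keystream.

    def logistic(x, r=3.99, n=1):
        # Logistic map x' = r * x * (1 - x), iterated n times
        for _ in range(n):
            x = r * x * (1.0 - x)
        return x

    def encrypt_pixels(pixels, key=0.654321):
        """Illustrative sketch: `pixels` is an iterable of 0..255 ints."""
        x = logistic(key, n=100)               # discard the transient
        a = 1.3 + 0.1 * logistic(x)            # Henon parameter near the chaotic regime
        iters = 1 + int(logistic(x, n=2) * 8)  # Henon iterations per pixel
        hx, hy, b = 0.1, 0.3, 0.3
        out = []
        for p in pixels:
            for _ in range(iters):             # Henon map: x' = 1 - a*x^2 + y, y' = b*x
                hx, hy = 1.0 - a * hx * hx + hy, b * hx
            if abs(hx) > 1e3:                  # guard: re-seed if the orbit escapes
                hx, hy = 0.1, 0.3
            out.append(p ^ (int(abs(hx) * 1e6) % 256))
        return out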

Paper Nr: 69
Title:

LANGUAGE-ORIENTED PROGRAMMING VIA DSL STACKING

Authors:

Bernhard G. Humm and Ralf S. Engelschall

Abstract: According to the paradigm of Language-Oriented Programming, an application for a problem should be implemented in the most appropriate domain-specific language (DSL). This paper introduces DSL stacking, an efficient method for implementing Language-Oriented Programming where DSLs and general-purpose languages are incrementally developed on top of a base language. This is demonstrated with components of a business information system that are implemented in different DSLs for Semantic Web technology in Lisp.

Paper Nr: 74
Title:

MODEL-DRIVEN APPROACHES FOR SERVICE-BASED APPLICATIONS DEVELOPMENT

Authors:

Selo Sulistyo and Andreas Prinz

Abstract: Service-based systems are considered an architectural approach to managing software complexity and development. With this approach, a software application is built by defining a set of interactions of autonomous, compound, and loosely-coupled software units called services. Another way of managing software complexity is to use model-driven approaches, in which the development of software applications starts at the model level and the code implementing the application is generated automatically. This paper presents AMG (abstract, model and generate), a combination of the two approaches.

Paper Nr: 173
Title:

PERFORMANCE OPTIMIZATION OF EXHAUSTIVE RULES IN GRAPH REWRITING SYSTEMS

Authors:

Tamás Mészáros, Márk Asztalos, Gergely Mezei and Hassan Charaf

Abstract: Graph rewriting-based model transformation is a well-known technique with a strong mathematical background for processing domain-specific models represented as graphs. The performance optimization techniques realized in today’s graph transformation engines usually focus on optimizing a single execution of the individual rules, and do not consider the optimization possibilities in their repeated execution. In this paper we present a performance optimization technique, called deep exhaustive matching, for exhaustively executed rules. Deep exhaustive matching continues the matching of the same rule from the next possible position after a successful rewriting phase, thus achieving a noticeable performance gain.

Paper Nr: 192
Title:

A SCALA-BASED DOMAIN SPECIFIC LANGUAGE FOR STRUCTURED DATA REPRESENTATION

Authors:

Kazuaki Maeda

Abstract: This paper describes Sibon, a new representation written in a text-based data format using Scala syntax. The design principles of Sibon are good readability and simplicity of structured data representation. An important feature of Sibon is that the representation is executable: once Sibon-related definitions are loaded, the representation can be executed according to the definitions. A program generator was developed to create Scala and Java programs from Sibon definitions. In the author’s experience, productivity was improved in the design and implementation of programs that manipulate structured data.

Paper Nr: 197
Title:

WEB TOOL FOR OBJECT ORIENTED DESIGN METRICS

Authors:

José R. Hilera, Luis Fernández-Sanz and Marina Cabello

Abstract: An open source web application to calculate metrics from UML class diagrams is presented. The system can process any class diagram encoded in XMI format. After processing the XMI document, a complete report can be obtained in two different formats, HTML and spreadsheet. The application can be accessed freely on a website, and its source code is available for download.

Paper Nr: 199
Title:

IMPLEMENTING QVT IN A DOMAIN-SPECIFIC MODELING FRAMEWORK

Authors:

István Madari, Márk Asztalos, Tamás Mészáros, László Lengyel and Hassan Charaf

Abstract: Meta Object Facility 2.0 Query/Views/Transformation (QVT) is OMG’s standard for specifying model transformations, views and queries. In this paper we deal with the QVT Relations language, which is a declarative specification of model transformations between two models. The QVT Relations language offers several features that are valuable in practice, such as implicit trace creation support and bidirectional transformations. However, QVT lacks implementations, because its specification is not final and is highly complex. The main contribution of this paper is to show how we integrated QVT constructs into our domain-specific modeling environment to facilitate a later implementation of QVT Relations-driven bidirectional model transformation.

Paper Nr: 200
Title:

TEXTUAL SYNTAX MAPPING CAN ENABLE SYNTACTIC MERGING

Authors:

László Angyal, László Lengyel, Tamás Mészáros and Hassan Charaf

Abstract: As support for textual domain-specific languages (DSL) increases, the reconstruction of visual models from the generated textual artifacts has also come into focus. State-of-the-art bidirectional approaches support reversible text generation from models using a single syntax mapping. However, even these tools have not gone so far as to facilitate synchronization between models and generated artifacts. This paper presents the importance of synchronization and shows how these mappings can enable syntactic reconciliation for custom DSLs. Our approach provides algorithms supporting incremental DSL-driven software development, giving developers the freedom to choose between textual and visual editing of artifacts, depending on which representation is more effective for them at a specific moment.

Paper Nr: 206
Title:

SPECIFICATION AND VERIFICATION OF WORKFLOW APPLICATIONS USING A COMBINATION OF UML ACTIVITY DIAGRAMS AND EVENT B

Authors:

Ahlem Ben Younes and Leila Jemni Ben Ayed

Abstract: This paper presents a transformation of UML activity diagrams (AD) into Event B for the specification and verification of workflow applications. With this transformation, UML models can be verified automatically by verifying the derived Event B models using powerful B support tools such as B4free. The workflow is initially expressed graphically with UML AD and translated into Event B. The resulting model is then enriched with invariants/assertions describing functional properties of workflow models, such as the absence of deadlock. We present translation rules from UML AD into Event B, and we also propose a translation process based on the refinement technique of Event B to encode the hierarchical decomposition of UML AD. In addition, we propose a solution for specifying time in Event B, and we illustrate the proposed technique with an example workflow application.

Paper Nr: 208
Title:

EXPLORING EMPIRICALLY THE RELATIONSHIP BETWEEN LACK OF COHESION IN OBJECT-ORIENTED SYSTEMS AND COUPLING AND SIZE

Authors:

Linda Badri, Mourad Badri and Fadel Toure

Abstract: The study presented in this paper aims at exploring empirically the relationship between lack of cohesion of classes in object-oriented systems and their coupling and size. We designed and conducted an empirical study on various open source Java software systems. The experiment has been conducted using several well-known code-based metrics related to cohesion, coupling and size. The results of this study provide evidence that a lack of cohesion may actually be associated with (high) coupling and (large) size.

Paper Nr: 214
Title:

MUTATION TESTING STRATEGIES - A Collateral Approach

Authors:

Mike Papadakis, Nicos Malevris and Marinos Kintis

Abstract: Mutation Testing is considered to be one of the most powerful techniques for unit testing and at the same time one of the most expensive. The principal expense of mutation is the vast number of imposed test requirements, many of which cannot be satisfied. In order to overcome these limitations, researchers have proposed many cost reduction techniques, such as selective mutation, weak mutation and a novel approach based on mutant combination, which combines first order mutants to generate second order ones. An experimental comparison involving weak mutation, strong mutation and various proposed strategies was conducted. The experiment shows that all proposed approaches are quite effective in general as they result in high collateral coverage of strong mutation (approximately 95%), while recording remarkable effort savings. Additionally, the results suggest that some of the proposed approaches are more effective than others making it possible to reduce the mutation testing application cost with only a limited impact on its effectiveness.

Paper Nr: 215
Title:

AN UML ACTIVITIES DIAGRAMS TRANSLATION INTO EVENT B SUPPORTING THE SPECIFICATION AND THE VERIFICATION OF WORKFLOW APPLICATION MODELS - From UML Activities Diagrams to Event B

Authors:

Leila Jemni Ben Ayed, Najet Hamdi and Yousra Bendaly Hlaoui

Abstract: This paper presents the transformation of UML activity diagrams into Event B for the specification and verification of parallel and distributed workflow applications. With this transformation, UML models can be verified by verifying the derived Event B models. The design is initially expressed graphically with UML and translated into Event B. The resulting model is then enriched with invariants describing dynamic properties such as deadlock freeness, livelock freeness and reachability. The approach uses the activity diagram metamodel.

Paper Nr: 219
Title:

JSIMIL - A Java Bytecode Clone Detector

Authors:

Luis Quesada, Fernando Berzal and Juan Carlos Cubero

Abstract: We present JSimil, a code clone detector that uses a novel algorithm to detect similarities in sets of Java programs at the bytecode level. The proposed technique emphasizes scalability and efficiency. It also supports customization through profiles that allow the user to specify matching rules, system behavior, pruning thresholds, and output details. Experimental results reveal that JSimil outperforms existing systems. It is even able to spot similarities when complex code obfuscation techniques have been applied.

Paper Nr: 225
Title:

META-DESIGN PARADIGM BASED APPROACH FOR ITERATIVE RAPID DEVELOPMENT OF ENTERPRISE WEB APPLICATIONS

Authors:

Athula Ginige

Abstract: Developing enterprise software or web applications that meet user requirements within time and budget still remains a challenge. The success of these applications mostly depends on how well the user requirements have been captured. The literature shows progress has been made on two fronts: improving the ways requirements are captured, and increasing interaction between users and developers to detect gaps or miscommunication of requirements early in the lifecycle by using iterative rapid development approaches. This paper presents a Meta-Design paradigm based approach that builds on work already done in the area of Model Driven Web Engineering to address this issue. It includes a Meta-Model of an enterprise web application to capture the requirements and an effective way of generating the application.

Paper Nr: 266
Title:

TESTING IN PARALLEL - A Need for Practical Regression Testing

Authors:

Zhenyu Zhang, Zijian Tong and Xiaopeng Gao

Abstract: When software evolves, its functionalities are evaluated using regression testing. In a regression testing process, a test suite is augmented, reduced, prioritized, and run on a software build version. Regression testing has been used in industry for decades, yet in some modern software activities we find that it is still not practical to apply. For example, in our experience at Sohu.com Inc., running a reduced test suite, even concurrently, may take two hours or longer. Nevertheless, in an urgent task or a continuous integration environment, build versions and regression testing requests may arrive more often than that. In such a case, it is not unusual that a new test suite run needs to start before all the previous ones have terminated. As a solution, running test suites on different build versions in parallel may increase the efficiency of regression testing and facilitate evaluating the fitness of software evolutions. On the other hand, hardware and software resources limit the number of parallel tasks. In this paper, we raise the problem of testing in parallel, give the general problem settings, and use a pipeline presentation for data visualization. Solving this problem is expected to make regression testing practical.

Area 3 - Distributed Systems

Full Papers
Paper Nr: 50
Title:

NOTES ON PRIVACY-PRESERVING DISTRIBUTED MINING AND HAMILTONIAN CYCLES

Authors:

Renren Dong and Ray Kresman

Abstract: Distributed storage and retrieval of data is both the norm and a necessity in today’s computing environment. However, sharing and dissemination of this data is subject to privacy concerns. This paper addresses the role of graph theory, especially Hamiltonian cycles, on privacy preserving algorithms for mining distributed data. We propose a new heuristic algorithm for discovering disjoint Hamiltonian cycles in the underlying network. Disjoint Hamiltonian cycles are useful in a number of applications; for example, to ensure that someone’s private data remains private even when others collude to discover the data.

Short Papers
Paper Nr: 25
Title:

THE OVERHEAD OF SAFE BROADCAST PERSISTENCY

Authors:

Rubén de Juan-Marín, Francesc D. Muñoz-Escoí, J. Enrique Armendáriz-Íñigo and J. R. González de Mendívil

Abstract: Although the need for logging messages to secondary storage once they have been received has been stated in several papers assuming a recoverable failure model, none of them analysed the overhead implied by such logging when reliable broadcasts are used in a group communication system guaranteeing virtual synchrony. At first glance, it seems an excessive cost for its apparently limited advantages, but several scenarios contradict this intuition. This paper surveys some of these configurations and outlines some benefits of this persistence-related approach.

Paper Nr: 33
Title:

FALL DETECTION SYSTEMS - A Solution based on Low Cost Sensors

Authors:

Miguel A. Laguna, María J. Tirado, Javier Finat and José M. Marqués

Abstract: The problem of fall detection in elderly patients is particularly critical in persons who live alone or are alone most of the day. The use of information and communication technologies to facilitate their autonomy is a clear example of how technological advances can improve the quality of life of dependent people. This article presents a prototype developed with a low-cost device (the gamepad of a well-known video console) using its Bluetooth communication capabilities and built-in accelerometer. The latter is much more sensitive than similar devices integrated in mobile phones and much cheaper than industrial accelerometers. Besides its stand-alone use, the system can be connected to a generic remote monitoring system that has been developed as a software product line for use in aged people’s residences.

Paper Nr: 61
Title:

AN IMPROVED HIGH-DENSITY KNAPSACK-TYPE PUBLIC KEY CRYPTOSYSTEM

Authors:

Zhang Yunpeng, Lin Xia and Liu Xi

Abstract: Almost all knapsack-type public key cryptosystems have been proven insecure, so more secure public key cryptographic algorithms are urgently needed. This article first discusses the basic theory of knapsack-type public key cryptography and the methods used to attack it. It then analyses the scheme of Wang & Hu (2006, p. 2930) and points out potential defects in its cryptographic security. The article goes on to present an improved algorithm and discusses its security and efficiency. The analysis shows that the improved algorithm is more secure than the original one.

Paper Nr: 150
Title:

A STUDY OF SECURITY APPROACHES FOR THE DEVELOPMENT OF MOBILE GRID SYSTEMS

Authors:

David G. Rosado, Eduardo Fernández-Medina and Javier Lopez

Abstract: Mobile Grid systems allow us to build highly complex information systems with various remarkable features (interoperability between multiple security domains, cross-domain authentication and authorization, dynamic, heterogeneous and limited mobile devices, etc.), which demand secure development methodologies to build quality software, offering methods, techniques and tools that facilitate the work of the entire team involved in software development. These methodologies should be supported by Grid security architectures that define the main security aspects to be considered, and by solutions to the problem of how to integrate mobile devices within Grid systems. Some approaches regarding secure development methodologies, Grid security architectures and the integration of mobile devices in the Grid can be found in the literature; these are analyzed and studied in this paper, which offers a comparison framework of all the approaches related to security in Mobile Grid environments.

Paper Nr: 166
Title:

AN APPROACH TO DATA-DRIVEN ADAPTABLE SERVICE PROCESSES

Authors:

George Athanasopoulos and Aphrodite Tsalgatidou

Abstract: Within the currently forming pervasive computing environment, services and information sources thrive. Instantiations of the service-oriented computing paradigm, e.g. Web, Peer-to-Peer (P2P) and Grid services, are continuously emerging, whilst information can be collected from several information sources, e.g. materializations of the Web 2.0 and Web 3.0 trends, social networking applications and sensor networks. Within this context, the development of adaptable service-oriented processes utilizing heterogeneous services, in addition to available information, is an emerging trend. This paper presents an approach and an enabling architecture that support the provision of data-driven, adaptable, heterogeneous service processes. At the core of the proposed architecture is a set of interacting components that accommodate the acquisition of information, the execution of service chains, and their adaptation based on collected information.

Paper Nr: 178
Title:

QUANTUM CRYPTOGRAPHY BASED KEY DISTRIBUTION IN WI-FI NETWORKS - Protocol Modifications in IEEE 802.11

Authors:

Shirantha Wijesekera, Xu Huang and Dharmendra Sharma

Abstract: Demand for wireless communications around the world is growing. IEEE 802.11 wireless networks, also known as Wi-Fi, are among the most popular wireless networks, with millions of users across the globe. Hence, providing secure communication for wireless networks has become one of the prime concerns. We have proposed a novel protocol based on Quantum Key Distribution (QKD) to exchange the encryption key in Wi-Fi networks. In this paper, we present the protocol modifications made to the existing IEEE 802.11 standard to implement the proposed QKD-based key exchange.

Paper Nr: 190
Title:

DECENTRALIZED SYSTEM FOR MONITORING AND CONTROL OF RAIL TRAFFIC IN EMERGENCIES - A New Distributed Support Tool for Rail Traffic Management

Authors:

Itziar Salaberria, Unai Gutierrez, Roberto Carballedo and Asier Perallos

Abstract: Traditionally, rail traffic management is performed automatically using centralized systems based on wired sensors and electronic elements fixed on the tracks. These systems, called Centralized Traffic Control (CTC) systems, are robust and highly available, but when they fail, traffic management must be done manually. This paper is the result of 4 years of work with railway companies on the development of a distributed support tool for rail traffic control and management. The new system combines train-side systems and terrestrial applications that exchange information via a hybrid mobile and radio wireless communications architecture.

Paper Nr: 201
Title:

AN EXTENSIBLE, MULTI-PARADIGM MESSAGE-ORIENTED MOBILE MIDDLEWARE

Authors:

Yuri Morais and Glêdson Elias

Abstract: Message-oriented middleware (MOM) platforms are usually based on asynchronous, peer-to-peer interaction styles, leading to more loosely coupled architectures. As a consequence, MOMs have the potential to support the development of networked mobile applications. However, MOM platforms have been implemented under a limited set of message-based communication paradigms, each one being specifically adapted to a given application domain or network model. In this context, this paper proposes a mobile middleware solution which offers a comprehensive set of extensible, message-based communication paradigms, such as publish/subscribe, message queues and tuple spaces. Supported by a Software Product Line (SPL) approach, the proposed middleware is suitable for constrained devices, as all supported communication paradigms share and reuse a reasonable number of software components that deal with common messaging features. Additionally, by means of an extensible design, new communication paradigms can easily be accommodated, and existing ones can be removed in order to better fit more constrained devices.

Paper Nr: 251
Title:

PYSENSE: PYTHON DECORATORS FOR WIRELESS SENSOR MACROPROGRAMMING

Authors:

Davide Carboni

Abstract: PySense aims at bringing wireless sensor (and "internet of things") macroprogramming to the audience of Python programmers. WSN macroprogramming is an emerging approach in which the network is seen as a whole and the programmer focuses only on the application logic. The PySense runtime environment partitions the code and transmits code snippets to the right nodes, finding a balance between energy consumption and computing performance.
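
The abstract does not show PySense's actual API, so the following is a purely hypothetical sketch of what decorator-based WSN macroprogramming could look like, with a decorator that tags a function for placement on matching nodes:

    # Hypothetical sketch -- not PySense's real API.
    REGISTRY = []

    def on_node(role="sensor", period_s=60):
        """Mark a function as a code snippet to be shipped to matching nodes."""
        def wrap(fn):
            REGISTRY.append({"fn": fn, "role": role, "period_s": period_s})
            return fn
        return wrap

    @on_node(role="temperature", period_s=30)
    def sample_and_filter(read):
        v = read()                  # `read` would be the node-local sensor driver
        return v if v > -40 else None

    # A macroprogramming runtime would partition REGISTRY and transmit each
    # snippet to suitable nodes, trading energy against computing performance.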

Posters
Paper Nr: 7
Title:

NETWORK CONVERGENCE AND MODELING - Design of Interconnecting SW for Intranets and Fieldbuses

Authors:

Miroslav Sveda

Abstract: The paper deals with current software architectures for intermediate systems for Intranet and short-range wireless interconnections. The article presents two case studies, founded on real-world applications, that demonstrate another input to network convergence and network modeling in software architecture development, stemming from design experience with industrial network applications and metropolitan networking. The first case study focuses on the IEEE 1451 family of standards, which provides a design framework for creating applications based not only on the IP/Ethernet profile but also on ZigBee. The second case study explores how security and safety properties of Intranets can be verified under every network configuration using model checking.

Paper Nr: 10
Title:

ARTIFICIAL IMMUNE SYSTEM FRAMEWORK FOR PATTERN EXTRACTION IN DISTRIBUTED ENVIRONMENT

Authors:

Rafał Pokrywka

Abstract: Information systems today are dynamic, heterogeneous environments and places where a lot of critical data is stored and processed. Such an environment is usually built over many virtualization layers on top of a backbone of hardware and network. The key problem within this environment is to find, in real time, valuable information among large sets of data. In this article a framework for a pattern extraction system based on an artificial immune system is presented and discussed. As an example, a system for anomalous pattern extraction for intrusion detection in a computer network is presented.

Paper Nr: 36
Title:

AN ASSESSMENT OF HEURISTICS FOR FAST SCHEDULING OF GRID JOBS

Authors:

Florian Möeser, Wolfgang Süß, Wilfried Jakob, Alexander Quinte and Karl-Uwe Stucky

Abstract: Due to the dynamic nature of the grid and the frequent arrival of new jobs, rescheduling of already planned and new jobs is a permanent process that is in need of good and fast planning algorithms. This paper extends previous work and deals with newly implemented heuristics for our Global Optimizing Resource Broker and Allocator GORBA. Of a range of possibly usable heuristics, the most promising ones have been chosen for implementation and evaluation. They serve for the following two purposes: Firstly, the heuristics are used to quickly generate feasible schedules. Secondly, these schedules go into the start population of a subsequent run of our Evolutionary Algorithm incorporated in GORBA for improvement. The effect of the selected heuristics is compared to our best simple one used in the first version of GORBA. The investigation is based on two synthetically generated benchmarks representing a load of 300 grid jobs each. A formal definition of the scheduling problem is given together with an assessment of its complexity. The results of the evaluation underline the described intricacy of the problem, because none of the heuristics performs better than our simple one, although they work well on other presumably easier problems.

Paper Nr: 87
Title:

THE TASK GRAPH ASSIGNMENT FOR KASKADA PLATFORM

Authors:

Henryk Krawczyk and Jerzy Proficz

Abstract: The paper describes the computational model of the KASKADA platform. It consists of two main elements: a computational cluster and a task graph. The cluster is represented by a finite set of nodes with specific maximum loads. The graph contains nodes representing the tasks to be executed, and edges representing continuous data flow between the tasks. The tasks are executed concurrently, and the data flows between them are directed and acyclic. For such a model, the problem of task-to-node assignment is analysed, and two optimisation goals are defined: low cluster fragmentation and minimum processing latency. Heuristic algorithms are described for both goals. Simulation results for the described algorithms are provided and evaluated. Finally, future algorithm improvements are suggested.
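
As a toy illustration of this assignment problem (our own greedy formulation, not one of the paper's algorithms), tasks with known loads can be packed onto nodes with maximum loads while favouring already-loaded nodes to limit cluster fragmentation:

    def assign(tasks, capacity):
        """tasks: {task: load}; capacity: {node: max_load}.
        Greedy first-fit-decreasing placement that prefers the fullest
        node that still fits, keeping cluster fragmentation low."""
        used = {n: 0.0 for n in capacity}
        placement = {}
        for task, load in sorted(tasks.items(), key=lambda kv: -kv[1]):
            fits = [n for n in capacity if used[n] + load <= capacity[n]]
            if not fits:
                raise ValueError("no feasible node for task %r" % task)
            node = max(fits, key=lambda n: used[n])
            placement[task] = node
            used[node] += load
        return placement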

Paper Nr: 126
Title:

STRUCTURED USE-CASES AS A BASIS FOR SELF-MANAGEMENT OF DISTRIBUTED SYSTEMS

Authors:

Reza Haydarlou, Michel Oey, Martijn Warnier and Frances M. T. Brazier

Abstract: Automated support for the management of complex distributed object-oriented systems is a challenge: self-management of such systems is the goal. This paper presents a use-case based approach to self-management of such systems, focusing on autonomic monitoring and diagnosis. The existing notion of use-case has been extended to different levels of system design: explicitly specifying system behavior at different levels, and the relations between these levels, coupling structural models to these descriptions when and where appropriate. The proposed model is illustrated with a small example.

Paper Nr: 220
Title:

IT INFRASTRUCTURE DESIGN AND IMPLEMENTATION CONSIDERATIONS FOR THE ATLAS TDAQ SYSTEM

Authors:

M. Dobson, G. Unel, C. Caramarcu, I. Dumitru, L. Valsan, G. L. Darlea, F. Bujor, A. Bogdanchikov, A. Korol, A. Zaytsev and S. Ballestrero

Abstract: This paper gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which administers the TDAQ computing environment supporting the Front End detector hardware, Data Flow, Event Filter and other subsystems of the ATLAS detector operating on the LHC accelerator at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, a high performance centralized storage system, about 50 multi-screen user interface systems installed in the control rooms, and various hardware and critical service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The ATLAS TDAQ computing environment now serves more than 3000 users subdivided into approximately 300 categories in correspondence with their roles in the system. The access and role management system is custom built on top of an LDAP schema. The engineering infrastructure of the ATLAS experiment provides 340 racks for hardware components and 4 MW of cooling capacity. The estimated data flow rate exported by the ATLAS TDAQ system for future long term analysis is about 2.5 PB/year. The number of CPU cores installed in the system will exceed 10000 during 2010.

Paper Nr: 245
Title:

A SOFTWARE FRAMEWORK TO SUPPORT AGRICULTURE ACTIVITIES USING REMOTE SENSING AND HIGH PERFORMANCE COMPUTING

Authors:

Shamim Akhter and Kento Aida

Abstract: Agricultural activity monitoring, including quantifying irrigation scheduling, tracing soil hydraulic properties, and generating crop calendars, is very important for ensuring food security, and farmers need this information on a regular basis. Large scale agricultural activity monitoring requires aggregating information from Remote Sensing (RS) images, and that type of processing takes a huge amount of computational time, so optimizing the computational time is a vital requirement. In such cases, High Performance Computing (HPC) can help to reduce the processing time by increasing the computational resources, while web-based technology can contribute an understandable, efficient and effective monitoring system. Still, research merging the domains of RS image processing, agriculture and HPC has remained mainly hypothetical or conjectural rather than practically implemented. This research therefore contributes a new software system to support agriculture activities in real time using both RS and HPC. The main purpose of the system is to serve valuable crop parameter information to farmers through a web-based system in real time. We also discuss in detail the implementation issues of the proposed software system.

Area 4 - Data Management

Full Papers
Paper Nr: 86
Title:

DATABASE AUTHENTICATION BY DISTORTION FREE WATERMARKING

Authors:

Sukriti Bhattacharya and Agostino Cortesi

Abstract: In this paper we introduce a distortion-free watermarking technique that strengthens the verification of the integrity of relational databases, using a public zero-distortion authentication mechanism based on the Abstract Interpretation framework. The watermarking technique is partition based. The partitioning can be seen as a virtual grouping, which changes neither the values of the table’s elements nor their physical positions. Instead of inserting the watermark directly into the database partition, we treat it as an abstract representation of that concrete partition, such that any change in the concrete domain is reflected in its abstract counterpart. The main idea is to generate a gray scale image of the partition as its watermark, which serves as a tamper detection procedure, followed by a public zero-distortion authentication mechanism to verify ownership.

Paper Nr: 92
Title:

DISCOVERING LARGE SCALE MANUFACTURING PROCESS MODELS FROM TIMED DATA - Application to STMicroelectronics’ Production Processes

Authors:

Pamela Viale, Nabil Benayadi, Marc Le Goc and Jacques Pinaton

Abstract: Modeling the manufacturing process of complex products like electronic chips is crucial to maximize the quality of the production. The Process Mining methods developed over the last decade aim at modeling such manufacturing processes from the timed messages contained in the database of the process’s supervision system. Such processes can be complex, making it difficult to apply the usual Process Mining algorithms. This paper proposes to apply the TOM4L approach to model large scale manufacturing processes. A series of timed messages is considered as a sequence of class occurrences and is represented with a Markov chain from which models are deduced by abductive reasoning. Because sequences can be very long, a notion of process phase based on a concept of equivalence class is defined to cut the sequences, so that a model of each phase can be produced locally. The model of the whole manufacturing process is then obtained by concatenating the models of the different phases. This paper presents the application of this method to model STMicroelectronics’ manufacturing processes. STMicroelectronics’ interest in modeling its manufacturing processes stems from the necessity of detecting discrepancies between the real processes and the experts’ definitions of them.

Paper Nr: 97
Title:

SEARCHING KEYWORD-LACKING FILES BASED ON LATENT INTERFILE RELATIONSHIPS

Authors:

Tetsutaro Watanabe, Takashi Kobayashi and Haruo Yokota

Abstract: Modern information technologies require file systems to contain so many files that searching for desired files is a major problem. To address this problem, desktop search tools using full-text search techniques have been developed. However, files lacking any given keywords, such as picture files and the source data of experiments, cannot be found by tools based on full-text search, even if they are related to the keywords. It is even harder to find files located in directories different from those of the files that include the keywords. In this paper, we propose a method for searching for files that lack keywords but are associated with them. The proposed method derives relationship information from file access logs on the file server, based on the concept that files opened by a user in a particular time period are related. We have implemented the proposed method and evaluated its effectiveness by experiment. The evaluation results indicate that the proposed method is capable of finding keyword-lacking files, with superior precision and recall compared to full-text and directory-search methods.
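
The core idea lends itself to a compact sketch. Assuming a simplified log format of (user, timestamp, path) records sorted by time (our assumption; the paper's log schema is not shown here), co-accessed files can be scored as follows:

    from collections import defaultdict

    def related_files(log, window=300.0):
        """Count how often two files are opened by the same user within
        `window` seconds; high counts suggest the files are related."""
        recent_by_user = defaultdict(list)    # user -> [(time, path), ...]
        score = defaultdict(int)              # {file_a, file_b} -> co-access count
        for user, t, path in log:
            recent = [(t0, p) for (t0, p) in recent_by_user[user] if t - t0 <= window]
            for _, p in recent:
                if p != path:
                    score[frozenset((p, path))] += 1
            recent.append((t, path))
            recent_by_user[user] = recent
        return score

A keyword-lacking file can then be returned for a query when it is strongly co-accessed with files that do match the keywords.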

Paper Nr: 133
Title:

A DOMAIN-RELATED AUTHORITY MODEL FOR WEB PAGES BASED ON SOURCE AND RELATED INFORMATION

Authors:

Liu Yang, Chunping Li and Ming Gu

Abstract: The Internet has become a great source for searching and acquiring information, while the authority of the resources is difficult to evaluate. In this paper we propose a domain-related authority model which aims to calculate the authority of web pages in a specific domain using the source and related information. These two factors, together with link structure, are what we mainly consider in our model. We also add the domain knowledge to adapt to the characteristics of the domain. Experiments on the finance domain show that our model is able to provide good authority scores and ranks for web pages and is helpful for people to better understand the pages.

Paper Nr: 158
Title:

OBSERVATION-BASED FINE GRAINED ACCESS CONTROL FOR RELATIONAL DATABASES

Authors:

Raju Halder and Agostino Cortesi

Abstract: Fine Grained Access Control (FGAC) gives users access to non-confidential database information while preventing unauthorized leakage of confidential data. However, it provides only two extreme views of the database information: completely public or completely hidden. In this paper, we propose an Observation-based Fine Grained Access Control (OFGAC) mechanism based on the Abstract Interpretation framework, in which data are made accessible at various levels of abstraction. In this setting, unauthorized users cannot infer the exact content of a cell containing confidential information, while they are allowed to get partial information out of it, according to their access rights. Different levels of sensitivity of the information correspond to different levels of abstraction. In this way, we can tune different parts of the same database content to different levels of abstraction at the same time. Traditional FGAC can be seen as a special case of the OFGAC framework.

Paper Nr: 203
Title:

BUILDING A VIRTUAL VIEW OF HETEROGENEOUS DATA SOURCE VIEWS

Authors:

Lerina Aversano, Roberto Intonti, Clelio Quattrocchi and Maria Tortorella

Abstract: In order to make possible the analysis of data stored in heterogeneous data sources, it may be necessary to first build an aggregated view of these sources, also referred to as a virtual view. The problem is that the data sources can use different technologies and represent the same information in different ways. The use of a virtual view allows unified access to heterogeneous data sources without knowing the details of each single source. This paper proposes an approach for creating a virtual view of the views of heterogeneous data sources. The approach provides features for automatic schema matching and schema merging. It exploits both syntax-based and semantics-based techniques for performing the matching; it also considers both semantic and contextual features of the concepts. The usefulness of the approach is validated through a case study.

Short Papers
Paper Nr: 54
Title:

A STRUCTURED WIKIPEDIA FOR MATHEMATICS - Mathematics in a Web 2.0 World

Authors:

Henry Lin

Abstract: In this paper, we propose a new idea for developing a collaborative online system for storing mathematical work similar to Wikipedia, but much more suitable for storing mathematical results and concepts. The main idea proposed in this paper is to design a system that would allow users to store mathematics in a structured manner, which would make related work easier to find. The proposed system would have users use indentation to add a hierarchical structure to mathematical results and concepts entered into the system. The hierarchical structure provided by the indentation of results and concepts would provide users with additional search functionality useful for finding related work. Additionally, the system would automatically link related results by using the structure provided by users, and also provide other useful functionality. The system would be flexible in terms of letting users decide how much structure to add to each mathematical result or concept to ensure that contributors are not overly burdened with having to add too much structure to each result. The system proposed in this paper serves as a starting point for discussion on new ideas to organize mathematical results and concepts, and many open questions remain for new research.

Paper Nr: 56
Title:

AUTOMATIC MINING OF HUMAN ACTIVITY AND ITS RELATIONSHIPS FROM CGM

Authors:

Nguyen Minh The, Takahiro Kawamura, Hiroyuki Nakagawa, Yasuyuki Tahara and Akihiko Ohsuga

Abstract: The goal of this paper is to describe a method to automatically extract all the basic attributes of an activity, namely actor, action, object, time and location, as well as the relationships (transition and cause) between activities, from each sentence retrieved from Japanese CGM (consumer generated media). Previous work had several limitations, such as high setup cost, inability to extract all attributes, limitations on the types of sentences that can be handled, insufficient consideration of the interdependency among attributes, and inability to extract causes between activities. To resolve these problems, this paper proposes a novel approach that treats activity extraction as a sequence labeling problem and automatically builds its own training data. This approach has advantages such as domain independence, scalability, and no need for hand-tagged data. Since it is unnecessary to fix the positions and the number of attributes in activity sentences, this approach can extract all attributes and relationships between activities in a single pass over its corpus. Additionally, by converting complex sentences to simpler ones, removing stop words, and utilizing HTML tags, the Google Maps API, and Wikipedia, the proposed approach can deal with complex sentences retrieved from Japanese CGM.

Paper Nr: 71
Title:

ON USING THE NORMALIZED COMPRESSION DISTANCE TO CLUSTER WEB SEARCH RESULTS

Authors:

Alexandra Cernian, Liliana Dobrica, Dorin Carstoiu and Valentin Sgarciu

Abstract: Current Web search engines return long lists of ranked documents that users are forced to sift through to find relevant documents. This paper introduces a new approach for clustering Web search results, based on the notion of clustering by compression. Compression algorithms allow defining a similarity measure based on the degree of common information. Classification methods allow clustering similar data without any previous knowledge. The clustering by compression procedure is based on a parameter-free, universal, similarity distance, the normalized compression distance or NCD, computed from the lengths of compressed data files. Our goal is to apply the clustering by compression algorithm in order to cluster the documents returned by a Web search engine in response to a user query.
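
For concreteness, the standard definition of the NCD of two strings x and y is NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(s) is the compressed length of s. A minimal sketch (the choice of compressor, zlib here, is ours):

    import zlib

    def C(data: bytes) -> int:
        """Compressed length of `data` under zlib at maximum compression."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance; close to 0 for similar inputs."""
        cx, cy = C(x), C(y)
        return (C(x + y) - min(cx, cy)) / max(cx, cy)

    # e.g. feed the pairwise matrix ncd(doc_i, doc_j) to any clustering method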

Paper Nr: 107
Title:

ON BINARY SIMILARITY MEASURES FOR PRIVACY-PRESERVING TOP-N RECOMMENDATIONS

Authors:

Alper Bilge, Cihan Kaleli and Huseyin Polat

Abstract: Collaborative filtering (CF) algorithms fundamentally depend on similarities between users and/or items to predict individual preferences. There are various binary similarity measures, such as Kulzinsky, Sokal-Michener and Yule, for estimating the relation between two binary vectors. Although binary ratings-based CF algorithms are utilized, work remains to be done on comparing the performance of binary similarity measures. Moreover, the success of CF systems depends enormously on reliable and truthful data collected from many customers, which can only be achieved if individual users’ privacy is protected. In this study, we compare eight binary similarity measures in terms of accuracy while providing top-N recommendations, and we scrutinize how such measures perform in a privacy-preserving top-N recommendation process. We perform experiments based on real data. Our results show that the Dice and Jaccard measures provide the best outcomes.
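
For reference, the measures named above are all computed from the 2x2 agreement counts of two binary vectors: a (both 1), b and c (mismatches), and d (both 0). A small sketch of the two best-performing measures reported above:

    def agreement_counts(x, y):
        """x, y: equal-length sequences of 0/1 ratings."""
        a = sum(1 for i, j in zip(x, y) if i == 1 and j == 1)
        b = sum(1 for i, j in zip(x, y) if i == 1 and j == 0)
        c = sum(1 for i, j in zip(x, y) if i == 0 and j == 1)
        d = sum(1 for i, j in zip(x, y) if i == 0 and j == 0)
        return a, b, c, d

    def jaccard(x, y):
        a, b, c, _ = agreement_counts(x, y)
        return a / (a + b + c) if (a + b + c) else 0.0

    def dice(x, y):
        a, b, c, _ = agreement_counts(x, y)
        return 2 * a / (2 * a + b + c) if (a + b + c) else 0.0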

Paper Nr: 161
Title:

TOWARDS A FASTER SYMBOLIC AGGREGATE APPROXIMATION METHOD

Authors:

Muhammad Marwan Muhammad Fuad and Pierre-François Marteau

Abstract: The similarity search problem is one of the main problems in time series data mining. Traditionally, this problem was tackled by sequentially comparing the given query against all the time series in the database and returning all the time series within a predetermined threshold of that query. But the large size and high dimensionality of the time series databases in use nowadays make that scenario inefficient. There are many representation techniques that aim at reducing the dimensionality of time series so that the search can be handled faster in a lower-dimensional space. The symbolic aggregate approximation (SAX) is one of the most competitive methods in the literature. In this paper we present a new method that improves the performance of SAX by adding another exclusion condition that increases its exclusion power. The method is based on using two representations of the time series: one is SAX, and the other is based on an optimal approximation of the time series. Pre-computed distances are calculated and stored offline, to be used online to exclude a wide range of the search space by means of the two exclusion conditions. We conduct experiments showing that the new method is faster than SAX.
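
For orientation, plain SAX (the baseline being improved here, not the paper's new exclusion condition) z-normalizes a series, reduces it with piecewise aggregate approximation (PAA), and maps segment means to symbols via breakpoints that partition the standard normal distribution into equiprobable regions:

    import math

    BREAKPOINTS = [-0.6745, 0.0, 0.6745]   # N(0,1) quartiles, for alphabet size 4

    def sax(series, n_segments=8, alphabet="abcd"):
        """Assumes len(series) is a multiple of n_segments, for simplicity."""
        m = sum(series) / len(series)
        sd = math.sqrt(sum((v - m) ** 2 for v in series) / len(series)) or 1.0
        z = [(v - m) / sd for v in series]           # z-normalization
        seg = len(z) // n_segments
        word = []
        for i in range(n_segments):                  # PAA: mean of each segment
            mean = sum(z[i * seg:(i + 1) * seg]) / seg
            word.append(alphabet[sum(1 for bp in BREAKPOINTS if mean > bp)])
        return "".join(word)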

Paper Nr: 177
Title:

TRIOO - Keeping the Semantics of Data Safe and Sound into Object-oriented Software

Authors:

Sergio Fernández, Diego Berrueta, Miguel García Rodríguez and José E. Labra

Abstract: Data management is a key factor in any software effort. Traditional solutions, such as relational databases, are rapidly losing ground in the market to more flexible approaches and data models, because data stores built as monolithic components are not valid in many current scenarios. The World Wide Web Consortium proposes RDF as a suitable framework for modeling, describing and linking resources on the Web. Unfortunately, the current methods of accessing RDF data can be considered a kind of handcrafted work. The Trioo project therefore aims to provide powerful and flexible methods for accessing RDF datasets from object-oriented programming languages, allowing the use of this data without negative influences on object-oriented designs and trying to keep the semantics of the data as accurate as possible.

Posters
Paper Nr: 30
Title:

VASCULAR NETWORK SEMI-AUTOMATIC SEGMENTATION - Using Computed Tomography Angiography

Authors:

Petr Maule, Jiří Polívka and Jana Klečková

Abstract: The article describes a simple and straightforward method for vascular network segmentation in computed tomography examinations. The proposed method is shown step by step, with illustrations, on segmentation of the liver’s portal vein. We also describe a method of creating and exporting a mesh, and a simple way of visualizing it, which is possible even from a web browser. The method was developed to provide satisfactory results in a short time and is intended to be used as geometry input for mathematical models.

Area 5 - Knowledge-Based Systems

Full Papers
Paper Nr: 83
Title:

LEARNING DYNAMIC BAYESIAN NETWORKS WITH THE TOM4L PROCESS

Authors:

Ahmad Ahdab and Marc Le Goc

Abstract: This paper addresses the problem of learning a Dynamic Bayesian Network from timed data without prior knowledge of the system. One of the main problems in learning a Dynamic Bayesian Network is building and orienting the edges of the network while avoiding loops. The problem is more difficult when the data are timed. This paper proposes a new algorithm to learn the structure of a Dynamic Bayesian Network and to orient the edges from the timed data contained in a given timed database. The algorithm is based on an adequate representation of a set of sequences of timed data and uses an information-based measure of the relations between two edges. It is part of the Timed Observation Mining for Learning (TOM4L) process, which is based on the Theory of Timed Observations. The paper illustrates the algorithm with a theoretical example before presenting the results of an application to the Apache system of the Arcelor-Mittal Steel Group, a real-world knowledge based system that diagnoses a galvanization bath.

Paper Nr: 100
Title:

MALAPROPISMS DETECTION AND CORRECTION USING A PARONYMS DICTIONARY, A SEARCH ENGINE AND WORDNET

Authors:

Valentin Cojocaru, Costin-Gabriel Chiru, Stefan Trausan-Matu and Traian Rebedea

Abstract: This paper presents a method for the automatic detection and correction of malapropism errors found in documents using the WordNet lexical database, a search engine (Google) and a paronyms dictionary. The malapropisms detection is based on the evaluation of the cohesion of the local context using the search engine, while the correction is done using the whole text cohesion evaluated in terms of lexical chains built using the linguistic ontology. The correction candidates, which are taken from the paronyms dictionary, are evaluated versus the local and the whole text cohesion in order to find the best candidate that is chosen for replacement. The testing methods of the application are presented, along with the obtained results.

Paper Nr: 118
Title:

A STUDY ON ALIGNING DOCUMENTS USING THE CIRCLE OF INTEREST TECHNIQUE

Authors:

Daniel Joseph and César A. Marín

Abstract: In this paper we present a study on applying a technique called Circle of Interest, along with Formal Concept Analysis and Rough Set Theory, to semantically align documents such as those found in a business domain. Indeed, when companies try to engage in business, it becomes crucial to preserve the semantics of the information they exchange, usually known as a business document. Typical approaches are impractical or costly to implement. In contrast, we use the concepts and relationships discovered within an exchanged business document to automatically find an alignment to a local interpretation known as a document type. We present experimental results on applying Formal Concept Analysis as the ontological representation of documents, the Circle of Interest for selecting the most relevant document types to choose from, and Rough Set Theory for discerning among them. The results on a set of business documents show the feasibility of our approach and its direct application to a business domain.

Paper Nr: 132
Title:

NLU METHODOLOGIES FOR CAPTURING NON-REDUNDANT INFORMATION FROM MULTI-DOCUMENTS - A Survey

Authors:

Michael T. Mills and Nikolaos G. Bourbakis

Abstract: This paper provides a comparative survey of natural language understanding (NLU) methodologies for capturing non-redundant information from multiple documents. The aim of these methodologies is to generate a text output with reduced information redundancy and increased information coverage. The purpose of this paper is to inform the reader which methodologies exist and what their features are, based on evaluation criteria selected by users. Tables of comparison at the end of the survey provide a quick overview of these technical attribute indicators, abstracted from the information available in the publications.

Paper Nr: 138
Title:

GENETIC HEURISTICS FOR REDUCING MEMORY ENERGY CONSUMPTION IN EMBEDDED SYSTEMS

Authors:

Maha Idrissi Aouad, René Schott and Olivier Zendra

Abstract: Nowadays, reducing memory energy has become one of the top priorities of many embedded systems designers. Given the power, cost, performance and real-time advantages of Scratch-Pad Memories (SPMs), it is not surprising that the SPM is becoming a common form of SRAM in embedded processors today. In this paper, we focus on heuristic methods for careful SPM management in order to reduce memory energy consumption. We propose Genetic Heuristics for memory management which are, to the best of our knowledge, new, original alternatives to the best known existing heuristic (BEH). Our Genetic Heuristics outperform BEH: experiments performed on our benchmarks show that they consume from 76.23% up to 98.92% less energy than BEH in different configurations. In addition, they are easy to implement and do not require list sorting (contrary to BEH).
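
As a rough illustration of the flavour of such heuristics (our toy GA, not the paper's Genetic Heuristics), SPM allocation can be cast as choosing the subset of memory objects that maximizes energy gain under the SPM capacity constraint:

    import random

    def ga_spm_allocation(gain, size, spm_capacity, pop=40, gens=200):
        """gain[i]/size[i]: energy gain and size of memory object i.
        Returns a 0/1 vector selecting the objects to place in the SPM."""
        n = len(gain)
        def fitness(bits):
            s = sum(sz for b, sz in zip(bits, size) if b)
            return sum(g for b, g in zip(bits, gain) if b) if s <= spm_capacity else 0.0
        population = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop // 2]          # elitist selection
            children = []
            while len(children) < pop - len(survivors):
                p1, p2 = random.sample(survivors, 2)
                cut = random.randrange(1, n)
                child = p1[:cut] + p2[cut:]            # one-point crossover
                child[random.randrange(n)] ^= 1        # point mutation
                children.append(child)
            population = survivors + children
        return max(population, key=fitness)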

Paper Nr: 160
Title:

METAMODEL-BASED DECISION SUPPORT SYSTEM FOR DISASTER MANAGEMENT

Authors:

Siti Hajar Othman and Ghassan Beydoun

Abstract: Software model developers generally use a general-purpose language such as the Unified Modelling Language (UML) to model their domain applications. But when the models they create do not fit the modelling needs as well as they desire, a more specific domain modelling language offers a better alternative. In this paper, we create a Disaster Management (DM) metamodel that can be used to create a disaster management language. It will serve as a representational layer of DM expertise, leading to a DM decision support system based on combining and matching different DM activities according to the disaster at hand. The creation process of the metamodel is presented, leading to the synthesis of an initial metamodel as the main component of a decision support system to unify, facilitate and expedite access to DM expertise.

Paper Nr: 232
Title:

FACET AND PRISM BASED MODEL FOR PEDAGOGICAL INDEXATION OF TEXTS FOR LANGUAGE LEARNING - The Consequences of the Notion of Pedagogical Context

Authors:

Mathieu Loiseau, Georges Antoniadis and Claude Ponton

Abstract: In this article, we discuss the problem of pedagogical indexation of texts for language learning and address it under the scope of the notion of “pedagogical context”. This prompts us to propose a new version of a model based on a couple formed by two entities: prisms and facets. We first evoke the importance of material selection in the task of planning a language class, in order to introduce our view of Yinger’s model of planning applied to language teachers’ searches for texts. This is closely intermingled with the elaboration of the notion of pedagogical context, from which our model stems. This version, though in some ways similar to our first attempt, provides sounder notions on which to build.

Short Papers
Paper Nr: 37
Title:

EFFECTS OF EXPERT SYSTEMS IN COMPUTER BASED SUPPORT FOR CMMI IMPLEMENTATIONS

Authors:

Ercan Öztemel and Nilgün Gökmen

Abstract: Computing systems are becoming more complex in very dynamic and uncertain situations. Due to this complexity, the importance of process-focused quality approaches is increasing. Capability Maturity Model Integration (CMMI) standards and implementation practices were developed to simplify software project management and to assure the expected quality of the respective software. Realizing CMMI systems and building and monitoring the implementation practices require extensive knowledge and experience. Organizations acquire these mainly through consultants, which may become too costly in most cases. Although some computer-based support tools are available in the market, they still require human experts to justify the related artifacts. In this study, a knowledge-based assistant system called “CMMI Assistant” is introduced. The main aim of this tool is to support CMMI implementations by utilising expert system methodology.

Paper Nr: 65
Title:

APPROXIMATE REASONING BASED ON LINGUISTIC MODIFIERS IN A LEARNING SYSTEM

Authors:

Saoussen Bel Hadj Kacem, Amel Borgi and Moncef Tagina

Abstract: Approximate reasoning, initially introduced in the context of fuzzy logic, allows reasoning with imperfect knowledge. In previous work we proposed an approximate reasoning method based on linguistic modifiers in a symbolic context. To apply such reasoning, a rule base is needed. In this paper we propose to use a supervised learning system named SUCRAGE, which automatically generates multi-valued classification rules. Our reasoning is used with this rule base to classify new objects. Experimental tests and a comparative study with the two initial reasoning modes of SUCRAGE are presented. This application of approximate reasoning based on linguistic modifiers gives satisfactory results. Moreover, it provides a comfortable linguistic interpretation to the human mind, thanks to the use of linguistic modifiers.

Paper Nr: 101
Title:

FILLING THE GAPS USING GOOGLE 5-GRAMS CORPUS

Authors:

Costin-Gabriel Chiru, Andrei Hanganu, Traian Rebedea and Stefan Trausan-Matu

Abstract: In this paper we present a text recovery method based on probabilistic post-recognition processing of the output of an Optical Character Recognition system. The proposed method attempts to fill in the gaps of missing text that result from the recognition of degraded documents. For this task, a corpus of up to 5-grams provided by Google is used. After presenting the general problem and alternative solutions, we describe several heuristics for using this corpus to accomplish the task. These heuristics have been validated through a set of experiments, which are discussed together with the results obtained.
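
A minimal sketch of the gap-filling step (our illustration; the paper's heuristics and scoring are richer), where `count` stands for a hypothetical lookup into the Google 5-gram counts:

    def fill_gap(left, right, candidates, count):
        """Pick the candidate word whose surrounding 5-gram is most frequent.
        left/right: lists of recognized words around the gap;
        count: hypothetical function mapping a 5-gram tuple to its corpus count."""
        best, best_score = None, -1
        for w in candidates:
            gram = tuple(left[-2:]) + (w,) + tuple(right[:2])   # centre the gap
            score = count(gram)
            if score > best_score:
                best, best_score = w, score
        return best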

Paper Nr: 152
Title:

A GREEN DECISION SUPPORT SYSTEM FOR INTEGRATED ASSEMBLY AND DISASSEMBLY SEQUENCE PLANNING USING A PSO APPROACH

Authors:

Yuan-Jye Tseng, Fang-Yu Yu and Feng-Yi Huang

Abstract: A green decision support system is presented to integrate assembly and disassembly sequence planning and to evaluate the two costs in one integrated model. In a green product life cycle, it is important to determine how a product can be disassembled before the product is planned to be assembled. For an assembled product, an assembly sequence planning model is required for assembling the product at the start, whereas a disassembly sequence planning model is needed for disassembling the product at the end. In typical approaches, the two sequences and their costs are planned and evaluated independently. In this research, a new integrated model is presented to concurrently generate and evaluate the assembly and disassembly sequences. First, graph-based models are presented for representing feasible assembly and disassembly sequences. Next, a particle swarm optimization (PSO) method with a new encoding scheme is developed, in which a particle is represented by a position matrix defining an assembly sequence and a disassembly sequence. The assembly and disassembly sequences can be planned simultaneously with the objective of minimizing the total assembly and disassembly costs. The test results show that the presented method is feasible and efficient for solving the integrated assembly and disassembly sequence planning problem. An example product is implemented and illustrated in this paper.

Paper Nr: 168
Title:

MINING TIMED SEQUENCES TO FIND SIGNATURES

Authors:

Nabil Benayadi and Marc Le Goc

Abstract: We introduce the problem of mining sequential patterns among timed messages in large databases of sequences using a stochastic approach. An example of the patterns we are interested in is: in 50% of cases, an engine stop in the car happens between 0 and 2 minutes after a lack of gas in the engine is observed, which itself occurs between 0 and 1 minutes after the fuel tank is empty. We call these patterns “signatures”. Previous research has considered equivalent patterns, but such work has three main problems: (1) the sensitivity of the algorithms to the values of their parameters, (2) the too large number of discovered patterns, and (3) the discovered patterns consider only the “after” relation (succession in time) and omit temporal constraints between pattern elements. To address these issues, we present the TOM4L process (Timed Observations Mining for Learning), which uses a stochastic representation of a given set of sequences, on which inductive reasoning coupled with abductive reasoning is applied to reduce the search space. A very simple example is used to show the efficiency of the TOM4L process against other approaches from the literature.
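
As a minimal illustration of the shape of such a signature (not of the TOM4L mining algorithm itself), the sketch below encodes the engine-stop example as a chain of events with time-window constraints between consecutive elements and checks a timed sequence against it; data and names are invented.

    # A "signature" here is a chain of (event, min_delay, max_delay)
    # steps: each event must occur within [min_delay, max_delay] minutes
    # of the previous one. This only illustrates the pattern shape;
    # TOM4L mines such signatures from large sequence databases.
    signature = [("tank_empty",  0, 0),   # anchor event
                 ("gas_lack",    0, 1),   # 0-1 min after tank_empty
                 ("engine_stop", 0, 2)]   # 0-2 min after gas_lack

    def matches(sequence, signature):
        """sequence: time-ordered list of (timestamp_minutes, event)."""
        prev_time, idx = None, 0
        for t, event in sequence:
            want, lo, hi = signature[idx]
            if event != want:
                continue
            if prev_time is not None and not (lo <= t - prev_time <= hi):
                continue
            prev_time, idx = t, idx + 1
            if idx == len(signature):
                return True
        return False

    print(matches([(0.0, "tank_empty"), (0.5, "gas_lack"),
                   (1.8, "engine_stop")], signature))   # True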

Paper Nr: 169
Title:

SEAMLESS SOFTWARE DEVELOPMENT FOR SYSTEMS BASED ON BAYESIAN NETWORKS - An Agricultural Pest Control System Example

Authors:

Isabel María del Águila, José del Sagrado, Samuel Túnez and Francisco Javier Orellana

Abstract: This work presents a specific solution for the development of software systems that embed both knowledge-based and non-knowledge-based functionalities, concerning the decision support process and the information management processes, respectively. When constructing a knowledge model, the processes performed mainly focus on describing the steps necessary to build it. Usually, approaches concentrate on adapting the software engineering life cycle to develop a knowledge model and neglect the problem of integrating it into the final software system. We propose a process model that allows the development of software systems that use a Bayesian network as knowledge model. In order to show how to apply our software process model, we include a partial view of the development process of a knowledge-based system for decision making in an agricultural domain, specifically pest control in a given crop.

Paper Nr: 202
Title:

EVALUATING AN INTELLIGENT COLLABORATIVE LEARNING ENVIRONMENT FOR UML

Authors:

Kalliopi Tourtoglou and Maria Virvou

Abstract: In this paper, we present an evaluation experiment of AUTO-COLLEAGUE conducted at the University of Piraeus. AUTO-COLLEAGUE is a collaborative learning environment for UML. Students are organized into groups and supported with a chat system to collaborate with each other. The system builds integrated individual student models aiming at suggesting optimum groups of learners. These optimum groups allow the trainer to organize the students in the most effective way as far as their performance is concerned. In other words, the strengths and weaknesses of the students are blended for the benefit of both the individuals and the groups. The student models concern the level of expertise and specific personality characteristics of the students. The results of the evaluation were quite encouraging, as they indicated better individual performance of the students.

Paper Nr: 246
Title:

THE REALIS MODEL OF HUMAN INTERPRETERS AND ITS APPLICATION IN COMPUTATIONAL LINGUISTICS

Authors:

Gábor Alberti, Márton Károly and Judit Kleiber

Abstract: As we strive for sophisticated machine translation and reliable information extraction, we have launched a subproject on modelling human interpreters. The model is based on ReALIS, a new “post-Montagovian” discourse-semantic theory concerning the formal interpretation of sentences constituting coherent discourses, with a lifelong model of the lexical, interpersonal and cultural/encyclopedic knowledge of interpreters, including their reciprocal knowledge of each other, at its centre. After introducing ReALIS, we provide linguistic data in order to show that intelligent language processing requires a realistic model of human interpreters. We then lay down some principles of the implementation (in progress) and demonstrate how to apply our model in computational linguistics.

Paper Nr: 260
Title:

A SEMANTIC SEARCH ENGINE FOR A BUSINESS NETWORK - A Personalized Vision of the Web applied to a Business Network

Authors:

Angioni Manuela, Emanuela De Vita, Lai Cristian, Marcialis Ivan, Paddeu Gavino and Tuveri Franco

Abstract: The Web’s evolution during the last few years shows that the advantages from the users’ point of view are not so evident. Although information is still the primary element, the need to redefine the information paradigm is ever more apparent, so that the net and the information can become truly user-centric through an inverse process that brings the information to the user rather than the user to the information. New tools are needed to create a privileged window of observation on information and knowledge: each user with his specific interests. No longer a single available space of information, but data shared by everyone. What each user needs is a specific private space of information according to his point of view, his way of classifying and managing information, related to his network of contacts and to the way each person chooses to experience the Web, the net and knowledge. In this paper we illustrate part of a project named A Semantic Search Engine for a Business Network, in which the introduction of natural language processing, user profiling and automatic information classification according to users’ personal schemas will contribute to redefining the vision of information and to delineating processes of human-machine interaction.

Posters
Paper Nr: 8
Title:

ONTOLOGYJAM - A Tool for Ontology Reuse

Authors:

Luis Fernando Piasseski, Cesar Augusto Tacla and Milton Borsato

Abstract: There has been notable growth in the use of ontologies in knowledge management. This is because, with ontologies, knowledge is shared and reused efficiently and clearly among all resources, whether a person or an application. However, for ontologies to establish confidence within an extremely competitive and flexible market, they must be created swiftly and with high credibility, portability and scalability. Yet there is a noted lack of tools to aid knowledge specialists in the construction of a new ontology. For this purpose, this article presents a tool that allows searching for concepts among the knowledge entities represented in an ontology, through the import of multiple ontologies. The selected knowledge can then be exported into a brand new ontology, supporting knowledge reuse with the aim of extending an ontology so as to make it adequate to its application.

Paper Nr: 51
Title:

INTEGRATION OF APRIORI ALGORITHMS WITH CASE-BASED REASONING FOR FLIGHT ACCIDENT INVESTIGATION

Authors:

Nan-Hsing Chiu, Pei-Da Lin and Chang En Pu

Abstract: The analysis of flight accidents has been demonstrated to be a crucial tool for improving flight safety. The utilization of visual decision support systems potentially assists investigators in quickly and accurately identifying the underlying causes of accidents. This study aims at supplying a visual decision support system, based on the Apriori and case-based reasoning approaches, for assisting investigators in analyzing human injuries in flight accidents. We demonstrate our approach using the aircraft configuration of flight CI611. The experimental results show that the proposed approach provides support for quick decisions by investigators on the basis of a visualization system.
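
The abstract does not detail the mining step; purely as a generic illustration of the Apriori idea on accident records, the sketch below counts attribute pairs that co-occur in at least a minimum number of records. Fields and data are hypothetical, not taken from the CI611 investigation.

    from itertools import combinations
    from collections import Counter

    # Toy Apriori-style pass: find attribute pairs that co-occur in at
    # least `min_support` accident records. Records are hypothetical.
    records = [
        {"seat=front", "injury=severe", "belt=no"},
        {"seat=front", "injury=severe", "belt=yes"},
        {"seat=rear",  "injury=minor",  "belt=yes"},
        {"seat=front", "injury=severe", "belt=no"},
    ]
    min_support = 3

    pair_counts = Counter()
    for r in records:
        for pair in combinations(sorted(r), 2):
            pair_counts[pair] += 1

    frequent = {p: c for p, c in pair_counts.items() if c >= min_support}
    print(frequent)   # {('injury=severe', 'seat=front'): 3}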

Paper Nr: 72
Title:

A FRAMEWORK FOR INFORMATION DIFFUSION OVER SOCIAL NETWORKS RESEARCH - Outlining Options and Challenges

Authors:

Juan Yao and Markus Helfert

Abstract: Information diffusion is a phenomenon in which new ideas or behaviours spread contagiously through social networks in the style of an epidemic. Recently, researchers have contributed a plethora of studies, approaches and theoretical contributions related to various aspects of the diffusion phenomenon. There are many options and approaches, yet research articles that consolidate and review them are rare. In this paper, we contribute an overview of the most prominent approaches to the study of the diffusion phenomenon. We present a framework and research overview for this area. Our framework can assist researchers and practitioners in identifying suitable solutions and understanding the challenges in research on information diffusion over social networks.

Paper Nr: 105
Title:

CHALLENGES IN DISTRIBUTED INFORMATION SEARCH IN A SEMANTIC DIGITAL LIBRARY

Authors:

Antonio Martín and Carlos León

Abstract: Nowadays an enormous quantity of heterogeneous and distributed information is stored in digital libraries. Access to these collections poses a serious challenge, however, because present search techniques, based on manually annotated metadata and linear replay of material selected by the user, do not scale effectively or efficiently to large collections. Artificial intelligence and the Semantic Web provide a common framework that allows knowledge to be shared and reused. In this paper we propose a comprehensive approach for discovering information objects in large digital collections, based on the analysis of semantic metadata recorded in those objects and the application of expert system technologies. We suggest a conceptual architecture for a semantic and intelligent search engine. OntoFAMA is a collaborative effort that proposes a new form of interaction between people and the digital library, where the latter is adapted to individuals and their surroundings. We have used Case-Based Reasoning methodology to develop a prototype supporting efficient knowledge retrieval from the digital library of Seville University.

Paper Nr: 154
Title:

A KNOWLEDGE SHARING SYSTEM FOR SOFTWARE DEVELOPERS

Authors:

Takayuki Shibata, Kazuyuki Nakamura, Takanobu Sato and Rentaro Yoshioka

Abstract: Knowledge sharing is a key factor in increasing the productivity of programmers and in maintaining the quality of programs in companies. However, programmers tend to resort to outside resources to solve their problems. This paper proposes a system to facilitate active sharing of program-related knowledge among a group of programmers in a company. The system introduces a flexible unit to define the target knowledge, a set of function tags to describe its functionality from a programming point of view, and a set of project tags to describe its environmental aspects. We illustrate the rigid structure and classification of the tags and show, with a few examples, how this approach can decrease the workload of programmers in registering and retrieving knowledge. In addition, simple evaluation tests have been performed with an experimental implementation of the proposed system.

Paper Nr: 155
Title:

TIMED OBSERVATIONS MODELLING FOR DIAGNOSIS METHODOLOGY - A Case Study

Authors:

Laura Pomponio and Marc Le Goc

Abstract: The TOM4D methodology is based on constructing models at the same level of abstraction that experts use to diagnose a process; thus, the resulting models are simpler and more abstract, allowing more efficient diagnosis. To this end, the CommonKADS framework, used to interpret and organize the available expert knowledge, is combined with a multi-modelling approach to describe that knowledge. This paper complements previous work on TOM4D by introducing the combined use of Formal Logic and the Tetrahedron of States in order to build models more suitable for the diagnosis task. Formal Logic provides a logical interpretation of the expert’s reasoning, while the Tetrahedron of States provides a physical interpretation of the process variables and allows physically impossible states to be excluded from the logical model.

Paper Nr: 157
Title:

KNOWBENCH - A Semantic User Interface for Managing Knowledge in Software Development

Authors:

Dimitris Panagiotou and Gregoris Mentzas

Abstract: Modern software development consists of typically knowledge-intensive tasks, in the sense that software developers must create and share new knowledge during their daily work. In this paper we propose KnowBench, a knowledge management system integrated into the Eclipse IDE that supports developers during the software development process in producing better-quality software. The goal of KnowBench is to support the whole knowledge management process when developers design and implement software, by supporting the identification, acquisition, development, distribution, preservation and use of knowledge – the building blocks of a knowledge management system.

Paper Nr: 175
Title:

APPLICATIONS OF EXPERT SYSTEM TECHNOLOGY IN THE ATLAS TDAQ CONTROLS FRAMEWORK

Authors:

Alina Corso-Radu, Raul Murillo Garcia, Andrei Kazarov, Giovanna Lehmann Miotto, Luca Magnoni and John Erik Sloper

Abstract: The ATLAS Trigger-DAQ system is composed of O(10000) applications running on ~1500 computers distributed over a network. To maximise the experiment's run efficiency, the Trigger-DAQ control system includes advanced verification, diagnostics and complex dynamic error-recovery tools based on an expert system. The error recovery (ER) system is responsible for analysing and recovering from a variety of errors, both software and hardware, without stopping the data-gathering operations. The verification framework allows users to develop and configure tests for any component in the system, with different levels of complexity. It can be used as a standalone test facility during the general TDAQ initialization procedure and for diagnosing problems which may occur at run time. A key role in both the recovery and verification frameworks is played by the rule-based expert system (also known as a knowledge-based system), which analyses errors and decides on appropriate recovery actions. The system is composed of a dynamic set of rules that describe the TDAQ system behaviour and of an inference engine that decides which actions to perform. The system is currently used on a daily basis for the operation of the ATLAS experiment. The paper describes the architecture and implementation of the TDAQ error-recovery system and verification framework, with emphasis on the latest developments and the experience gained over the first LHC beam runs.
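
As a hedged sketch of the rules-plus-inference-engine split described above (not the actual ATLAS implementation, which uses a dedicated expert-system engine), the snippet below keeps a dynamic rule set apart from a small forward-chaining loop that maps observed error facts to recovery actions; rules and facts are invented.

    # Minimal forward-chaining sketch: rules map observed error facts to
    # recovery actions, and firing an action may assert new facts.
    rules = [
        ({"app_crashed"},              "restart_application", {"app_restarted"}),
        ({"app_restarted", "no_data"}, "reconfigure_readout", {"readout_ok"}),
    ]

    def recover(facts):
        actions, fired = [], True
        while fired:
            fired = False
            for conditions, action, effects in rules:
                if conditions <= facts and action not in actions:
                    actions.append(action)   # decide on a recovery action
                    facts |= effects         # its effects become new facts
                    fired = True
        return actions

    print(recover({"app_crashed", "no_data"}))
    # -> ['restart_application', 'reconfigure_readout']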

ACT4SOC 2010 Abstracts

Full Papers
Paper Nr: 3
Title:

Specifying Formal Executable Behavioral Models for Structural Models of Service-oriented Components

Authors:

Elvinia Riccobene and Patrizia Scandurra

Abstract: This paper presents a behavioral formalism based on the Abstract State Machine (ASM) formal method and intended for high-level, platform-independent, executable specification of service-oriented components. We complement the recent Service Component Architecture -- a graphical notation able to capture the overall structure and the components' structure -- with an ASM-based formalism able to describe the workflow of the service orchestration and the services' internal behavior. The resulting service-oriented component model provides an ASM-based representation of both the structural and behavioral aspects of service-oriented systems, such as service interactions, service orchestration, service tasks and compensation. The ASM formal description of a service-oriented system is suitable for rigorous, execution-platform-independent analysis.

Paper Nr: 5
Title:

Semi-automatic Dependency Model Creation based on Process Descriptions and SLAs

Authors:

Matthias Winkler, Thomas Springer, Edmundo David Trigos and Alexander Schill

Abstract: In complex service-oriented business processes, the composed services depend on other services to contribute to the common goal. These dependencies have to be considered when service compositions are changed. Information about dependencies is only implicitly available from service level agreements and process descriptions. In this paper we present a semi-automatic approach for analyzing service dependencies and capturing information about them explicitly in a dependency model. Furthermore, we describe a system architecture which covers the whole process of dependency analysis, dependency model creation and provisioning. It has been implemented based on a healthcare scenario.

Paper Nr: 12
Title:

An Evaluation of Dynamic Web Service Composition Approaches

Authors:

Ravi Khadka and Brahmananda Sapkota

Abstract: Web service composition has received much interest from both academic researchers and industry to support cross-enterprise application integration. Promising research projects and their prototypes are being developed. At the same time, the web service environment is getting more dynamic, as numerous web services are being published by service providers on the Internet. To meet users' requirements regarding on-demand delivery of customized services, dynamic web service composition approaches have emerged. However, many composition issues still have to be overcome, such as dynamic discovery of services, compositional correctness, and transactional support. In this paper we discuss some of these issues and then investigate some representative dynamic web service composition approaches. We evaluate those approaches on the basis of the issues and show how future research can benefit from addressing them.

Paper Nr: 19
Title:

A Diffusion Mechanism for Online Advertising Service Over Social Media

Authors:

Yung-Ming Li and Ya-Lin Shiu

Abstract: Social media has increasingly become a popular platform for diffusing information through the message sharing of numerous participants in a social network. Recently, companies have attempted to utilize social media to expose their advertisements to appropriate customers. The success of message propagation in social media highly depends on content relevance and the closeness of social relationships. In this paper, considering the factors of user preference, network influence, and propagation capability, we propose a social diffusion mechanism to discover appropriate and influential endorsers in the social network to deliver relevant advertisements broadly. The proposed mechanism is implemented and verified in one of the most famous micro-blogging systems, Plurk. Our experimental results show that the proposed model can efficiently enhance advertising exposure coverage and effectiveness.
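
The paper's scoring model is not given in the abstract; a minimal sketch of combining the three stated factors might rank candidate endorsers by the product of their preference, influence and propagation scores, as below (the scores and the multiplicative combination are assumptions, not the paper's mechanism).

    # Rank candidate endorsers for an ad by combining the three factors
    # named in the abstract; values are invented for illustration.
    candidates = {
        # user: (preference for ad topic, network influence, propagation)
        "alice": (0.9, 0.4, 0.7),
        "bob":   (0.5, 0.9, 0.8),
        "carol": (0.2, 0.9, 0.9),
    }

    def rank_endorsers(candidates, top_k=2):
        scored = {u: p * i * c for u, (p, i, c) in candidates.items()}
        return sorted(scored, key=scored.get, reverse=True)[:top_k]

    print(rank_endorsers(candidates))   # ['bob', 'alice']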

Paper Nr: 21
Title:

Enabling Publish / Subscribe with COTS Web Services across Heterogeneous Networks

Authors:

Espen Skjervold, Trude Hafsøe, Frank T. Johnsen and Ketil Lund

Abstract: In scenarios such as search-and-rescue operations, it may be required to transmit information across multiple, heterogeneous networks, often experiencing unreliable connections and limited bandwidth. Typically, there will be traffic within and across radio networks, as well as back to a central infrastructure (e.g., a police command post) when a reach-back link is available. This implies that using Publish/Subscribe is advantageous in order to reduce network traffic, and that store-and-forward capabilities are required to handle the instability of radio networks. At the same time, it is desirable to use commercial software based on standards as far as possible, in order to reduce cost and development time, and to ease interconnection of systems from different organizations. We therefore propose using SOA based on Web services in such scenarios. Admittedly, Web services are targeted at stable, high-speed networks, but our work shows that such usage is feasible. In this paper, we add Publish/Subscribe functionality to standard, unmodified Web services through the use of our prototype middleware solution called the Delay and Disruption Tolerant SOAP Proxy (DSProxy). In addition to the ability to make Web services delay and disruption tolerant, the DSProxy enables SOAs in scenarios as described above. The DSProxy has been tested in field trials, with promising results.
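
A minimal sketch of the store-and-forward idea (not the DSProxy itself, which additionally handles subscriptions and standard SOAP proxying): messages are queued while the link is down and delivered in order when connectivity returns. All names are illustrative.

    from collections import deque

    # Store-and-forward sketch: messages are queued while the link is
    # down and delivered in order once it comes back up.
    class StoreAndForwardProxy:
        def __init__(self, send):
            self.send = send          # callable that performs delivery
            self.queue = deque()
            self.link_up = False

        def submit(self, message):
            self.queue.append(message)
            self.flush()

        def set_link(self, up):
            self.link_up = up
            self.flush()

        def flush(self):
            while self.link_up and self.queue:
                self.send(self.queue.popleft())

    proxy = StoreAndForwardProxy(send=lambda m: print("delivered:", m))
    proxy.submit("<soap:Envelope>report 1</soap:Envelope>")  # queued: link down
    proxy.set_link(True)                                     # flushes the queue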

Paper Nr: 22
Title:

Enterprise Interoperability Ontology for SOC applied to Logistics

Authors:

Wout Hofman

Abstract: The Service Oriented Architecture (SOA, [1]) can be applied to enterprise integration. It creates an Internet of services for enterprises. Service science [2] defines services in terms of value propositions of enterprises to customers. Both service science and the Internet of services require a form of mediation between customer requirements and service capabilities, as identified for instance in [3]. However, mediation is not yet automated and thus cannot be applied in real time. Furthermore, many business areas already have agreements on semantics and interaction sequencing based on existing business documents; thus, mediation is not always required in a given business area. This paper presents an ontology to support business services, expressed in the Web Ontology Language (OWL) [18]. The concepts shared at the business level are based on existing approaches like Resource-Event-Agent, used for auditing and control [4], and build upon service frameworks like OWL-S [13]. The ontology will be specialized to logistics and compared with other approaches that might be applied to enterprise integration.

Paper Nr: 24
Title:

Service Tailoring: Towards Personalized Homecare Services

Authors:

Mohammad Zarifi Eslami, Alireza Zarghami, Brahmananda Sapkota and Marten van Sinderen

Abstract: Health monitoring and healthcare provisioning for the elderly at home have received increasing attention. Since each elderly person is unique, with a unique lifestyle, living environment and health condition, personalization is an essential feature of homecare software services. Service tailoring, i.e., creating a new service to meet individual requirements, may be achieved in a cost-effective and time-efficient manner if new services can be configured and composed from already existing services. In this paper, we propose an effective service tailoring process and architecture to personalize homecare services according to the individual care-receiver's needs. In addition, we present a scenario to highlight the need for service tailoring and to demonstrate the feasibility of the proposed approach.

Paper Nr: 25
Title:

Optimizing Service Selection for Probabilistic QoS Attributes

Authors:

Ulrich Lampe, Dieter Schuller, Julian Eckert and Ralf Steinmetz

Abstract: The service selection problem (SSP) – i.e., choosing from sets of functionally equivalent services in order to fulfill certain business process steps based on non-functional requirements – has frequently been addressed in the literature under the assumption of deterministic values for the Quality of Service (QoS) attributes. However, deterministic values do not reflect the uncertainty about the actual value of an attribute during execution, thus ignoring the risk of QoS violations. In the paper at hand, a simulative step based on stochastic QoS attributes is performed as a complement to optimally solving the SSP using linear programming methods. With this two-step approach, uncertainties in the selected set of services can be explicitly revealed and addressed through repeated selection steps, thus allowing violations of QoS restrictions to be prevented much more effectively.
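
As an illustration of the simulative step (with invented distributions, not the paper's data), the sketch below estimates by Monte Carlo sampling the probability that a candidate selection violates an end-to-end response-time restriction, a risk that a deterministic check on mean values would miss.

    import random

    # Monte Carlo check of a QoS restriction for a sequentially composed
    # selection of services. Response times are drawn from invented
    # normal distributions; the means alone (450 ms) would suggest the
    # 500 ms bound is safe, hiding a substantial violation risk.
    random.seed(1)
    selection = [(200.0, 50.0), (150.0, 40.0), (100.0, 30.0)]  # (mean, std) ms
    limit_ms = 500.0

    def violation_probability(selection, limit, runs=10000):
        violations = 0
        for _ in range(runs):
            total = sum(max(0.0, random.gauss(mu, sd)) for mu, sd in selection)
            if total > limit:
                violations += 1
        return violations / runs

    print(f"mean total: {sum(mu for mu, _ in selection)} ms")   # 450.0 ms
    print(f"P(violation): {violation_probability(selection, limit_ms):.2%}")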

Paper Nr: 26
Title:

From i* Models to Service Oriented Architecture Models

Authors:

Carlos Becerra, Xavier Franch and Hernán Astudillo

Abstract: Requirements engineering and architectural design are key activities for the successful development of software systems. Specifically, in service-oriented systems development there is a gap between requirements description and architecture design and assessment. This article presents a process for systematically deriving service-oriented architectures from goal-oriented models. The process generates candidate architectures based on i* models and helps architects select a solution using service-oriented patterns at both the service and component levels. The process is exemplified by applying it to a system for metadata synthesis and learning-object assembly.

Paper Nr: 28
Title:

Model Checking Verification of Web Services Composition

Authors:

Abdallah Missaoui, Zohra Sbaï and Kamel Barkaoui

Abstract: Web services composition is becoming very important in today's service-oriented business environment. Different services frequently have semantic inconsistencies, which may lead to the failure of the composition. In order to verify the correctness of a Web services composition, we present a method for analyzing and verifying interactions among web services. We model web service composition based on a special class of Petri nets: open workflow nets. We translate this composition to Promela, the source language of the SPIN model checker, designed to describe communicating distributed systems. At the requirements level, model checking is used to validate the specification against a set of formulae specified in LTL, which are used to verify the satisfaction of constraints on the web services composition.
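
As a toy illustration of the kind of state-space check involved, the sketch below explores all reachable markings of a small workflow net and verifies that the final marking is reachable. This only stands in for the idea; the paper itself translates open workflow nets to Promela and checks LTL formulae with SPIN.

    # Toy workflow-net check: exhaustive exploration of reachable
    # markings to verify that the final marking can be reached.
    # Markings are sets of marked places (1-safe net assumed).
    transitions = {
        # name: (places consumed, places produced)
        "receive": ({"start"}, {"p1"}),
        "invoke":  ({"p1"},    {"p2"}),
        "reply":   ({"p2"},    {"end"}),
    }
    initial, final = frozenset({"start"}), frozenset({"end"})

    def reachable(initial):
        seen, frontier = {initial}, [initial]
        while frontier:
            marking = frontier.pop()
            for consumed, produced in transitions.values():
                if consumed <= marking:
                    new = frozenset((marking - consumed) | produced)
                    if new not in seen:
                        seen.add(new)
                        frontier.append(new)
        return seen

    print(final in reachable(initial))   # True: the composition can terminate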

DMIA 2010 Abstracts

Full Papers
Paper Nr: 3
Title:

A CASE STUDY - Classification of Stock Exchange News by Support Vector Machines

Authors:

P. Kroha, K. Kröber and R. Janetzko

Abstract: In this paper, we present a case study concerning the classification of text messages using Support Vector Machines. We collected about 700,000 news items and stated the hypothesis that when markets are going down, negative messages are in the majority, and when markets are going up, positive messages are in the majority. This hypothesis is based on the assumption of news-driven behavior of investors. To check it, we needed to classify the market news. We describe the application of Support Vector Machines for this purpose, including experiments that showed interesting results. We found that the news classification correlates in interesting ways with long-term market trends.
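
The abstract does not fix the feature representation; a minimal sketch in the same spirit, using TF-IDF features and a linear SVM from scikit-learn on a few invented headlines, is shown below.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Tiny SVM text-classification sketch. Headlines and labels are
    # invented; the real study used ~700,000 news items and a richer
    # experimental setup.
    headlines = [
        "company raises profit forecast after strong quarter",
        "shares surge on record earnings",
        "firm warns of losses and cuts dividend",
        "stock plunges after profit warning",
    ]
    labels = ["positive", "positive", "negative", "negative"]

    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(headlines, labels)
    print(model.predict(["earnings beat expectations, shares rally"]))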

Paper Nr: 5
Title:

MODEL-DRIVEN AD HOC DATA INTEGRATION IN THE CONTEXT OF A POPULATION-BASED CANCER REGISTRY

Authors:

Yvette Teiken, Martin Rohde and Hans-Jürgen Appelrath

Abstract: The major task of a population-based Cancer Registry (CR) is the identification of risk groups and factors. This analysis makes use of data about the social background of the population. The integration of that data is not part of the routine processes at the CR; therefore, it must be performed by data warehouse experts, which results in high costs. This paper proposes an approach which allows epidemiologists and physicians at the CR to realize this ad hoc data integration on their own. We use model-driven software design (MDSD) with a domain-specific language (DSL), which allows the epidemiologists and physicians to describe the data to be integrated in a language known to them. This description, or rather model, is used to create an extension of the existing data pool as well as a web service and web application for data integration. End users can perform the integration on their own, which results in a very cost-efficient way of ad hoc data integration.

Short Papers
Paper Nr: 2
Title:

OLAP FOR FINANCIAL ANALYSIS AND PLANNING - A Proof of Concept

Authors:

Eitel J. M. Lauría and Carlos A. Greco

Abstract: We describe the design of an in-memory OLAP model for financial planning, analysis and reporting at a medium-sized manufacturing company in South America. The architecture and data model design are explained within the context of the company's requirements and constraints.

Paper Nr: 4
Title:

PURPOSE-DRIVEN APPROACH FOR FLEXIBLE STRUCTURE-INDEPENDENT DATABASE DESIGN

Authors:

Youri I. Rogozov, Alexander S. Sviridov, Sergey A. Kutcherov and Wladimir Bodrow

Abstract: This paper presents a purpose-driven approach for the development of flexible databases, departing from the relational database concept. Based on an analysis of both relational and entity-attribute-value database models, the aspects essential for the described purpose-driven approach are defined. The requirements to be satisfied by structure-independent databases are derived and discussed in detail. Several implementations of structure-independent databases using the suggested approach have been realized and are presented as well. An improvement of the relational database model based on the proposed structure-independent database approach is formulated.

SDT 2010 Abstracts

Full Papers
Paper Nr: 4
Title:

CONSTRUCTING EVOLVABLE ENTERPRISE IMPLEMENTATIONS

Authors:

Philip Huysmans

Abstract: Contemporary organizations are operating in increasingly volatile environments. Hence, organizations must be agile in order to quickly adapt to changes in their environment. This may be a complex process, since a change to one organizational unit may affect other units. Given the increasing complexity of organizations, it has been argued that organizations should be purposefully designed. Enterprise architecture frameworks provide guidance for the design of organizational structures. Unfortunately, current enterprise architecture frameworks have a descriptive rather than a prescriptive nature and do not seem to have a strong theoretical foundation. In the software engineering literature, the Normalized Systems approach has recently been proposed to provide such deterministic design principles for the modular structure of software. The Normalized Systems approach is based on the systems-theoretic concept of stability to ensure the evolvability of information systems. In our PhD research, we explore the feasibility of extending the NS design principles to the field of enterprise architecture. Our results show that such an approach is feasible and illustrate how the systems-theoretic concept of stability can be used at the organizational level.

Paper Nr: 5
Title:

OPTIMIZING QOS-BASED SERVICE SELECTION IN SERVICE-ORIENTED ARCHITECTURES

Authors:

Dieter Schuller

Abstract: In Service-oriented Architectures, services can be composed in a loosely coupled manner to realize business processes. These services are not necessarily located within the borders of one's own enterprise. In the Internet of Services, multiple service providers offer various services on several service marketplaces. When services with comparable functionality but varying quality levels are available at different costs, service requesters can decide which services from which service providers to select. My research focuses on this service selection problem for complex workflows, formulating a linear optimization problem which can be solved optimally using linear programming techniques. As the actual execution path is probably not known at planning time (e.g., for conditional branches), a worst-case and an average-case analysis are performed. In addition to considering non-functional, quantitative service properties, I am working towards integrating qualitative service features (such as security) for different, complex workflow patterns.
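
A minimal sketch of how such a selection problem is typically written as a binary linear program, with invented symbols: x_ij selects service j for process step i, c_ij is its cost, r_ij its response time, and R an end-to-end bound; sequential execution with additive response times is assumed.

    \min \sum_{i}\sum_{j} c_{ij}\, x_{ij}
    \quad \text{s.t.} \quad
    \sum_{j} x_{ij} = 1 \;\; \forall i, \qquad
    \sum_{i}\sum_{j} r_{ij}\, x_{ij} \le R, \qquad
    x_{ij} \in \{0,1\}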