CONFERENCE
Area 1 - Programming Languages  
Area 2 - Software Engineering  
Area 3 - Distributed and Parallel Systems  
Area 4 - Information Systems and Data Management  
Area 5 - Knowledge Engineering  

 
WORKSHOP
Workshop on Architectures, Concepts and Technologies for Service Oriented Computing (ACT4SOC)  
SPECIAL SESSION  
Metamodelling – Utilization in Software Engineering (MUSE)

 
Area 1 - Programming Languages  
Title:  
A LANGUAGE FOR SPECIFYING INFORMATIONAL GRAPHICS FROM FIRST PRINCIPLES
Author(s):  
Stuart M. Shieber and Wendy Lucas
Abstract:  
Information visualization tools, such as commercial charting packages, provide a standard set of visualizations for tabular data, including bar charts, scatter plots, pie charts, and the like. For some combinations of data and task, these are suitable visualizations. For others, however, novel visualizations over multiple variables would be preferred but are unavailable in the fixed list of standard options. To allow for these cases, we introduce a declarative language for specifying visualizations on the basis of the first principles on which (a subset of) informational graphics are built. The functionality we aim to provide with this language is presented by way of example, from simple scatter plots to versions of two quite famous visualizations: Minard's depiction of troop strength during Napoleon's march on Moscow and a map of the early ARPAnet from the ancient history of the Internet. Benefits of our approach include flexibility and expressiveness for specifying a range of visualizations that cannot be rendered with standard commercial systems.

 
Title:  
ITKBOARD: A VISUAL DATA FLOW LANGUAGE FOR BIOMEDICAL IMAGE PROCESSING
Author(s):  
Hoang D. K. Le, Rongxin Li, Sebastien Ourselin and John M. Potter
Abstract:  
Experimenters in biomedical image processing rely on software libraries to provide a large number of standard filtering and image handling algorithms. The Insight Toolkit (ITK) is an open-source library that provides a complete framework for a range of image processing tasks, and is specifically aimed at segmentation and registration tasks for both two and three dimensional images. This paper describes a visual dataflow language, ITKBoard, designed to simplify building, and more significantly, experimenting with ITK applications. The ease with which image processing experiments can be interactively modified and controlled is an important aspect of the design. The experimenter can focus on the image processing task at hand, rather than worry about the underlying software. ITKBoard incorporates composite and parameterised components, and control constructs, and relies on a novel hybrid dataflow model, combining aspects of both demand and data-driven execution.
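As a rough illustration of such a hybrid model (with invented names, not ITKBoard's API), a dataflow node can compute lazily on demand while change notifications propagate eagerly:

```java
// Hybrid dataflow sketch: results are computed on demand (pull) and cached,
// while upstream changes eagerly propagate "dirty" flags downstream (push),
// so only affected filters are recomputed on the next demand.
import java.util.ArrayList;
import java.util.List;

abstract class FilterNode {
    private Object cached;
    private boolean dirty = true;
    private final List<FilterNode> downstream = new ArrayList<>();

    void connectTo(FilterNode consumer) { downstream.add(consumer); }

    Object demand() {                       // demand-driven part
        if (dirty) { cached = compute(); dirty = false; }
        return cached;
    }

    void markDirty() {                      // data-driven part
        if (!dirty) {
            dirty = true;
            for (FilterNode n : downstream) n.markDirty();
        }
    }

    protected abstract Object compute();    // the actual image-processing step
}

class HybridDemo {
    public static void main(String[] args) {
        FilterNode src = new FilterNode() {           // a trivial source node
            int value = 1;
            protected Object compute() { return value; }
        };
        System.out.println(src.demand());  // computed on first demand: 1
        src.markDirty();                   // e.g. after a parameter change
        System.out.println(src.demand());  // recomputed on next demand
    }
}
```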

 
Title:  
THE DEBUGGABLE INTERPRETER DESIGN PATTERN
Author(s):  
Jan Vrany and Alexandre Bergel
Abstract:  

The Interpreter and Visitor design patterns are widely adopted to implement programming language interpreters. Their popularity stems from their expressive and simple design. However, no general approach to conceiving a debugger is commonly adopted. This paper presents the debuggable interpreter design pattern, a general approach to extending a language interpreter with debugging facilities such as step-over and step-into. Moreover, it enables multiple debuggers to coexist. It extends the Interpreter and Visitor design patterns with a few hooks and a debugging service. SmallJS, an interpreter for a JavaScript-like language, serves as illustration.
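As a rough sketch of the pattern (with invented names; the paper illustrates it on SmallJS), a Visitor-based interpreter can call back into a pluggable debugging service before evaluating each node:

```java
// Minimal sketch of the debuggable-interpreter idea: an AST visitor whose
// visit methods invoke a pluggable debugger hook before evaluating each node.
// All names here are illustrative, not the paper's API.
interface Node { <T> T accept(Visitor<T> v); }

record Literal(int value) implements Node {
    public <T> T accept(Visitor<T> v) { return v.visitLiteral(this); }
}
record Add(Node left, Node right) implements Node {
    public <T> T accept(Visitor<T> v) { return v.visitAdd(this); }
}

interface Visitor<T> { T visitLiteral(Literal n); T visitAdd(Add n); }

/** Debugging service: the hook the interpreter calls on every node. */
interface Debugger { void onNode(Node n); }

class Interpreter implements Visitor<Integer> {
    private final Debugger debugger;             // e.g. a step-into/step-over controller
    Interpreter(Debugger d) { this.debugger = d; }

    public Integer visitLiteral(Literal n) { debugger.onNode(n); return n.value(); }
    public Integer visitAdd(Add n) {
        debugger.onNode(n);                      // hook before evaluating children
        return n.left().accept(this) + n.right().accept(this);
    }
}

public class InterpreterDemo {
    public static void main(String[] args) {
        Node program = new Add(new Literal(1), new Literal(2));
        // A trivial "debugger" that just traces; a real one would block on breakpoints.
        int result = program.accept(new Interpreter(n -> System.out.println("at " + n)));
        System.out.println(result); // 3
    }
}
```

Because the hook is a separate service, several debuggers can be attached to the same interpreter without touching the visit methods themselves.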


 
Title:  
A PATTERN FOR STATIC REFLECTION ON FIELDS Sharing Internal Representations in Indexed Family Containers
Author(s):  
Andreas P. Priesnitz and Sibylle Schupp
Abstract:  

Reflection allows defining generic operations in terms of the constituents of objects. These definitions incur overhead if reflection takes place at run time, which is the common case in popular languages. If performance matters, some compile-time means of reflection is desired to obviate that penalty. Furthermore, the information provided by static reflection can be utilised for class generation, e.g., to optimize internal representation. We demonstrate how to provide static reflection on class field properties by means of generic components in an OO language with static meta-programming facilities. Surprisingly, a major part of the solution is not specific to the particular task of providing reflection. We define the internal representation of classes by a reworked implementation of a generic container that models the concept of a statically indexed family. The proposed features of this implementation are also beneficial to its use as a common container.


 
Title:  
A SPACE-EFFICIENT ALGORITHM FOR PAGING UNBALANCED BINARY TREES
Author(s):  
Rui A. E. Tavares and Elias P. Duarte Jr
Abstract:  

This work presents a new approach for paging the large unbalanced binary trees that frequently appear in computational biology. The proposed algorithm aims at reducing the number of pages accessed during a search, decreasing the amount of unused space in each page, and reducing the total number of pages required to store a tree. The algorithm builds an optimal paging whenever possible and employs an efficient strategy based on bin packing for allocating trees that are not complete. The complexity of the algorithm is presented. Experimental results are reported and compared with other approaches, including balanced trees. The comparison shows that the proposed approach is the only one whose average number of page accesses per search is close to optimal while its page-filling percentage is also close to optimal.
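As a hint of the bin-packing flavour of the allocation step, here is a minimal first-fit-decreasing sketch over subtree sizes in Java; it illustrates the general strategy only, not the paper's algorithm.

```java
// First-fit-decreasing bin packing over subtree sizes: a sketch of the kind
// of page-allocation strategy the abstract describes (names are illustrative).
import java.util.*;

class PagePacker {
    static List<List<Integer>> pack(List<Integer> subtreeSizes, int pageCapacity) {
        List<List<Integer>> pages = new ArrayList<>();
        List<Integer> free = new ArrayList<>();            // remaining capacity per page
        List<Integer> sorted = new ArrayList<>(subtreeSizes);
        sorted.sort(Comparator.reverseOrder());            // largest subtrees first
        for (int size : sorted) {
            int i = 0;
            while (i < pages.size() && free.get(i) < size) i++;  // first page that fits
            if (i == pages.size()) { pages.add(new ArrayList<>()); free.add(pageCapacity); }
            pages.get(i).add(size);
            free.set(i, free.get(i) - size);
        }
        return pages;
    }

    public static void main(String[] args) {
        System.out.println(pack(List.of(7, 3, 5, 2, 2, 1), 8)); // [[7, 1], [5, 3], [2, 2]]
    }
}
```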


 
Title:  
ASPECT ORIENTATION VS. OBJECT ORIENTATION IN SOFTWARE PROGRAMMING An Exploratory Case-study
Author(s):  
Anna Lomartire, Gianfranco Pesce and Giovanni Cantone
Abstract:  

Aspect orientation is a software paradigm claimed to be more effective and efficient than object orientation for software development and maintenance interventions that cut transversally across the application structure, namely aspects. In order to provide evidence confirming or disconfirming that claim in our context (the software processes we enact and the products we develop at our University Data Center), and before launching a controlled experiment, which would require a large investment of effort, we conducted a preliminary exploratory investigation arranged as a case study. We started from a Web-based object-oriented application that engineering students in Informatics had constructed under our supervision. We specified new user needs whose realization was expected to impact many of the application’s classes and relationships. We then assigned a student to realize those extensive requirements using both aspect orientation and object orientation. Results show that, on average, both the completion time and the size of the additional code significantly favour aspect orientation for maintenance interventions that are transversal to the application’s structure, with respect to the characteristics of the experimental object used, the specified enhancement-maintenance requirements, and the subject performing in the role of programmer. Despite the exploratory nature of the study, the limited generality of the application used, and the fact that just one programmer served as experimental subject, the results push us to verify the findings by conducting further investigations involving a wider set of programmers and applications with different characteristics.


 
Title:  
TEST COVERAGE ANALYSIS FOR OBJECT ORIENTED PROGRAMS Structural Testing through Aspect Oriented Instrumentation
Author(s):  
Fabrizio Baldini, Giacomo Bucci, Leonardo Grassi and Enrico Vicario
Abstract:  

The introduction of Object Oriented technologies in test-centered processes has emphasized the importance of finding new methods for software verification. Testing metrics and practices developed for structured programs have to be adapted in order to address the prerogatives of object oriented programming. In this work, we introduce a new approach to structural coverage evaluation in the testing of OO software. The data-flow paradigm is adopted and reinterpreted through the definition of a new type of structure, used to record def-use information for test-critical class member variables. In the final part of this paper, we present a testing tool that employs this structure for code-based coverage analysis of Java and C++ programs.


 
Title:  
ON DIGITAL SEARCH TREES A Simple Method for Constructing Balanced Binary Trees
Author(s):  
Franjo Plavec, Zvonko G. Vranesic and Stephen D. Brown
Abstract:  

This paper presents digital search trees, a binary tree data structure that produces well-balanced trees in the majority of cases. Digital search tree algorithms are reviewed, and a novel algorithm for building sorted trees is introduced. Digital search trees are simple to implement because their code is similar to that of ordinary binary search trees. An experimental evaluation was performed and the results are presented: in addition to being conceptually simpler, digital search trees often outperform other popular balanced trees such as AVL or red-black trees. Their good performance is due to better exploitation of cache locality in modern computers.
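For readers unfamiliar with the structure, this minimal Java sketch shows the core idea, branching on successive key bits instead of full-key comparisons, so no rebalancing rotations are needed (our illustration, not the authors' code):

```java
// Digital search tree sketch: at depth d, branch on bit d of the key; the key
// itself is stored at the first empty node encountered. Because branching
// follows key bits, the tree tends to stay balanced without rotations.
class DigitalSearchTree {
    private static final class Node {
        final int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }
    private Node root;

    void insert(int key) {
        if (root == null) { root = new Node(key); return; }
        Node cur = root;
        for (int bit = 0; ; bit++) {                  // examine one bit per level
            if (cur.key == key) return;               // already present
            boolean goRight = ((key >>> bit) & 1) == 1;
            Node next = goRight ? cur.right : cur.left;
            if (next == null) {
                if (goRight) cur.right = new Node(key); else cur.left = new Node(key);
                return;
            }
            cur = next;
        }
    }

    boolean contains(int key) {
        Node cur = root;
        for (int bit = 0; cur != null; bit++) {
            if (cur.key == key) return true;
            cur = (((key >>> bit) & 1) == 1) ? cur.right : cur.left;
        }
        return false;
    }

    public static void main(String[] args) {
        DigitalSearchTree t = new DigitalSearchTree();
        for (int k : new int[]{5, 1, 9, 12, 3}) t.insert(k);
        System.out.println(t.contains(9) + " " + t.contains(7));  // true false
    }
}
```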


 
Area 2 - Software Engineering  
Title:  
ROLE-BASED CLUSTERING OF SOFTWARE MODULES An Industrial Size Experiment
Author(s):  
Philippe Dugerdil and Sebastien Jossi
Abstract:  

Legacy software system reverse engineering has been a hot topic for more than a decade. One of the key problems is to recover the architecture of the system, i.e. its components and the communications between them. Generally, the code alone does not provide many clues about the structure of the system. To recover this architecture, we proposed to use the artefacts and activities of the Unified Process to guide the search. In our approach we first recover the high-level specification of the program. Then we instrument the code and “run” the use cases. Next we analyse the execution trace and rebuild the run-time architecture of the program. This is done by clustering the modules based on the supported use cases and their roles in the software. In this paper we present an industrial validation of this reverse-engineering process. First we give a summary of our methodology. Then we show a step-by-step application of the technique to real-world business software and the results we obtained. Finally we present the workflow of the tools we used and implemented to perform this experiment. We conclude with the future directions of this research.


 
Title:  
DETECTING PATTERNS IN OBJECT-ORIENTED SOURCE CODE – A CASE STUDY
Author(s):  
Andreas Wierda, Eric Dortmans and Lou Somers
Abstract:  

Pattern detection methods discover recurring solutions in a system’s implementation, for example design patterns in object-oriented source code. Usually this is done with a pattern library. This has the disadvantage that the precise implementation of the patterns must be known in advance. The method used in our case study does not have this disadvantage. It uses a mathematical technique called Formal Concept Analysis and is applied to find structural patterns in two subsystems of a printer controller. The case study shows that it is possible to detect frequently used structural design constructs without upfront knowledge. However, even the detection of relatively simple patterns in relatively small pieces of software takes a lot of computing time. Since this is due to the complexity of the applied algorithms, applying the method to large software systems like the complete controller is not practical. They can be applied to its subsystems though, which are about five to ten percent of its size.


 
Title:  
SPECIFICATION AND PROOF OF LIVENESS PROPERTIES IN B EVENT SYSTEMS
Author(s):  
Olfa Mosbahi and Jacques Jaray
Abstract:  

In this paper, we give a framework for defining an extension to the event B method. The event B method allows us to state only invariance properties, but in some applications, such as automated or distributed systems, fairness and eventuality properties must also be considered. We first extend the expressiveness of the event B method to deal with the specification of these properties. Then, we give a semantics of this extended syntax over traces, in the same spirit as the temporal logic of actions (TLA). Finally, we give verification rules for these properties. We call the B model extended with liveness properties a temporal B model. We illustrate our method on a case study related to an automated system.
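As an illustrative example of the property class targeted (ours, not the paper's case study), an eventuality property can be written in TLA-style temporal notation:

```latex
% "Every request is eventually followed by a response", under weak fairness
% on the Next action. This is exactly the kind of liveness property that lies
% outside plain event B's invariance-only scope (illustrative, not from the paper).
\Box\,(\mathit{request} \Rightarrow \Diamond\,\mathit{response})
\quad \text{assuming} \quad \mathrm{WF}_{\mathit{vars}}(\mathit{Next})
```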


 
Title:  
AUTO-COLLEAGUE A Collaborative Learning Environment for UML
Author(s):  
Maria Virvou and Kalliopi Tourtoglou
Abstract:  

In this paper we present AUTO-COLLEAGUE, a collaborative learning environment for UML. AUTO-COLLEAGUE is a Computer-Supported Collaborative Learning (CSCL) system. It is based on a multi-dimensional User-Modeller component that describes user characteristics related to the UML domain knowledge, the performance types, the personality and the needs of the learner. The system constantly monitors, records and reasons about each learner’s actions. As a result of this process, AUTO-COLLEAGUE provides adaptive advice and help to users so that they may use UML more efficiently and collaborate with other members of their team more constructively.


 
Title:  
USING MBIUI LIFE-CYCLE FRAMEWORK FOR AN AFFECTIVE BI-MODAL USER INTERFACE
Author(s):  
Katerina Kabassi, Maria Virvou and Efthymios Alepis
Abstract:  

Decision making theories seem very promising for improving human-computer interaction. However, the actual process of incorporating multi-criteria analysis into an intelligent user interface involves several development steps that are not trivial. Therefore, we have employed and tested the effectiveness of a unifying life-cycle framework that may be used for the application of many different multi-criteria decision making theories. The life-cycle framework is called MBIUI and in this paper we show how we have used it for employing a multi-criteria decision making theory, called Simple Additive Weighting, in an affective bi-modal educational system. More specifically, we describe the experimental studies for designing, implementing and testing the decision making theory. The decision making theory has been adapted in the user interface for combining evidence from two different modes and providing affective interaction.


 
Title:  
AN ONTOLOGICAL SW ARCHITECTURE FOR THE DEVELOPMENT OF COOPERATIVE WEB PORTALS
Author(s):  
Giacomo Bucci, Valeriano Sandrucci, Enrico Vicario and Saverio Mecca
Abstract:  

Ontological technologies comprise a rich framework of languages and off-the-shelf components, which provide a paradigm for the organization of SW architectures with a high degree of interoperability, maintainability and adaptability. In particular, this fits the needs of semantic web portal development, where pages are organized as a generic graph and navigation is driven by the inherent semantics of contents. We report on a pattern-oriented executable SW architecture for the construction of portals enabling semantic access, querying, and contribution of conceptual models and concrete elements of information. By relying on the automated configuration of an Object Oriented domain layer, the architecture reduces the creation of a cooperative portal to the definition of an ontological domain model. Application of the proposed architecture is illustrated with reference to the specific case of a portal which aims at enabling cooperation among subjects from different localities and different domains of expertise in the development of a shared knowledge base in the domain of construction practices based on mudbrick.


 
Title:  
HOW “DEVELOPER STORIES” IMPROVES ARCHITECTURE Facilitating Knowledge Sharing and Embodiment, and Making Architectural Changes Visible
Author(s):  
Rolf Njor Jensen, Niels Platz and Gitte Tjørnehøj
Abstract:  

Within the field of Software Engineering, the emergence of agile methods has been a hot topic since the late 90s. eXtreme Programming (XP) (Beck, 1999) was one of the first agile methods and is one of the most well known. However, research has pointed to weaknesses in XP regarding support for the development of viable architectures. To strengthen XP in this regard, a new practice, Developer Stories (Jensen et al., 2006), was introduced last year, based mainly on theoretical argumentation.
This paper reports on extensive experimentation with, and elaboration of, the new practice. Results from this experimentation show that using Developer Stories increases the likelihood of developing a viable architecture through a series of deliberate choices, by creating disciplined and recurring activities that:
1) facilitate the sharing and embodying of knowledge about architectural issues, and
2) heighten the visibility of refactorings for both customers and developers.


 
Title:  
AN AGILE MODEL DRIVEN ARCHITECTURE-BASED CONTRIBUTION TO WEB ENGINEERING
Author(s):  
Alejandro Gómez Cuesta, Juan Carlos Granja, Rory O’Connor
Abstract:  

The number and complexity of web applications are ever increasing. Web engineers need advanced development methods to build better systems and to maintain them in an easy way. Model-Driven Architecture (MDA) is an important trend in the software engineering field, based on models and their transformations to automatically generate code. This paper describes a methodology for web application development, providing a process based on MDA which offers an effective engineering approach to reduce effort. It consists of defining models from metamodels at platform-independent and platform-specific levels, from which source code is automatically generated.


 
Title:  
AN INTEGRATED TOOL FOR SUPPORTING ONTOLOGY DRIVEN REQUIREMENTS ELICITATION
Author(s):  
Motohiro Kitamura, Ryo Hasegawa, Haruhiko Kaiya and Motoshi Saeki
Abstract:  
Since requirements analysts do not always have sufficient knowledge of a problem domain (domain knowledge, for short), techniques for making up for a lack of domain knowledge are a key issue. This paper proposes the use of a domain ontology as domain knowledge during requirements elicitation processes, together with a technique for creating a domain ontology for a given problem domain using text-mining techniques.

 
Title:  
VCODEX: A DATA COMPRESSION PLATFORM
Author(s):  
Kiem-Phong Vo
Abstract:  

Vcodex is a software platform focused primarily on data compression but also providing other encoding techniques addressing portability, privacy and robustness. Components called data transforms implement transformation techniques ranging from general-purpose ones, such as Huffman encoding, to structure-driven ones, such as table reordering by column dependency.
Transforms can be composed to build custom compressors tailored to data semantics to achieve optimum compression performance. An overview of the software and data architecture of Vcodex will be given along with examples of how data transforms are used in practice. Experimental data will be presented to show the effectiveness of the approach.
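To make the composition idea concrete, here is a minimal sketch of composable byte-stream transforms in Java; Vcodex itself is a C library, and all names below are illustrative assumptions, not its API.

```java
// Composable data transforms: each stage maps bytes to bytes, and stages
// chain into a custom compressor pipeline (e.g. reorder -> model -> entropy-code).
import java.util.List;
import java.util.function.UnaryOperator;

interface Transform extends UnaryOperator<byte[]> {
    static Transform pipeline(List<Transform> stages) {
        return data -> {
            byte[] out = data;
            for (Transform t : stages) out = t.apply(out);   // apply stages in order
            return out;
        };
    }
}

class PipelineDemo {
    public static void main(String[] args) {
        Transform identity = b -> b;                          // stand-in stages
        Transform reverse = b -> {
            byte[] r = new byte[b.length];
            for (int i = 0; i < b.length; i++) r[i] = b[b.length - 1 - i];
            return r;
        };
        Transform custom = Transform.pipeline(List.of(reverse, identity));
        System.out.println(new String(custom.apply("abc".getBytes()))); // cba
    }
}
```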


 
Title:  
DIFFERENCING AND MERGING OF SOFTWARE DIAGRAMS State of the Art and Challenges
Author(s):  
Sabrina Fortsch and Bernhard Westfechtel
Abstract:  
For a long time, fine-grained version control for software documents has been severely neglected. Typically, software configuration management systems support the management of text or binary files. Unfortunately, text-based tools for fine-grained version control are not adequate for software documents produced in earlier phases of the software life cycle. Frequently, these documents have a graphical syntax; therefore, we will call them software diagrams. This paper discusses the current state of the art in fine-grained version control (differencing and merging) for software diagrams, with an emphasis on UML diagrams.

 
Title:  
MODERN CONCEPTS FOR HIGH-PERFORMANCE SCIENTIFIC COMPUTING Library Centric Application Design
Author(s):  
Rene Heinzl, Philipp Schwaha and Siegfried Selberherr
Abstract:  

During the last decades, various high-performance libraries were developed in fairly low-level languages, like FORTRAN, carefully specializing code to achieve the best performance. However, the objective of achieving reusable components has regularly eluded the software community ever since. The fundamental goal of our approach is to create a high-performance mathematical framework with reusable domain-specific abstractions which are close to the mathematical notation used to describe many problems in scientific computing. Interoperability driven by strong theoretical derivations of mathematical concepts is another important goal of our approach.


 
Title:  
REFORMULATING COMPONENT IDENTIFICATION AS DOCUMENT ANALYSIS PROBLEM Towards Automated Component Procurement
Author(s):  
Hans-Gerhard Gross, Marco Lormans and Jun Zhou
Abstract:  

One of the first steps of component procurement is the identification of required component features in large repositories of existing components. On the highest level of abstraction, component requirements as well as component descriptions are usually written in natural language. Therefore, we can reformulate component identification as a text analysis problem and apply latent semantic analysis for automatically identifying suitable existing components in large repositories, based on the descriptions of required component features. In this article, we motivate our choice of this technique for feature identification, describe how it can be applied to feature tracing problems, and discuss the results that we achieved with the application of this technique in a number of case studies.


 
Title:  
LINKING SOFTWARE QUALITY TO SOFTWARE ENGINEERING ACTIVITIES, RESULTS FROM A CASE-STUDY
Author(s):  
Jos J.M. Trienekens, Rob J. Kusters and Dennis C. Brussel
Abstract:  

Specification of software quality characteristics, such as reliability and usability, is an important aspect of software development. Of equal importance, however, is the implementation of quality during the design and construction of the software. This paper links software quality specification to software quality implementation using a multi-criteria decision analysis technique. The approach is validated in a case study at the Royal Navy in the Netherlands.


 
Title:  
ON GENERATING TILE SYSTEM FOR A SOFTWARE ARCHITECTURE CASE OF A COLLABORATIVE APPLICATION SESSION
Author(s):  
C. Bouanaka, A. Choutri and F. Belala
Abstract:  

Tile logic, an extension of rewriting logic in which synchronization, coordination and interaction can be naturally expressed, is shown to be an appropriate formal semantic framework for software architecture specification. Based on this logic, we define a notion of dynamic connection between software components. Individual components are then viewed as entirely independent elements, free from any static interconnection constraints. We also fill out the usual component description, expressed in terms of provided/required services, with the functional specification of such services. Starting from state/transition UML diagrams representing the requirements of the underlying distributed system, our objective is to offer a common semantic framework for the architectural description as well as the behavioural specification of that system. Direct consequences of the proposed approach are that dynamic reconfiguration and component mobility become straightforward. A simple but comprehensive case study, a collaborative application session, is used to illustrate all stages of the proposed approach.


 
Title:  
ADDRESSING SECURITY REQUIREMENTS THROUGH MULTI-FORMALISM MODELLING AND MODEL TRANSFORMATION
Author(s):  
Miriam Zia, Ernesto Posse and Hans Vangheluwe
Abstract:  

Model-based approaches are increasingly used in all stages of complex systems design. In this paper, we use multi-formalism modelling and model transformation to address security requirements. Our methodology supports the verification of these properties on CSP (Communicating Sequential Processes) models using the model checker FDR2. This low-level constraint checking is performed through model refinements, from a higher-level behavioural description of a system in the Statecharts formalism. The contribution of this paper lies in the transformation of Statechart models into CSP models. These, combined with non-deterministic models of a system's environment (including, for example, possible attacks), are used for model checking. To bridge the gap between these two levels, we introduce kiltera, an intermediate language that defines the system in terms of interacting processes and allows for simulation as well as automatic translation into CSP models. An e-Health application is used to demonstrate our approach.


 
Title:  
EVOLUTION STYLES IN PRACTICE Refactoring Revisited as Evolution Style
Author(s):  
Olivier Le Goaer, Mourad Oussalah, Dalila Tamzalit and Djamel Serai
Abstract:  

The evolution of pure software systems remains a time-consuming and error-prone activity. Whatever the domain considered, however, recurring practices can be captured and reused to alleviate the subsequent effort. In this paper we propose to cast domain-specific problem-solution pairs in the form of technology-neutral, formal units called evolution styles. As such, an evolution style is endowed with an instantiation mechanism and can be considered at different conceptual levels: an evolution style is intended to evolve a family of applications in the same domain, whereas an instance of it evolves a given application. In addition, the evolution style's format is a triple in which each component is highly reusable. In this way, styles are scalable knowledge fragments able to support large and complex evolutions, readily available in the form of evolution catalogs.


 
Title:  
INTEGRATING SOFTWARE ARCHITECTURE CONCEPTS INTO THE MDA PLATFORM
Author(s):  
Alti Adel, Khammaci Tahar, Smeda Adel and Bennouar Djamal
Abstract:  

Architecture Description Languages (ADLs) provide an abstract representation of software systems. Achieving a concrete mapping of such a representation into an implementation is one of the principal aspects of MDA (Model Driven Architecture). Integrating ADLs within MDA gives the MDA platform a higher level of abstraction and a degree of reuse of ADLs. However, ADLs and MDA platforms have significantly different metamodels, which makes the definition of mapping rules complex. This complexity is clearly noticeable when some software architecture concepts cannot be easily mapped to the MDA platform. In this paper, we propose to integrate software architecture within MDA. We also define a strategy for direct transformation using a UML profile: it represents both the software architecture model (PIM) and the MDA platform model (PSM) in the UML metamodel and then elaborates transformation rules between the resulting UML metamodels. The goal is to automate the process of deriving the implementation platform from software architecture concepts.


 
Title:  
AUTOMATIC TEST MANAGEMENT OF SAFETY-CRITICAL SYSTEMS: THE COMMON CORE Behavioural Emulation of Hard-soft Components
Author(s):  
Antonio Grillo, Giovanni Cantone, Christian Di Biagio and Guido Pennella
Abstract:  

In order to solve the problems caused by a human-managed test process, the reference company for this paper (the Italian branch of a multinational organization working in the domain of large safety-critical systems) evaluated the opportunity, as offered by the major technologies on the market, of using automatic test management. That technology proved not sufficiently featured for the company’s quality- and productivity-improvement goals, and we were charged with investigating in depth and eventually satisfying the company’s test-management automation needs. Once we had transformed those goals into technical requirements and determined that they could conveniently be realized in a software system, we proceeded to analyze, construct, and eventually evaluate in the field the “Automatic Test Management” system, ATM. This paper is concerned with the ATM subsystem’s Common Core, CC, which allows the behavioural emulation of hardware-software components, as part of a distributed real-components scenario placed under one or more standard Unix operating systems, once those behaviours are described using the Unified Modeling Language. The paper reports on the ATM-CC’s distinctive characteristics and gives an overview of its architecture. Results from a case study show that the time required to enact a given suite of tests with the ATM-CC is roughly the same as with manual management for the first test run, but around ten times less for the following test runs.


 
Title:  
INCLUDING IMPROVEMENT OF THE EXECUTION TIME IN A SOFTWARE ARCHITECTURE OF LIBRARIES WITH SELF-OPTIMISATION
Author(s):  
Luis-Pedro Garcia, Javier Cuenca, Domingo Gimenez
Abstract:  
The design of hierarchies of libraries helps to obtain modular and efficient sets of routines for solving problems in specific fields. An example is ScaLAPACK's hierarchy in the field of parallel linear algebra. To facilitate the efficient execution of these routines, the inclusion of self-optimization techniques in the hierarchy has been analysed. The routines at one level of the hierarchy use information generated by routines from lower levels. Sometimes, however, the information generated at one level is not accurate enough to be used satisfactorily at higher levels, and a remodelling of the routines is necessary. Such a remodelling phase is proposed and analysed for Strassen matrix multiplication.

 
Title:  
A STABILITY AND EFFICIENCY ORIENTED RESCHEDULING APPROACH FOR SOFTWARE PROJECT MANAGEMENT
Author(s):  
Yujia Gez and Lijun Bai
Abstract:  

Rescheduling has gained attention in recent years from researchers who study scheduling problems under uncertainty, but it has not been widely explored in software engineering settings. In this paper we propose a GA-based rescheduling approach that applies a multi-objective fitness function, considering both efficiency and stability, to produce new schedules after managers take control actions to catch up with their initial schedules. We also conducted case studies on simulation data. The results show the effectiveness of the rescheduling method in supporting decision making in a dynamic environment.
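As a rough illustration of combining efficiency and stability in one objective, consider the following sketch; the concrete measures (makespan, start-time deviation) and the weighting scheme are our assumptions, not the paper's exact formulation.

```java
// Sketch of a multi-objective rescheduling fitness: weight schedule efficiency
// (makespan) against stability (deviation of task start times from the initial
// schedule). All names and the linear weighting are illustrative assumptions.
class ReschedulingFitness {
    static double fitness(int[] startTimes, int[] durations, int[] initial, double alpha) {
        int makespan = 0;
        double deviation = 0;
        for (int i = 0; i < startTimes.length; i++) {
            makespan = Math.max(makespan, startTimes[i] + durations[i]);
            deviation += Math.abs(startTimes[i] - initial[i]);  // stability penalty
        }
        return alpha * makespan + (1 - alpha) * deviation;      // lower is better
    }

    public static void main(String[] args) {
        int[] initial   = {0, 3, 6};
        int[] candidate = {0, 4, 6};   // task 1 delayed by a control action
        int[] durations = {3, 3, 2};
        System.out.println(fitness(candidate, durations, initial, 0.7)); // 5.9
    }
}
```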


 
Title:  
A STATISTICAL NEURAL NETWORK FRAMEWORK FOR RISK MANAGEMENT PROCESS From the Proposal to its Preliminary Validation for Efficiency
Author(s):  
Salvatore Alessandro Sarcià, Giovanni Cantone and Victor R. Basili
Abstract:  
This paper enhances the currently available formal risk management models and related frameworks by providing an independent mechanism for checking their results: a way to compare the historical data on the risks identified by similar projects to the risks found by each framework. Based on direct queries to stakeholders, existing approaches provide a mechanism for estimating the probability of achieving software project objectives before the project starts (prior probability). However, they do not estimate the probability that objectives have actually been achieved when risk events have occurred during project development. This involves calculating the posterior probability that a project missed its objectives or, on the contrary, the probability that the project has succeeded. This paper provides existing frameworks with a way to calculate both prior and posterior probabilities. The overall risk evaluation, calculated from those two probabilities, can be compared to the evaluations that each framework has produced within its own process. The comparison is therefore performed between what those frameworks assumed and what the historical data suggested, both before and during the project. This is a control mechanism: if those comparisons do not agree, further investigation can be carried out. A case study is presented that provides an efficient way to deal with those issues by using Artificial Neural Networks (ANNs) as a statistical tool (e.g., as a regression and probability estimator). That is, we show that ANNs can automatically derive both prior and posterior probability estimates from historical data. The paper shows the verification of the proposed approach by simulation.

 
Title:  
A CASE STUDY ON THE APPLICABILITY OF SOFTWARE RELIABILITY MODELS TO A TELECOMMUNICATION SOFTWARE
Author(s):  
Hassan Artail, Fuad Mrad and Mohamad Mortada
Abstract:  

Faults can be inserted into the software during development or maintenance, and some of these faults may persist even after integration testing. Our concern is about quality assurance that evaluates the reliability and availability of the software system through analysis of failure data. These efforts involve estimation and prediction of next time to failure, mean time between failures, and other reliability-related parameters. The aim of this paper is to empirically apply a variety of software reliability growth models (SRGM) found in the CASRE (Computer Aided Software Reliability Estimation) tool onto real field failure data taken after the deployment of a popular billing software used in the telecom industry. The obtained results are assessed and conclusions are made concerning the applicability of the different models to modeling faults encountered in such environments after the software has been deployed.


 
Title:  
INTEGRATING A DISTRIBUTED INSPECTION TOOL WITHIN AN ARTEFACT MANAGEMENT SYSTEM
Author(s):  
Andrea De Lucia, Fausto Fasano, Genoveffa Tortora and Giuseppe Scanniello
Abstract:  

We propose a web-based inspection tool addressing the problem of software inspection within a distributed development environment. The tool implements an inspection method that tries to minimise synchronous collaboration among team members, using an asynchronous discussion to resolve conflicts before the traditional synchronous meeting. The tool also provides automatic merging and conflict-highlighting functionality to support the reviewers during the pre-meeting refinement phase. Information about the inspection progress, which can be valuable support for making inspection-process-related decisions, is also provided. The inspection tool has been integrated within an artefact management system, thus allowing the planning, scheduling, and enactment of inspections within the development process and integrating the review phase within the overall artefact lifecycle.


 
Title:  
COMPONENT BASED METHODOLOGY FOR QOS-AWARE NETWORK DESIGN
Author(s):  
Cedric Teyssié, David Espès and Zoubir Mammeri
Abstract:  

New services (such as VoIP) and their quality requirements have dramatically increased the complexity of the underlying networks. Quality of Service support is a challenge for next generation networks. Design methods and modeling languages can help reduce the complexity of the integration of QoS. UML is successfully used in several domains. In this paper, we propose a QoS component oriented methodology based on UML. This methodology reduces network-design complexity by separating design considerations into functional and non-functional parts. It also provides a design cycle and proposes abstraction means where QoS is integrated. As UML is not adapted for modeling non-functional elements, we combine UML strengths and a QoS specification language (QSL).


 
Title:  
ASSL SPECIFICATION OF RELIABILITY SELF-ASSESSMENT IN THE AS-TRM
Author(s):  
Emil Vassev, Olga Ormandjieva and Joey Paquet
Abstract:  

This article is an introduction to our research towards a formal framework for tackling reliability in reactive autonomic systems with self-monitoring functionality. The Autonomic System Specification Language (ASSL) is a framework for formally specifying and generating autonomic systems. With ASSL, we can specify high-level behavior policies, which makes it a very appropriate language for specifying reliability models as part of overall system behavior. In this paper, we show how ASSL can be used to specify reliability self-assessment in the Autonomic System Timed Reactive Model (AS-TRM). The reliability self-assessment is performed at two levels: autonomic element (local) and system (global). It depends on the configuration of the system and is concerned with the uncertainty analysis of the AS-TRM as it evolves. An appropriate architecture for supporting reliability self-assessment, along with a communication mechanism to implement the reactive and autonomic behavior, is specified with ASSL.


 
Title:  
A FORMAL APPROACH TO DEPLOY HETEROGENEOUS SOFTWARE COMPONENTS IN A PLC
Author(s):  
Mohamed Khalgui and Emanuele Carpanzano
Abstract:  

This paper deals with an industrial control application built from different component-based technologies. This application, considered as a network of heterogeneous components, has to be deployed in a multi-tasking PLC and classically has to respect temporal constraints according to its specification. To deploy the components in feasible OS tasks of the controller, we propose a formal component model allowing their homogeneous design. In particular, we enrich this model to unify well-known technologies. The application is then considered as a network of homogeneous components. We propose to transform this network into a system of real-time tasks with precedence constraints in order to exploit previous results on real-time deployment.


 
Title:  
A COMPARISON OF STRUCTURED ANALYSIS AND OBJECT ORIENTED ANALYSIS An Experimental Study
Author(s):  
Davide Falessi, Giovanni Cantone and Claudio Grande
Abstract:  

Although the object-oriented paradigm is now widely adopted for software analysis, design, and implementation, a large number of companies still use the structured approach for software analysis and design. In fact, the current worldwide preference for object orientation is not supported by enough empirical evidence on its advantages and disadvantages versus other paradigms in the different phases of the software development process. In this work we describe an empirical study comparing the time required to analyze a data management system using object orientation and a structured technique. We chose the approach indicated by the Rational Unified Process and the Structured Analysis and Design Technique as instances of object-oriented and structured analysis techniques, respectively. The empirical study comprises both an uncontrolled and a controlled experiment with Master’s students, and aims to analyze the effects of those techniques on software analysis, both for software development from scratch and for enhancement maintenance. Results show no significant difference in the time required to develop or maintain a software application with the two techniques, whatever the order of their application. However, we found two major tendencies regarding object orientation: 1) it is more sensitive to subjects’ peculiarities, and 2) it provides some reusability advantages already at the analysis level. Since this result concerns a one-hour enhancement-maintenance task, we expect significant benefits from object orientation in the case of real-size extensions.


 
Title:  
SECURE REFACTORING Improving the Security Level of Existing Code
Author(s):  
Katsuhisa Maruyama
Abstract:  

Software security is becoming an increasingly serious issue; nevertheless, a large number of software programs remain defenseless against malicious attacks. This paper proposes a new class of refactoring, called secure refactoring. Such refactoring is not intended to improve the maintainability of existing code; instead, it helps programmers increase the protection level of sensitive information stored in the code without changing its observable behavior. In this paper, four secure refactorings are presented, and their respective mechanics, based on static analysis of Java source code, are explained. All transformations of the proposed refactorings can be automated in our refactoring browser, which also supports the application of traditional refactorings.
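As a flavour of what such a behaviour-preserving, security-improving transformation might look like, here is an illustrative Java example; it is our own and is not necessarily one of the paper's four refactorings.

```java
// Illustrative secure-refactoring style change: sensitive data kept in an
// immutable String lingers on the heap until garbage collection; refactoring
// it to a char[] lets the owner explicitly wipe it after use, while the
// class's observable behavior (matching a password) stays the same.
final class Credentials {
    private final char[] password;
    Credentials(char[] password) { this.password = password.clone(); }
    boolean matches(char[] attempt) { return java.util.Arrays.equals(password, attempt); }
    void dispose() { java.util.Arrays.fill(password, '\0'); }  // wipe sensitive data
}

class CredentialsDemo {
    public static void main(String[] args) {
        char[] secret = {'s', '3', 'c', 'r', '3', 't'};
        Credentials c = new Credentials(secret);
        java.util.Arrays.fill(secret, '\0');          // caller wipes its copy too
        System.out.println(c.matches(new char[]{'s', '3', 'c', 'r', '3', 't'})); // true
        c.dispose();                                  // wipe when no longer needed
    }
}
```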


 
Title:  
MACRO IMPACT ANALYSIS USING MACRO SLICING
Author(s):  
Laszlo Vidacs, Arpad Beszedes and Rudolf Ferenc
Abstract:  

The expressiveness of the C/C++ preprocessing facility enables the development of highly configurable source code. However, the use of language constructs like macros also bears the potential of producing highly incomprehensible and unmaintainable code, due to the flexibility and also to the “cryptic” nature of the preprocessor language. This could be overcome if suitable analysis tools were available for preprocessor-related issues; however, this is not the case (for instance, none of the modern Integrated Development Environments provide features to efficiently analyze and browse macro usage). A conspicuous problem in software maintenance is the correct (safe and efficient) management of change. In particular, for the aforementioned reasons, it is not yet possible to efficiently determine the impact of a change to a specific macro definition. In this paper we describe a method and tool for the impact analysis of macro definitions that reveals and analyzes the dependencies among macro-related program points. We do this by computing so-called macro slices, based on the detailed analysis information collected from a whole software system. We also give some preliminary experimental results on the analysis of industrial-size C++ software.


 
Title:  
A METHOD TO MODEL GUIDELINES FOR DEVELOPING RAILWAY SAFETY-CRITICAL SYSTEMS WITH UML
Author(s):  
D. D. Okalas Ossami, J.-M. Mota, L. Thiry, J.-M. Perronne, J.-L. Boulanger and G. Mariano
Abstract:  
There is today an abundance of standards concerned with the development and certification of railway safety-critical systems. They recommend the use of different techniques to describe system requirements and to pursue safety strategies. One problem shared by these standards is that they only prescribe what should be done or used, but provide no guidance on how their recommendations can be fulfilled. The purpose of this paper is to investigate a methodology for modelling guidelines for building certifiable UML models that cater for the needs and recommendations of railway standards. The paper explores some of the major tasks that are typical of development guidelines and illustrates practical steps for achieving these tasks.

 
Title:  
SOFTWARE DEFECT PREDICTION: HEURISTICS FOR WEIGHTED NAÏVE BAYES
Author(s):  
Burak Turhan and Ayse Bener
Abstract:  

Defect prediction is an important topic in software quality research. Statistical models for defect prediction can be built on project repositories, which store software metrics and defect information that is then matched with software modules. Naïve Bayes is a well-known, simple statistical technique that assumes the ‘independence’ and ‘equal importance’ of features, assumptions that do not hold in many problems. Nevertheless, Naïve Bayes achieves high performance on a wide spectrum of prediction problems. This paper addresses the ‘equal importance’ assumption of Naïve Bayes. We propose that, by means of heuristics, we can assign weights to features according to their importance and improve defect prediction performance. We compare the performance of the weighted Naïve Bayes and the standard Naïve Bayes predictors on publicly available datasets. Our experimental results indicate that assigning weights to software metrics increases the prediction performance significantly.
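To make the weighting idea concrete, here is a minimal sketch of weighted Naïve Bayes scoring with Gaussian likelihoods; scaling each feature's log-likelihood by a weight is one plausible reading of the idea, not necessarily the paper's exact heuristic.

```java
// Weighted Naive Bayes sketch: each feature's Gaussian log-likelihood is
// scaled by a heuristic weight w[i], so "important" software metrics
// influence the class score more. Compare scores across classes; highest wins.
class WeightedNaiveBayes {
    static double logScore(double[] x, double[] mean, double[] var,
                           double[] w, double logPrior) {
        double s = logPrior;
        for (int i = 0; i < x.length; i++) {
            double d = x[i] - mean[i];
            // weighted Gaussian log-likelihood of feature i
            s += w[i] * (-0.5 * Math.log(2 * Math.PI * var[i]) - d * d / (2 * var[i]));
        }
        return s;
    }

    public static void main(String[] args) {
        double[] x = {2.0, 10.0};          // metrics of one module (illustrative)
        double[] w = {1.0, 0.5};           // heuristic feature weights
        double defect = logScore(x, new double[]{3, 12}, new double[]{1, 4}, w, Math.log(0.3));
        double clean  = logScore(x, new double[]{1, 5},  new double[]{1, 4}, w, Math.log(0.7));
        System.out.println(defect > clean ? "defect-prone" : "clean");
    }
}
```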


 
Title:  
TEST FRAMEWORKS FOR ELUSIVE BUG TESTING
Author(s):  
W. E. Howden and Cliff Rhyne
Abstract:  
Elusive bugs can be particularly expensive because they often survive testing and are released in a deployed system. They are characterized as involving a combination of properties. One approach to their detection is bounded exhaustive testing (BET). This paper describes how to implement BET using a variation of JUnit, called BETUnit. The idea of a BET pattern is also introduced. BET patterns describe how to solve certain problems in the application of BETUnit. Classes of patterns include BET test generation and BET oracle design. Examples are given of each.
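As a flavour of bounded exhaustive testing, the following plain-Java sketch enumerates every input size up to a small bound and checks an invariant; BETUnit's actual API is not reproduced here, this only illustrates the underlying idea.

```java
// Bounded exhaustive testing in miniature: rather than sampling inputs,
// enumerate ALL cases up to a small bound and assert an invariant on each.
import java.util.Stack;

public class BoundedExhaustiveTest {
    public static void main(String[] args) {
        // Exhaustively test push/pop sequences for every size up to the bound.
        for (int n = 0; n <= 5; n++) {
            Stack<Integer> s = new Stack<>();
            for (int i = 0; i < n; i++) s.push(i);
            for (int i = n - 1; i >= 0; i--) {
                if (s.pop() != i) throw new AssertionError("LIFO violated at n=" + n);
            }
            if (!s.isEmpty()) throw new AssertionError("stack not empty at n=" + n);
        }
        System.out.println("all bounded cases passed");
    }
}
```

The value of the bounded approach is that combination-dependent (elusive) bugs cannot hide within the bound: every interaction of inputs up to that size is exercised.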

 
Title:  
SOFTWARE PROCESS CONVERSION RULES IN IMPPROS Quality Models Conversion for a Software Process Implementation Environment
Author(s):  
Sandro Ronaldo Bezerra Oliveira, Alexandre Marcos Lins de Vasconcelos and Tiago Soares Gonçalves
Abstract:  

Software process conversion is a technique based on mapping the relationships between the contents of quality norms and models. The basic premise of conversion is to adapt software processes without the effort needed to specify new models, while guaranteeing uniqueness and consistency. For a company to reach a given market, its software process must be guided by the patterns defined in a norm, and if it aims to penetrate other markets it may need to follow other, quite different norms. This paper presents a process for converting software processes across quality models and norms, and discusses some of the rules used to support the execution of this process in a software development context. This process is part of a software process implementation environment, called ImPProS, developed at CIn/UFPE (Center of Informatics, Federal University of Pernambuco).


 
Title:  
A PRODUCT LINE OF SOFTWARE REUSE COST MODELS
Author(s):  
Mustafa Korkmaz and Ali Mili
Abstract:  

In past work, we had proposed a software reuse cost model that combines relevant stakes and stakeholders in an integrated ROI-based model. In this paper we extend our earlier work in two directions: conceptually, by capturing aspects of the model that were heretofore unaccounted for; practically, by proposing a product line that supports a wide range of cost modeling applications.


 
Title:  
SIMULATION METHODOLOGIES FOR SCIENTIFIC COMPUTING Modern Application Design
Author(s):  
Philipp Schwaha, Markus Schwaha, Rene Heinzl
Abstract:  

We discuss methodologies for obtaining solutions to complex mathematical problems derived from physical models. We present an approach based on series expansion, using discretisation and averaging, and a stochastic approach. Various forms based on the Boltzmann equation are used as model problems. Each of the methodologies comes with its own strengths and weaknesses, which are briefly outlined. We also provide short code snippets to demonstrate implementations of key parts that make use of our generic scientific simulation environment, which combines high expressiveness with high runtime performance.


 
Title:  
NEW DESIGN TECHNIQUES FOR ENHANCING FAULT TOLERANT COTS SOFTWARE WRAPPERS
Author(s):  
Luping Chen and John May
Abstract:  

Component-based systems can be built by assembling components developed independently of the systems. Middleware code that connects the components is usually needed to assemble them into a system. The ordinary role of the middleware is simple glue code, but there is an opportunity to design it as a safety wrapper to control the integration of the components to help assure system dependability. This paper investigates some architectural designs for the safety wrappers using a nuclear protection system example. It integrates new fault-tolerant techniques based on diagnostic assertions and diverse redundancy into the middleware designs. This is an attractive option where complete trust in component reliability is impossible or costly to achieve.


 
Title:  
RESOURCE SUBSTITUTION FOR THE REALIZATION OF MOBILE INFORMATION SYSTEMS
Author(s):  
Hagen Hopfner and Christian Bunse
Abstract:  

Recent advances in wireless technology have led to mobile computing, a new dimension in data communication and processing. Market observers predict an emerging market with millions of mobile users carrying small, battery-powered terminals equipped with wireless connections and, as a result, a change in the way people use information resources. However, the realization of mobile information systems (mIS) is constrained by the users’ need to handle complex data sets as well as by the restrictions of the devices and networks used. Hence, software engineering has to bridge the gap between both worlds; it has to “play” with the given resources. Extensive wireless data transmission, which is expensive, slow, and energy-intensive, can, for example, be reduced if mobile clients cache received data locally. In this short paper we discuss which resources are substitutable, and how, in order to enable more complex, more reliable and more efficient mIS. To this end, we analyze the resources used for data management on mobile devices and show how they can be taken into account by software development approaches in order to implement mIS.


 
Title:  
GOAL-ORIENTED AUTOMATIC TEST CASE GENERATORS FOR MC/DC COMPLIANCY
Author(s):  
Emine G. Aydal, Jim Woodcock and Ana Cavalcanti
Abstract:  

Testing is a crucial phase of the software development process. Certification standards such as DO-178B impose certain steps to be accomplished in the testing phase and certain coverage criteria to be met in order to certify software as Level A software. Modified Condition/Decision Coverage (MC/DC), listed as one of these requirements in DO-178B, is one of the most difficult targets for testers and software developers to achieve. This paper presents the state-of-the-art goal-oriented automatic test case generators and evaluates them in the context of MC/DC satisfaction. It also aims to guide the production of MC/DC-compliant test case generators by pointing out the strengths and weaknesses of the current tools and by highlighting further expectations.
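For readers unfamiliar with the criterion, the following sketch shows MC/DC in miniature for the decision a && (b || c): each test pairs with another to show that one condition independently flips the outcome while the others are held fixed (an illustrative minimal set, not taken from the paper).

```java
// MC/DC in miniature: for a decision with n conditions, n+1 tests can suffice.
// Here n = 3, and the four tests below form one minimal MC/DC-adequate set.
public class McdcDemo {
    static boolean decision(boolean a, boolean b, boolean c) { return a && (b || c); }

    static void check(boolean actual, boolean expected) {
        if (actual != expected) throw new AssertionError();
    }

    public static void main(String[] args) {
        //               a      b      c        expected
        check(decision(true,  true,  false), true);   // with test 2, shows a flips the outcome
        check(decision(false, true,  false), false);  // only a differs from test 1
        check(decision(true,  false, false), false);  // with test 1, shows b flips the outcome
        check(decision(true,  false, true),  true);   // with test 3, shows c flips the outcome
        System.out.println("MC/DC-adequate set passed");
    }
}
```

Goal-oriented generators of the kind the paper surveys must, in effect, search for such independence pairs automatically, which is what makes MC/DC a hard target.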


 
Title:  
A MODEL-DRIVEN ENGINEERING APPROACH TO REQUIREMENTS ENGINEERING How These Disciplines May Benefit Each Other
Author(s):  
Begoña Moros, Cristina Vicente-Chicote and Ambrosio Toval
Abstract:  

The integration of Model Driven Engineering (MDE) principles into Requirements Engineering (RE) could be beneficial to both MDE approaches and RE. On the one hand, the definition of a requirements metamodel would allow requirements engineers to integrate all RE concepts in the same model and to know which elements are part of the RE process and how they are related. Besides, this requirements metamodel could be used as a common conceptual model for requirements management tools supporting the RE process. On the other hand, the requirements metamodel could be related to other metamodels describing analysis and design artefacts. This would align requirements with models and, as a consequence, requirements could be more easily integrated into current MDE approaches. To achieve this, the traditional RE process, focused on a document-based requirements specification, should be changed into a requirements modelling process. Thus, in this paper we propose a requirements modelling language (metamodel) aimed at easing the integration of requirements into an MDE approach. This metamodel, called REMM, is the basis of a requirements graphical modelling tool also implemented as part of this work. The tool allows requirements engineers to depict all the elements involved in the RE process and to trace relationships between them.


 
Title:  
A FORMAL APPROACH FOR THE DEVELOPMENT OF AUTOMATED SYSTEMS
Author(s):  
Olfa Mosbahi, Leila Jemni and Jacques Jaray
Abstract:  

This paper deals with the use of two verification approaches: theorem proving and model checking. We focus on the event B method, with its associated theorem-proving tool (Click_n_Prove), and on the language TLA+, with its model checker TLC. Given the limitation of the event B method to invariance properties, we propose to apply the language TLA+ to verify liveness properties of a software behavior. We first extend the expressivity of a B model (called a temporal B model) to deal with the specification of fairness and eventuality properties. Second, we give transformation rules from a temporal B model into a TLA+ module. In particular, we present our prototype system, called B2TLA+, which we have developed to support this transformation. Finally, we verify these properties with the TLC model checker.


 
Title:  
SCMM-TOOL Tool for Computer Automation of the Information Security Management Systems
Author(s):  
Luís Enrique Sánchez, Daniel Villafranca, Eduardo Fernández-Medina, Mario Piattini
Abstract:  
For enterprises to be able to use information technologies and communications with guarantees, it is necessary to have an adequate security management system and tools which allow them to manage it. In addition, the security management system must have highly reduced implementation and maintenance costs to be feasible for small and medium-sized enterprises (hereafter referred to as SMEs). In this paper, we show the tool we have developed using our model for the development, implementation and maintenance of a security management system, adapted to the needs and resources of an SME. Furthermore, we state how this tool lets enterprises with limited resources manage their security system very efficiently. This approach is being directly applied to real cases, thus obtaining a constant improvement in its application.

 
Title:  
A SOFTWARE TOOL FOR REQUIREMENTS SPECIFICATION On using the STORM Environment to Create SRS Documents
Author(s):  
Sergiu Dascalu, Eric Fritzinger, Kendra Cooper and Narayan Debnath
Abstract:  

STORM, presented in this paper, is a UML-based software engineering tool designed for the purpose of automating as much of the requirements specification phase as possible. The main idea of the STORM approach is to combine adequate requirements writing with robust use case modelling in order to expedite the process leading up to the actual design of the software. This paper presents a description of our approach to software requirements specification as well as an overview of STORM’s design concepts, organizing principles, and modes of operation. Also included are examples of the tool’s use, a comparison between STORM and similar CASE tools, and a discussion of needed features for software environments that support text aspects of requirements and use case modelling.


 
Title:  
IMPLEMENTING A VALUE-BASED APPROACH TO SOFTWARE PROCESS AND PRODUCT ASSESSMENT
Author(s):  
Pasi Ojala
Abstract:  

Recently, more and more attention has been focused on the costs of SPI as well as on the cost-effectiveness and productivity of software development. This study outlines the main concepts and principles of a value-based approach and presents an industrial case where value assessment based on this approach has been used in practice. The results of the industrial case show that even though there is still much to do in making the economics-driven view complete in software engineering, the value-based approach outlines a way towards a more comprehensive understanding of it. For companies, value assessment offers useful help when struggling with cost-effectiveness and productivity related problems.


 
Title:  
CLOSING THE BUSINESS-APPLICATION GAP IN SOA Challenges and Solution Directions
Author(s):  
Boris Shishkov, Jan L.G. Dietz and Marten van Sinderen
Abstract:  

Adequately resolving the business-software gap in the context of SOA (Service-Oriented Architecture) appears to be a non-trivial task, mainly because of the dual nature of the service concept: (i) services are inevitably business-restricted because they operate in real-life environments; (ii) services are also technology-restricted because the software components realizing them have to obey the restrictions of their complex technology-driven environments. The existence of these two restriction directions makes (SOA-driven) business-software alignment challenging, and current business-software mapping mechanisms can only play a limited role here. With regard to this, the contribution of the current paper is two-fold: 1. it analyzes SOA and its actual challenges from a business-software-alignment perspective, deriving essential desirable properties of SOA applications; 2. it proposes directions for software service specification, particularly concerning (SOA-driven) business-software mapping. This contribution is expected to be useful in current software development.


 
Title:  
PRIORITIZATION OF PROCESSES FOR SOFTWARE PROCESS IMPROVEMENT IN SMALL SOFTWARE ENTERPRISES
Author(s):  
Francisco J. Pino, Félix Garcia, Mario Piattini
Abstract:  

In this article, a set of processes considered to be of high priority when initiating the implementation of a Software Process Improvement (SPI) project in Very Small Software Enterprises (VSEs) is presented. The objective is to provide VSEs with a strategy for dealing with the first processes that must be considered when they undertake an SPI project. The processes proposed in this article are fundamentally based on the analysis and contrast of several pieces of research carried out by the COMPETISOFT project. The fundamental principle of the proposal is that process improvement must be connected with the other software process management responsibilities.


 
Title:  
SCHEME FOR COMPARING RESULTS OF DIVERSE SOFTWARE VERSIONS
Author(s):  
Viktor Mashkov and Jaroslav Pokorny
Abstract:  

The paper presents a scheme for comparing the results produced by diversely designed SW versions in order to select and deliver a presumably correct result. It also makes it possible to determine all faulty SW versions and all faulty comparators. Compared to the majority voting scheme, it requires fewer result comparisons and is able, in most situations, to deliver a presumably correct service even if the number of faulty SW versions is greater than the number of correct ones. The scheme is based on the system-level diagnosis technique, particularly on the comparison-based testing model. The proposed scheme can be used for designing fault-tolerant diverse servers and for improving the adjudicator in the N-version programming technique.
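
The abstract does not spell out the comparison algorithm itself; as a rough, majority-style illustration of comparison-based adjudication (not the paper's diagnosis scheme, which can also tolerate a faulty majority and faulty comparators), the following Python sketch groups version outputs by pairwise agreement and flags the versions outside the largest agreeing group. All names are hypothetical.

```python
def adjudicate(results, agree):
    """Group version outputs by mutual pairwise agreement and return the
    result of the largest agreeing group, plus the versions flagged as
    presumably faulty. `results` maps version id -> output; `agree` is a
    (possibly imperfect) comparator returning True when two outputs match."""
    groups = []  # each group: version ids whose outputs mutually agree
    for vid, out in results.items():
        for g in groups:
            if all(agree(out, results[other]) for other in g):
                g.append(vid)
                break
        else:
            groups.append([vid])
    best = max(groups, key=len)
    faulty = [v for v in results if v not in best]
    return results[best[0]], faulty

# Usage: three diverse versions, one of which disagrees
result, faulty = adjudicate({"v1": 42, "v2": 42, "v3": 41},
                            agree=lambda a, b: a == b)
print(result, faulty)  # 42 ['v3']
```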


 
Title:  
TOWARDS A UNIFIED SECURITY/SAFETY FRAMEWORK A Design Approach to Embedded System Applications
Author(s):  
Miroslav Sveda and Radimir Vrba
Abstract:  

This paper presents a safety- and security-based approach to networked embedded system design that offers reusable design patterns for various domain-dedicated applications. After introducing proper terminology, it deals with support for developing industrial, sensor-based applications built from distributed components interconnected by the wired Internet and/or wireless sensor networks. The paper presents a dependability-driven approach to embedded network design for a class of Internet-based applications. It discusses an abstract framework stemming from embedded system networking technologies using wired and wireless LANs, and from the IEEE 1451.1 smart transducer interface standard, which supports client-server and publish-subscribe communication patterns with group messaging based on IP multicast, mediating safe and secure access to smart sensors through the Internet and ZigBee. The case study demonstrates how clients can effectively access groups of wireless smart pressure and temperature sensors and safety valves through the Internet using the developed system architecture, which respects the prescribed requirements for application-dependent safety and security.


 
Title:  
THE MISSING LAYER Deficiencies in Current Rich Client Architectures, and their Remedies
Author(s):  
Brendan Lawlor and Jeanne Stynes
Abstract:  

There is an architectural deficit in most rich client applications currently undertaken: In n-tier applications the presentation layer is represented as a single layer. This fits badly with business layers that are increasingly organized along Service Oriented Architecture lines. In n-tier systems in general, and SOA systems in particular, the client’s role is to combine a number of services into a single application. Low-level patterns, mostly based on MVC, can support the design of individual components, each one communicating with a particular back end service. No commonly understood pattern is currently evident that would allow these components to be combined into a loosely coupled application. This paper outlines a rich client architecture that addresses this gap by adding a client application layer.


 
Title:  
RE-USING EXPERIENCE IN INFORMATION SYSTEMS DEVELOPMENT
Author(s):  
Paulo Tomé, Ernesto Costa, Luis Amaral
Abstract:  

Information Systems Development (ISD) is an important organizational activity that generally involves the development of models. This paper describes a framework, supported by the Case-Based Reasoning (CBR) method, that enables the reuse of experience in model development in the context of the ISD process.


 
Title:  
TOWARDS A NEW CODE-BASED SOFTWARE DEVELOPMENT CONCEPT ENABLING CODE PATTERNS
Author(s):  
Klaus Meffert and Ilka Philippow
Abstract:  

Modern software development is driven by many critical forces. Among them are fast deployment requirements and easy-to-maintain code. These forces are contradicted by the rising complexity of the technological landscape, among others. We introduce a concept aiding in lowering these negative aspects for code-based software development. Protagonists of our work are explicit semantics in source code and newly introduced code pattern templates, which enable code transformations. Throughout this paper, the term code pattern includes architectural patterns, design patterns, and refactoring operations. Enabling automated transformations stands for providing means of executing possibly premature transformations.


 
Title:  
A COMPUTERIZED TUTOR FOR ARCHITECTING SOFTWARE Supporting the Creative Aspects of Software Development
Author(s):  
José L. Fernández-Sánchez and Javier Carracedo Pais
Abstract:  

CASE tools must be more user-oriented, and support creative problem-solving aspects of software engineering as well as rigorous modelling based on standard notations such as UML. Knowledge based systems, and particularly intelligent agents, provide the technology to implement user-oriented CASE tools. Here we present an intelligent agent implemented as a CASE tool module. The agent guides the software architect through the architecting process, suggesting the actions to be performed and the methodology rules that apply to the current problem context.


 
Title:  
REQUIREMENTS DEFINITIONS OF REAL-TIME SYSTEM USING THE BEHAVIORAL PATTERNS ANALYSIS (BPA) APPROACH The Elevator Control System
Author(s):  
Assem El-Ansary
Abstract:  

This paper presents a new event-oriented Behavioral Pattern Analysis (BPA) modeling approach. In BPA, events are considered the primary objects of the world model. Events are a more effective alternative to use cases in modeling and understanding functional requirements. The Event defined in BPA is a real-life conceptual entity that is unrelated to any implementation. The BPA Behavioral Patterns are temporally ordered according to the sequence of real-world events. The major contributions of this research are: (1) the Behavioral Pattern Analysis (BPA) modeling approach; and (2) validation of the hypothesis that BPA is a more effective alternative to Use Case Analysis (UCA) in modeling the functional requirements of Human-Machine Safety-Critical Real-time Systems.


 
Title:  
DETECTING ASPECTUAL BEHAVIOR IN UML INTERACTION DIAGRAMS
Author(s):  
Amir Abdollahi Foumani and Constantinos Constantinides
Abstract:  

In this paper we discuss an approach to detect potential aspectual behavior in UML interaction diagrams. We use a case study to demonstrate how our approach can be realized: We adopt a production system to represent the static and dynamic behavior of a design model. Derivation sentences generated from the production representation of the dynamic model allow us to apply certain strategies in order to detect aspectual behavior, which we categorize into “horizontal” and “vertical.” Our approach can aid developers by providing indications over their designs where restructuring may be desired.


 
Title:  
AN IMPROVEMENT TO THE MIXED MDA-SOFTWARE FACTORY APPROACH: A REAL CASE
Author(s):  
Gustavo Muñoz Gómez and Juan Carlos Granja
Abstract:  

In this article, we offer a solution to the mixed MDA–software factory model which enables greater satisfaction of the requirements of a product line, based on the work of Gary Chastek’s team, with the application of the transformations required for generating the three components necessary for the creation of product families using the mixed approach. In order to validate the chosen representation and transformations, we focus on a real case which appeared in a previous article by Muñoz et al. (2006). An interesting option is to explore in greater depth the requirements of the family of programs that we want to create and to obtain the product line, framework and specific language from these. For this purpose, we use the representation system of Chastek et al. (2001), which allows us to represent the requirements using three CIM models and a dictionary of specific terms. The mixed MDA–software factory approach (Muñoz, J., Pelechano, V.) enables the advantages of both approaches to be enjoyed, using the PIM models as a starting point.


 
Title:  
A CASE STUDY OF DISTRIBUTED AND EVOLVING APPLICATIONS USING SEPARATION OF CONCERNS
Author(s):  
Hamid Mcheick, Hafedh Mili, Rakan Mcheik
Abstract:  

Researchers and practitioners have noted that the most difficult task is not developing software in the first place, but rather changing it afterwards: the software’s requirements change, the software needs to execute more efficiently, etc. For instance, changing the architecture of an application from a stand-alone one to a distributed one is still an issue. Generally speaking, distribution logic should be encapsulated in components by means of aspect-oriented techniques (separation of concerns), where we define an aspect as a software artefact that addresses a concern. These aspects can also be offered by the same object, which changes its behaviour during its lifetime. We investigate the following ideas through a case study. Firstly, what modifications are needed to transform a local application into a distributed one, using a number of target platforms (RMI, EJBs, etc.)? Secondly, we analyze aspect-oriented development techniques to determine which technique best supports the changes required to integrate new requirements such as distribution.


 
Title:  
SOFTWARE ENGINEERING LESSONS LEARNED FROM DEVELOPING AND MAINTAINING WEBSITES
Author(s):  
Tammy Kam Hung Chan and Zhen Hua Liu
Abstract:  

Developing, maintaining and enhancing software features and functions for production websites are challenging software engineering activities. Many aspects of software engineering practices and methodologies differ when developing software features and systems for a 24x7 production website compared with developing classical standalone software systems or client-server systems. This experience paper describes software engineering lessons that we have learned from developing, enhancing and maintaining software features for production websites and summarizes the key software engineering principles and practices that are essential for delivering successful 24x7 E-commerce based production websites.


 
Title:  
UNDERSTANDING PRODUCT LINES THROUGH DESIGN PATTERNS
Author(s):  
Daniel Cabrero, Javier Garzás and Mario Piattini
Abstract:  

Many proposals concerning design and implementation of Software Product Lines have been studied in the last few years. This work points out how and why different Design Patterns are used in the context of Product Lines. This will be achieved by reviewing how often those patterns appear in different proposed solutions and research papers for Product Lines for a given set of sources. This information will help us identify which specific problems need to be solved in the context of Product Lines. In addition, we will discuss how this information can be useful to identify gaps in new research.


 
Title:  
HARDWARE PROJECT MANAGEMENT What we Can Learn from the Software Development Process for Hardware Design?
Author(s):  
Rolf Drechsler and Andreas Breiter
Abstract:  

Nowadays the hardware development process is more and more software-oriented. Hardware description languages (HDLs), like VHDL or Verilog, are used to describe the hardware at the register-transfer level (RTL) or at even higher levels of abstraction. Considering ASICs of more than 10 million gates and an HDL-to-gate ratio of approximately 1:10 to 1:100, i.e. one line of HDL code at the RTL corresponds to 10 to 100 gates in the netlist, the HDL description consists of several hundred thousand lines of code. While classical hardware design focuses purely on the development of efficient tools to support the designer, in industrial work processes the development cycle becomes more and more important. In this paper we discuss an approach in which known concepts from software engineering and project management are studied and transferred to the hardware domain. Several aspects are pointed out that should ensure high-quality designs, and in this way the paper presents a way of working towards a more robust design process through a tight integration of hardware design and project management. The intention of this work is not to provide an exhaustive discussion, but many points are addressed that, with increasing circuit complexity, will become more and more important for successful ASIC design.


 
Title:  
AN EXPERIMENTAL EVALUATION OF SOFTWARE PERFORMANCE MODELING AND ANALYSIS TECHNIQUES
Author(s):  
Julie A. Street and Robert G. Pettit IV
Abstract:  

In many software development projects, performance requirements are not addressed until after the application is developed or deployed, resulting in costly changes to the software or the acquisition of expensive high-performance hardware. Many techniques exist for conducting performance modeling and analysis during the design phase; however, there is little information on their effectiveness. This paper presents an experiment that compared the predicted data from the UML Profile for Schedulability, Performance, and Time (SPT) paired with statistical simulation and coloured Petri nets (CPNs) for a sample implementation. We then discuss the results from applying these techniques.


 
Title:  
TOWARDS A KNOWLEDGE BASE TO IMPROVE REUSABILITY OF DESIGN PATTERN
Author(s):  
Cédric Bouhours, Hervé Leblanc and Christian Percebois
Abstract:  

In this paper, we propose to take the knowledge of experts directly into account during a design review activity. Such an activity requires the ability to analyze and transform models, in particular to inject design patterns. Our approach consists in identifying model fragments which can be replaced by design patterns. We name these fragments “alternative models” because they solve the same problem as the pattern, but with a more complex or different structure than the pattern. In order to classify and explain the design defects in this base of alternative models, we propose the concept of strong point. A strong point is a key design feature which permits the pattern to resolve a problem most efficiently.


 
Title:  
MODEL-DRIVEN DEVELOPMENT OF GRAPHICAL TOOLS Fujaba Meets GMF
Author(s):  
Thomas Buchmann, Alexander Dotor and Bernhard Westfechtel
Abstract:  

In this paper we describe and evaluate our combination of the Fujaba CASE tool with the Graphical Modeling Framework (GMF) of the Eclipse IDE. We created an operational model with Fujaba and used it as input for a GMF editor generation process. This allows us to introduce a new approach for generating fully operational models, including graphical editors for model representation and transformation. By making our development process explicit, this paper acts as a guide for applying this approach to other projects as well.


 
Title:  
A STUDY ON SOFTWARE PROJECT COACHING MODEL USING TSP IN SAMSUNG
Author(s):  
Taehee Gwak and Yoonjung Jang
Abstract:  

Reasonable planning of the project, monitoring of the project status, and controlling of the project with appropriate corrective actions are key factors for successful software project management. If the project leader gets help and support from an expert who has the know-how of software project management, instead of depending only on their own discretion, these activities can be conducted much more effectively. The TSP (Team Software Process) is a process framework developed to provide guidelines on software development and management activities for teams. Samsung Electronics introduced the PSP/TSP technology in 2003 to meet visualization and high-efficiency needs in software development. In this paper, we propose a software project coaching model based on the TSP/PSP experiences to support the project leader and the team members efficiently, and we analyze the results and effects of applying it.


 
Title:  
V3 STUDIO: A COMPONENT-BASED ARCHITECTURE DESCRIPTION META-MODEL Extensions to Model Component Behaviour Variability
Author(s):  
Cristina Vicente-Chicote, Diego Alonso and Franck Chauvel
Abstract:  

This paper presents a Model-Driven Engineering approach to component-based architecture description, which provides designers with two variability modelling mechanisms, both of them regarding component behaviour. The first one deals with how components perform their activities (which algorithm is followed), and the second one deals with how these activities are implemented, for instance, using different Commercial Off-The-Shelf (COTS) products. To achieve this, the basic V3Studio meta-model, which allows designers to model both the structure and behaviour of component-based software systems, is presented. V3Studio, which takes many of its elements from the UML 2.0 meta-model, offers three loosely coupled views of the system under development, namely: a structural view (component diagrams), a coordination view (state-machine diagrams), and a dataflow view (activity diagrams). The last two of them, concerning component behaviour, are then extended to incorporate the two variability mechanisms previously introduced. To conclude, a case study regarding the design and implementation of a vision guided robotic system is presented, which demonstrates the feasibility of the proposal.


 
Title:  
E-LEARNING FOR HEALTH ISSUES BASED ON RULE-BASED REASONING AND MULTI-CRITERIA DECISION MAKING
Author(s):  
Katerina Kabassi, Maria Virvou and George Tsihrintzis
Abstract:  

The paper presents an e-learning system called INTATU, which provides education on Atheromatosis. Atheromatosis is a disease that is of interest not only to doctors, but also to common users without any medical background. For this purpose, the system maintains and processes information about the users’ interests and background knowledge and provides individualized learning for the domain of Atheromatosis. More specifically, the reasoning mechanism in INTATU uses a novel combination of rule-based reasoning and a multi-criteria decision making theory called SAW for selecting the theory topics that appear to be most appropriate for a particular user with respect to his/her background knowledge and interest.
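
The abstract names SAW (Simple Additive Weighting) as the multi-criteria theory; the following Python sketch shows the standard SAW computation on hypothetical topic-selection data. The topic names, criteria and weights are illustrative, not taken from INTATU.

```python
def saw_rank(alternatives, weights):
    """Simple Additive Weighting (SAW): normalise each criterion to [0, 1]
    (benefit criteria with positive values assumed), then rank alternatives
    by the weighted sum. `alternatives` maps name -> list of criterion
    values; `weights` should sum to 1."""
    n = len(weights)
    maxima = [max(vals[i] for vals in alternatives.values()) for i in range(n)]
    scores = {
        name: sum(w * (vals[i] / maxima[i]) for i, w in enumerate(weights))
        for name, vals in alternatives.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical topics scored on (interest match, background-knowledge fit)
topics = {"anatomy_basics": [0.9, 0.8], "advanced_lipids": [0.6, 0.3]}
print(saw_rank(topics, weights=[0.6, 0.4]))  # anatomy_basics ranks first
```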


 
Title:  
COSA: AN ARCHITECTURAL DESCRIPTION META-MODEL
Author(s):  
Sylvain Maillard, Adel Smeda and Mourad Oussalah
Abstract:  

As software systems grow, their complexity increases dramatically; in consequence, understanding and evolving them becomes a difficult task. To cope with this complexity, sophisticated approaches are needed to describe the architecture of these systems. Architectural description is becoming much more visible as an important and explicit analysis and design activity in software development. The architecture of a software system can be described using either an architecture description language (ADL) or an object-oriented modeling language. In this article, we present a hybrid model, based on the two approaches, to describe the architecture of software systems. The principal contribution of this approach is, on the one hand, to extend ADLs with object-oriented concepts and mechanisms and, on the other hand, to describe connectors as first-class entities that can treat the complex dependencies among components.


 
Title:  
A METHODOLOGY TO FINALIZE THE REQUIREMENTS FOR A PROJECT WITH MULTIPLE STAKEHOLDERS Presenting Software Engineering Workshop as a Solution
Author(s):  
Ashutosh Parashar and Selvakumaran Mannappan
Abstract:  

Implementing software projects for large corporations, more often than not, involves a large number of stakeholders, each with their own set of requirements, which makes requirements finalization very difficult. The authors propose the Solution Envisioning Workshop (SEW) as a solution and present the practice in the context of a large project executed for a European banking giant. The project had a very large and diverse set of stakeholders: around 300 member banks as the client organizations, interfacing requirements with around ten separate systems/projects, and active involvement of central departments of the organization as active stakeholders. The paper elaborates on the approach taken towards implementing the SEW, the preparatory and follow-up activities, the benefits, the limitations and the lessons learnt. The authors conclude that the SEW approach results in better understanding and much faster requirements finalization. Quantitative and qualitative inputs are provided to corroborate the findings.


 
Area 3 - Distributed and Parallel Systems  
Title:  
A MODEL BASED APPROACH FOR DEVELOPING ADAPTIVE MULTIMODAL INTERACTIVE SYSTEMS
Author(s):  
Waltenegus Dargie, Anja Strunk, Matthias Winkler, Bernd Mrohs, Sunil Thakar and Wilfried Enkelmann
Abstract:  
Currently available mobile devices lack the flexibility and simplicity their users require of them. To start with, most of them rigidly offer impoverished, traditional interactive mechanisms which are not handy for mobile users. Those which are multimodal lack the grace to adapt to the current task and context of their users. Some of the reasons for such inflexibility are the cost, duration and complexity of developing adaptive multimodal interactive systems. This paper motivates and introduces a modelling and development platform – the EMODE platform – which enables the rapid development and deployment of adaptive and multimodal mobile interactive systems.

 
Title:  
CONSTRUCTION OF BENCHMARKS FOR COMPARISON OF GRID RESOURCE PLANNING ALGORITHMS
Author(s):  
Wolfgang Süß, Alexander Quinte, Wilfried Jakob and Karl-Uwe Stucky
Abstract:  
The present contribution focuses on the systematic construction of benchmarks used for the evaluation of resource planning systems. Two characteristics for assessing the complexity of the benchmarks were developed. These benchmarks were used to evaluate the resource management system GORBA and the optimization strategies for resource planning applied in this system. At first, major aspects of GORBA, in particular two-step resource planning, are described briefly, before the different classes of benchmarks are defined. With the help of these benchmarks, GORBA was evaluated. The evaluation results are presented and conclusions drawn. The contribution is completed by an outlook on further activities.

 
Title:  
DISTRIBUTED PATH RESTORATION ALGORITHM FOR ANONYMITY IN P2P FILE SHARING SYSTEMS
Author(s):  
Pilar Manzanares-Lopez, Juan Pedro Munoz-Gea, Jose Maria Malgosa-Sanahuja, Juan Carlos Sanchez-Aarnoutse and Joan Garcia-Haro
Abstract:  
In this paper, a new mechanism to achieve anonymity in peer-to-peer (P2P) file sharing systems is proposed. As usual, the anonymity is obtained by means of connecting the source and destination peers through a set of intermediate nodes, creating a multiple-hop path. The main paper contribution is a distributed algorithm able to guarantee the anonymity even when a node in a path fails (voluntarily or not). The algorithm takes into account the inherent costs associated with multiple-hop communications and tries to reach a well-balanced solution between the anonymity degree and its associated costs. Some parameters are obtained analytically but the main network performances are evaluated by simulation.

 
Title:  
ADDING UNDERLAY AWARE FAULT TOLERANCE TO HIERARCHICAL EVENT BROKER NETWORKS
Author(s):  
Madhu Kumar S.D., Umesh Bellur and Erusu Kranthi Kiran
Abstract:  
Recent studies have shown that the quality of service of overlay topologies and routing algorithms for event broker networks can be improved by the use of underlying network information. Hierarchical topologies are widely used in recent event-based publish-subscribe systems for reduced message traffic. We hypothesize that the performance and fault tolerance of existing hierarchical topology based event broker networks can be improved by augmenting the construction of the overlay and subsequent routing with the underlay information. In this paper we present a linear time algorithm for constructing a fault tolerant overlay topology for event broker networks that can tolerate single node and link failures and improve the routing performance by balancing network load. We test the algorithm on the SIENA event based middleware which follows the hierarchical model for event brokers. We present simulation results that support the claim that the use of underlay information can significantly increase the robustness of the overlay topology and performance of the routing algorithm for hierarchical event broker networks.

 
Title:  
PERFORMANCE ANALYSIS OF SCHEDULING-BASED LOAD BALANCING FOR DISTRIBUTED AND PARALLEL SYSTEMS USING VISUALSIM
Author(s):  
Abu Asaduzzaman, Manira Rani and Darryl Koivisto
Abstract:  
The concurrency in a distributed and parallel system can be used to improve the performance of that system by properly distributing the tasks among the processors. However, the advantage of parallelism may be offset by the increased complexity of load balancing techniques. Scheduling is proven to be an effective technique for load balancing in any distributed and parallel system. Studies indicate that for application-specific systems static scheduling may be the potential choice due to its simplicity. In this paper, we analyze the performance of load balancing by static scheduling for distributed and parallel systems. Using VisualSim, we develop a simulation program that models a system with three processors working simultaneously on a single problem. We obtain the response time and completion time for different scheduling algorithms and task groups. Simulation results show that load balancing by scheduling has significant impact on the performance of distributed and parallel systems.

 
Title:  
WEB SERVICE TRANSACTION MANAGEMENT
Author(s):  
Frans A. Henskens
Abstract:  
This paper describes the extension of the functionality of conventional web browsers to produce a new, enhanced web browser. Each instance of this enhanced browser is part of a federation of browser instances that use a directed graph-based technique to provide transaction, and hence concurrency, control over access to web services. These ‘super browsers’ communicate with web-based services across the Internet, with application code that may be obtained from the Internet but then executes as a local program, and with other browser instances.

 
Title:  
MULTI-CRITERION GENETIC PROGRAMMING WITH NEGATIVE SELECTION FOR FINDING PARETO SOLUTIONS
Author(s):  
Jerzy Marian Balicki
Abstract:  
Multi-criterion genetic programming (MGP) is a relatively new approach to decision-making support, and it can be applied to determine Pareto solutions. This purpose can be achieved by formulating a multi-criterion optimization problem that is then solved by genetic programming. An improved negative selection procedure to handle constraints in MGP is proposed. In the test instance, both the workload of a bottleneck computer and the cost of the system are minimized; in contrast, the reliability of the distributed system is maximized.
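
As a small illustration of the Pareto machinery the abstract relies on (not the paper's genetic programming or negative selection procedure), the sketch below checks dominance and filters a population down to its Pareto front, with the three objectives of the test instance recast as minimisation by negating reliability.

```python
def dominates(a, b):
    """True if solution `a` Pareto-dominates `b`; all objectives are
    expressed as minimisation (workload, cost, negated reliability)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Keep the non-dominated solutions; each solution is an objective tuple."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Objective tuples: (bottleneck workload, system cost, -reliability)
pop = [(0.7, 120, -0.95), (0.6, 150, -0.95), (0.8, 160, -0.90)]
print(pareto_front(pop))  # the third solution is dominated by the first
```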

 
Title:  
A HYPER-HEURISTIC FOR SCHEDULING INDEPENDENT JOBS IN COMPUTATIONAL GRIDS
Author(s):  
Juan Antonio Gonzalez, Maria Serna and Fatos Xhafa
Abstract:  
In this paper we present the design and implementation of a hyper-heuristic for efficiently scheduling independent jobs in Computational Grids. An efficient scheduling of jobs to Grid resources depends on many parameters, among others the characteristics of the Grid infrastructure and job characteristics (such as computing capacity, consistency of computing, etc.). Existing ad hoc scheduling methods (batch and immediate mode) have shown their efficacy for certain types of Grids and job characteristics. However, as stand-alone methods, they are not able to produce the best planning of jobs to resources for different types of Grid resources and job characteristics. In this work we have designed and implemented a hyper-heuristic that uses a set of ad hoc (immediate and batch mode) scheduling methods to provide the scheduling of jobs to Grid nodes according to the Grid and job characteristics. The hyper-heuristic is a high-level algorithm which examines the state and characteristics of the Grid system (jobs and resources), and selects and applies the ad hoc method that yields the best planning of jobs to Grid resources. The resulting hyper-heuristic based scheduler can thus be used to develop network-aware applications that need efficient planning of jobs to resources. The hyper-heuristic has been tested and evaluated in a dynamic setting through a prototype of a Grid simulator. The experimental evaluation showed the usefulness of the hyper-heuristic in planning jobs to resources, as opposed to planning without knowledge of the Grid and job characteristics.
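
A minimal sketch of the high-level selection idea, under the assumption that each low-level scheduler exposes a common interface and that plans can be scored (e.g. by makespan); `heuristics` and `evaluate` are assumed names, not the paper's API.

```python
def hyper_schedule(jobs, resources, heuristics, evaluate):
    """High-level hyper-heuristic step: run every low-level ad hoc scheduler
    on the current Grid state, score each resulting plan (lower is better,
    e.g. makespan), and return the name and plan of the winner."""
    plans = {name: h(jobs, resources) for name, h in heuristics.items()}
    best = min(plans, key=lambda name: evaluate(plans[name]))
    return best, plans[best]
```

In the paper's setting, the low-level set would contain the immediate and batch mode methods the abstract mentions; this sketch simply re-plans with all of them on the current state and keeps the best.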

 
Title:  
USING RULE-BASED ENGINE TO SUPPORT TEST VALIDATION MANAGEMENT OF COMPLEX SAFETY-CRITICAL SYSTEMS
Author(s):    
Valentina Accili, Giovanni Cantone, Christian Di Biagio, Guido Pennella and Fabrizio Gori
Abstract:  
Testing and validating software components in distributed architecture environments are critical activities for our reference company, where those activities have until now been performed in a non-automatic way, consuming time and human resources. As a consequence, we were charged with designing and constructing a flexible system, the Automated Test Manager (ATM), for automatic software testing and automatic validation of test results. In this paper we focus on the ATM-Console subsystem, which handles the validation aspect of the ATM system. This subsystem reuses an open-source rule-based engine, which is able to meet our purposes. Based on results from a case study, the paper reports that introducing the ATM-Console in the field could very significantly improve the efficiency of test validation.

 
Title:    
LOCATION MANAGEMENT IN DISTRIBUTED, SERVICE-ORIENTED COMMAND AND CONTROL SYSTEMS
Author(s):  
Thomas Nitsche
Abstract:  
In this paper we propose an efficient location management scheme for large numbers of mobile users and other objects in distributed, service-oriented systems. To efficiently observe geographic areas of interest (AOI) in command and control information systems (C2IS), i.e. to compute the AOI within a C2IS, we introduce the concept of region services. These services contain all objects of a fixed geographic region. To handle inhomogeneous distributions of objects, we propose a combination of regular and hierarchical regions. A user-specific C2IS instance can now directly and efficiently establish subscription relations to the relevant objects around its AOI in order to obtain information about the position, status and behaviour of these objects. If objects, including the current user itself, now dynamically change their position, we merely have to update the information relations to those few objects that enter or leave a region within the AOI, instead of having to consider all objects within the global information grid. Region services thus not only improve the efficiency of generating a static common operational picture, but can also handle dynamic changes of object positions.

 
Title:  
REGULATION MECHANISM FOR CACHING IN PORTAL APPLICATIONS
Author(s):  
Mehregan Mahdavi, John Shepherd and Boualem Benatallah
Abstract:  
Web portals are emerging Web-based applications that provide a single interface to access different data or service providers. Caching data from different providers at the portal can increase the performance of the system in terms of throughput and user-perceived delay. The portal and its providers can collaborate in order to determine the candidate caching objects. The providers allocate a caching score to each object sent to the portal. The decision for caching an object is made at the portal mainly based on these scores. However, the fact that it is up to providers to calculate such caching scores may lead to inconsistencies between them. The portal should detect these inconsistencies and regulate them in order to achieve a fair and effective caching strategy.

 
Area 4 - Information Systems and Data Management  
Title:  
VERSION CONTROL FOR RDF TRIPLE STORES
Author(s):  
Steve Cassidy and James Ballantine
Abstract:  
RDF, the core data format for the Semantic Web, is increasingly being deployed both from automated sources and via human authoring, either directly or through tools that generate RDF output. As individuals build up large amounts of RDF data and as groups begin to collaborate on authoring knowledge stores in RDF, the need for some kind of version management becomes apparent. While there are many version control systems available for program source code and even for XML data, the use of version control for RDF data is not a widely explored area. This paper examines an existing version control system for program source code, Darcs, which is grounded in a semi-formal theory of patches, and proposes an adaptation to directly manage versions of an RDF triple store.
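
A minimal sketch of the patch idea carried over to triples, assuming a patch is just a pair of added and removed triple sets; Darcs' actual theory additionally covers patch commutation and merging, which this sketch omits.

```python
class TriplePatch:
    """A patch over an RDF triple store, in the spirit of Darcs' theory of
    patches: a set of added and a set of removed (subject, predicate, object)
    triples. Inversion swaps the two sets; application is set arithmetic."""
    def __init__(self, adds=(), removes=()):
        self.adds, self.removes = frozenset(adds), frozenset(removes)

    def apply(self, store):
        return (store - self.removes) | self.adds

    def invert(self):
        return TriplePatch(adds=self.removes, removes=self.adds)

store = {("ex:doc1", "dc:creator", "ex:alice")}
p = TriplePatch(adds={("ex:doc1", "dc:title", "'Draft'")})
store2 = p.apply(store)
assert p.invert().apply(store2) == store  # a patch undoes itself
```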

 
Title:  
ANALYSIS OF ONTOLOGICAL INSTANCES A Data Warehouse for the Semantic Web
Author(s):  
Roxana Danger and Rafael Berlanga
Abstract:  
New data warehouse tools for the Semantic Web are becoming more and more necessary. The present paper formalizes one such tool considering, on the one hand, the semantics and theoretical foundations of Description Logic and, on the other hand, current developments in information data generalization. The presented model is constituted by dimensions and multidimensional schemata and spaces. An algorithm to retrieve interesting spaces according to the data distribution is also proposed. Some ideas from Data Mining techniques are incorporated in order to allow users to discover knowledge from the Semantic Web.

 
Title:  
OPTIMIZATION OF DISTRIBUTED OLAP CUBES WITH AN ADAPTIVE SIMULATED ANNEALING ALGORITHM
Author(s):  
Jorge Loureiro and Orlando Belo
Abstract:  
The materialization of multidimensional structures is a sine qua non condition of performance for OLAP systems. Several proposals have addressed the problem of selecting the optimal set of aggregations for the centralized OLAP approach. But OLAP structures may also be distributed to capture the known advantages of distributed databases. However, this approach introduces another term into the optimization equation: space, which generates new inter-node subcube dependencies. The problem to solve is the selection of the most appropriate cubes, but also their correct allocation. The optimization heuristics now face extra complexity, which hardens their search for solutions. To address this extended problem, this paper proposes a simulated annealing heuristic which includes an adaptive mechanism concerning the size of each move of the hill climber. The results of the experimental simulation show that this algorithm is a good solution for this kind of problem, especially given its remarkable scalability.
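
A generic simulated annealing skeleton with an adaptive move width, in the spirit of the adaptive mechanism the abstract describes; `neighbour`, `cost` and the widening/narrowing factors are assumptions, not the paper's parameters.

```python
import math, random

def adaptive_sa(initial, neighbour, cost, temp0=1.0, cooling=0.95, steps=1000):
    """Simulated annealing with an adaptive move size: the step width grows
    when moves are accepted and shrinks when they are rejected, so the hill
    climber takes large steps in promising regions and small steps otherwise."""
    state, t, width = initial, temp0, 1.0
    best, best_cost = state, cost(state)
    for _ in range(steps):
        cand = neighbour(state, width)           # e.g. move `width` subcubes
        delta = cost(cand) - cost(state)
        if delta < 0 or random.random() < math.exp(-delta / t):
            state, width = cand, min(width * 1.1, 10.0)  # accepted: widen
            if cost(state) < best_cost:
                best, best_cost = state, cost(state)
        else:
            width = max(width * 0.9, 0.1)                # rejected: narrow
        t *= cooling
    return best
```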

 
Title:  
HYPERSET WEB LIKE DATABASES AND THE EXPERIMENTAL IMPLEMENTATION OF THE QUERY LANGUAGE DELTA Current State of Affairs
Author(s):  
Richard Molyneux and Vladimir Sazonov
Abstract:  
The hyperset approach to WEB-like or semistructured databases is outlined. A WDB is presented either (i) as a finite edge-labelled graph or, equivalently, (ii) as a system of (hyper)set equations, or (iii) in a special XML-WDB format convenient both for distributed WDBs and for including arbitrary XML elements in this framework. The current state of affairs of the experimental implementation of a query language Δ (Delta) for such databases, the main result of this paper, is described, with an outline of some further implementation work to be done.

 
Title:  
A PREDICTIVE AUTOMATIC TUNING SERVICE FOR OBJECT POOLING BASED ON DYNAMIC MARKOV MODELING
Author(s):  
Nima Sharifimehr and Samira Sadaoui
Abstract:  
One of the most challenging concerns in the development of enterprise software systems is how to manage the available resources effectively and efficiently. An object pooling service, as a resource management facility, significantly improves the performance of application servers. However, tuning object pool services is a complicated task that we address here through a predictive automatic approach. Based on dynamic Markov models, which capture high-order temporal dependencies and locally optimize the required length of memory, we find patterns across object invocations that can be used for prediction purposes. Subsequently, we propose an effective automatic tuning solution, with reasonable time costs, which takes advantage of past and future information about the activities of object pool services. Afterwards, we present experimental results which demonstrate the scalability and effectiveness of our novel tuning solution, namely the predictive automatic tuning service.
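
As a first-order illustration of prediction from invocation histories (the paper's dynamic Markov models are higher-order, with an adaptively chosen memory length), consider the following sketch; the event names are hypothetical.

```python
from collections import Counter, defaultdict

class MarkovPredictor:
    """First-order Markov sketch: count observed transitions between
    object-invocation events and predict the likeliest next event."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, prev_event, next_event):
        self.transitions[prev_event][next_event] += 1

    def predict(self, event):
        nxt = self.transitions.get(event)
        return nxt.most_common(1)[0][0] if nxt else None

m = MarkovPredictor()
for a, b in [("borrow", "use"), ("use", "release"), ("borrow", "use")]:
    m.observe(a, b)
print(m.predict("borrow"))  # -> "use": grow the pool before demand spikes
```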

 
Title:  
THE TOP-TEN WIKIPEDIAS A Quantitative Analysis Using Wiki XRay
Author(s):  
Felipe Ortega, Jesus M. Gonzalez-Barahona and Gregorio Robles
Abstract:  
In a few years, Wikipedia has become one of the information systems with the largest public (both producers and consumers) on the Internet. Its system and information architecture is relatively simple, but it has proven capable of supporting the largest and most diverse community of collaborative authorship worldwide. In this paper, we analyze this community in detail, along with the contents it is producing. Using a quantitative methodology based on the analysis of the public Wikipedia databases, we describe the main characteristics of the 10 largest language editions and the authors that work on them. The methodology (which is almost completely automated) is generic enough to be used on the rest of the editions, providing a convenient framework to develop a complete quantitative analysis of Wikipedia. Among other parameters, we study the evolution of the number of contributions and articles, their size, and the differences in contributions by different authors, inferring some relationships between contribution patterns and content. These relationships reflect (and in part explain) the evolution of the different language editions so far, as well as their future trends.

 
Title:  
MODELING WEB INFORMATION SYSTEMS FOR CO-EVOLUTION
Author(s):  
Buddhima De Silva and Athula Ginige
Abstract:  
When an information system is introduced into an organisation, it changes the original business environment and thus the original requirements. This can lead to changes to the processes that are supported by the information system. Also, when users get familiar with the system, they ask for more functionality. This gives rise to a cycle of changes known as co-evolution. One way to facilitate co-evolution is to empower end-users to make changes to the web application to accommodate the required changes while using that web application. This can be achieved through the meta-design paradigm. We model web applications using high-level abstract concepts such as user, hypertext, process, data and presentation. We use a set of smart tools to generate the application based on this high-level specification. We developed a hierarchical meta-model where an instance represents a web application. High-level aspects are used to populate the attribute values of a meta-model instance. End-users can create or change a web application by specifying or changing the high-level concepts in the meta-model. This paper discusses these high-level aspects of web information systems. We also conducted a study to find out how end-users conceptualise a web application using these aspects. We found that end-users think naturally in terms of some of the aspects, but not all. Therefore, in the meta-model approach we provide default values for the model attributes, which users can overwrite. This approach, based on the meta-design paradigm, will help to realise end-user development to support co-evolution.

 
Title:  
AN APPROXIMATION-AWARE ALGEBRA FOR XML FULL-TEXT QUERIES
Author(s):  
Giacomo Buratti and Danilo Montesi
Abstract:  
XQuery Full-Text is the proposed standard language for querying XML documents using either standard or full-text conditions; while full-text conditions can have a boolean or a ranked semantics, standard conditions must be satisfied for an element to be returned. This paper proposes a more general formal model that considers structural, value-based and full-text conditions as desiderata rather than mandatory constraints. The goal is achieved by defining a set of relaxation operators that, given a path expression or a selection condition, return a set of relaxed path expressions or selection conditions. Algebraic approximated operators are defined for representing typical queries; they return both elements that perfectly satisfy the conditions and elements that answer a relaxed version of the original query. A score reflecting the level of satisfaction of the original query is assigned to each result of the relaxed query.

 
Title:  
ENABLING AN END-USER DRIVEN APPROACH FOR MANAGING EVOLVING USER INTERFACES IN BUSINESS WEB APPLICATIONS A Web Application Architecture using Smart Business Object
Author(s):  
Xufeng (Danny) Liang and Athula Ginige
Abstract:  
As web applications become the centre-stage of today’s businesses, they require a more manageable and holistic approach to handle the rapidity and diversity with which changes occur in their web user interfaces. Adopting the End User Development (EUD) paradigm, we advocate an end-user driven approach to maintaining the evolving user interfaces of business web applications. Such an approach demands a complementary web application architecture to enable flexible, managed, and fine-grained control over the web user interfaces. In this paper, we propose a web application architecture that embraces a dedicated UI Definition Layer, enforced by a web UI Model, for describing the user interfaces in business web applications. This empowers business users to use intuitive web-based tools to effortlessly manage and create web user interfaces. The proposed architecture is realised through the Smart Business Object (SBO) technology we previously developed. We have also created a toolkit based on our proposed architecture. A tailored version of the toolkit has been utilised in an enterprise-level web-based workflow application.

 
Title:  
INTEGRATING BUSINESS PROCESSES AND INFORMATION SYSTEMS
Author(s):  
Giorgio Bruno
Abstract:  
While the need for a better integration between business processes and enterprise information systems is widely acknowledged, current notations for business processes are inclined to emphasize control-flow issues and omit to provide adequate links with two fundamental aspects of enterprise information systems, i.e. the human tasks and the information flow among the tasks. This paper presents a notation for business processes whose purpose is to overcome the above-mentioned limitations. This notation, called tk-nets (task-oriented nets) supports four interaction patterns between process elements and human tasks. It is exemplified with the help of a case study concerning a web-based application intended to manage the handling of paper submissions to conferences.

 
Title:  
METRICS FOR MEASURING DATA QUALITY Foundations for an Economic Data Quality Management
Author(s):  
Bernd Heinrich, Marcus Kaiser and Mathias Klier
Abstract:  
The article develops metrics for an economically oriented management of data quality. Two data quality dimensions are focused on: consistency and timeliness. For deriving adequate metrics, several requirements are stated (e.g. normalisation, cardinality, adaptivity, interpretability). Then the authors discuss existing approaches for measuring data quality and illustrate their weaknesses. Based upon these considerations, new metrics are developed for the data quality dimensions consistency and timeliness. These metrics are applied in practice and the results are illustrated in the case of a major German mobile services provider.
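
The abstract leaves the metric definitions to the paper; as a hedged sketch, a timeliness metric that satisfies the stated requirements (normalisation to [0, 1], interpretability) is often written as an exponential decay in the age of the stored value, with the symbol names below assumed rather than taken from the paper.

```latex
% Hedged sketch: one common probabilistic form of a timeliness metric.
% A is an attribute and w a stored value; decline(A) is the (assumed)
% average rate at which values of A become outdated, and age(w, A) the
% time since acquisition. The metric lies in [0, 1] by construction.
Q_{\mathit{time}}(w, A) \;=\; \exp\!\bigl(-\mathit{decline}(A)\cdot\mathit{age}(w, A)\bigr)
```

Read probabilistically, such a value estimates the likelihood that the stored attribute value is still up to date, which makes it directly usable in economic trade-off calculations.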

 
Title:  
A NOVEL ROBUST SCHEME OF WATERMARKING DATABASE
Author(s):  
Jia-jin Le, Qin Zhu
Abstract:  
A scheme for watermarking relational databases is proposed in this paper. It is applied to protect the copyright of numeric data. Chaotic binary sequences are generated under the control of the private key and are utilized as the watermark signal and the control signal for watermark embedding. Both the private key and the primary key determine the watermarking position, and the watermark is embedded into the numeric data by changing the parity of their low-order digits, thus avoiding the syndrome phenomena caused by the usual Least Significant Bit (LSB) watermarking scheme. The embedding of the watermark meets the requirement of synchronous dynamic updating of the database, and the detection of the watermark needs no original database. Both the theoretical analysis and the practical experiments prove that this scheme possesses fine efficiency, imperceptibility and security, and that it is robust against common attacks on the watermark.
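
An illustrative Python sketch of keyed parity embedding; note that it substitutes an HMAC for the paper's chaotic binary sequences, and all parameter names are hypothetical.

```python
import hmac, hashlib

def embed_watermark(rows, key, fraction=64):
    """Illustrative keyed low-order-digit watermarking (not the paper's
    chaos-based scheme): a keyed hash of each primary key decides which
    tuples carry a mark and which bit is embedded in the parity of the
    value's lowest-order digit. `rows` maps primary key -> integer value."""
    marked = {}
    for pk, value in rows.items():
        digest = hmac.new(key, str(pk).encode(), hashlib.sha256).digest()
        if digest[0] % fraction == 0:   # keyed selection of marked positions
            bit = digest[1] & 1         # keyed watermark bit for this tuple
            if value % 2 != bit:        # force the parity of the last digit
                value += 1 if bit == 1 else -1
        marked[pk] = value
    return marked

rows = {101: 5230, 102: 5741, 103: 6012}
print(embed_watermark(rows, key=b"secret"))
```

Detection would recompute the same keyed hash per primary key and check the parities, so no copy of the original database is needed, matching the blind-detection property the abstract claims.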

 
Title:  
LARGE SCALE RDF STORAGE SOLUTIONS EVALUATION
Author(s):  
Bela Stantic, Juergen Bock and Irina Astrova
Abstract:  
The increasing popularity of the Semantic Web and Semantic Technologies requires sophisticated ways to store huge amounts of semantic data. RDF, together with the rule base RDF Schema, has proved to be a good candidate for storing semantic data due to its simplicity and high abstraction level. A number of large-scale RDF data storage solutions have been proposed. Several typical representatives are discussed and compared in this work, namely Sesame, Kowari, YARS, Redland and Oracle's RDF_MATCH table function. We present a comparison of those approaches with respect to their consideration of context information, supported access protocols, query languages, indexing methods, RDF Schema awareness, and implementation. We also identify applicability and discuss the advantages and disadvantages of each particular approach. Furthermore, an overview of storage requirements and performance tests is presented. A summary of the performance analysis and recommendations are given and discussed.

 
Title:  
TURNING CONCEPTS INTO REALITY Bridging Requirement Engineering and Model-Driven Generation of Web Applications
Author(s):  
Xufeng (Danny) Liang and Athula Ginige
Abstract:  
Today web application development is under the pressure of evolving business needs, compressed timelines, and limited resources. These dynamics demand a streamlined set of tools that turns concepts into reality and minimises the gap between the original business requirements and the final physical implementation of the web application. This paper will demonstrate how this gap can be reduced by the integration of two techniques, KCPM (Klagenfurt Predesign Conceptual Model) and SBO (Smart Business Object), allowing fully functional web applications to be auto-generated from a set of glossaries.

 
Title:  
DATA QUALITY IN XML DATABASES A Methodology for Semi-structured Database Design Supporting Data Quality Issues
Author(s):  
Eugenio Verbo, Ismael Caballero, Eduardo Fernandez-Medina, Mario Piattini
Abstract:  
As the use of XML as a technology for data exchange has spread widely, the need for a new technology to store semi-structured data in a more efficient way has been emphasized. Consequently, XML DBs have been created in order to store great numbers of XML documents. However, as in previous data models such as the relational model, data quality has frequently been left aside. Since data plays a key role in the management of organizational efficiency, its quality should be managed. With the intention of providing a basis for data quality management, our proposal addresses the adaptation of an XML DB development methodology focused on data quality. To do so, we build on some key process areas of a Data Quality Maturity reference model for information management process definition.

 
Title:  
AN EFFICIENT ALGORITHM TO COMPUTE MAX/MIN VALUES IN SLIDING WINDOW FOR DATA STREAMS
Author(s):  
Ying Sha and Jianlong Tan
Abstract:  
With the development of the Internet, more and more data-stream based applications have emerged, in which the calculation of aggregate functions plays an important role. Many studies have been conducted on aggregation functions; however, an efficient algorithm to calculate Max/Min values remains an open problem. Here, we propose a novel, exact method to compute Max/Min values for numerical input data. Employing an incremental calculation strategy on sliding windows, this algorithm achieves high efficiency. We analyze the algorithm and prove its worst-case time and space complexity. Experimental results confirm its high performance on a testing dataset.
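
The abstract does not reveal the algorithm itself; the classic incremental technique for an exact sliding-window Max (Min is symmetric) keeps a monotonically decreasing deque, sketched below as a point of reference rather than as the paper's method.

```python
from collections import deque

def sliding_max(stream, w):
    """Incremental sliding-window maximum using a monotonically decreasing
    deque of (index, value) pairs: amortised O(1) per item, O(w) space."""
    dq, out = deque(), []
    for i, x in enumerate(stream):
        while dq and dq[-1][1] <= x:    # drop values that can never be max again
            dq.pop()
        dq.append((i, x))
        if dq[0][0] <= i - w:           # evict the element that left the window
            dq.popleft()
        if i >= w - 1:
            out.append(dq[0][1])        # front of the deque is the window max
    return out

print(sliding_max([3, 1, 4, 1, 5, 9, 2, 6], 3))  # [4, 4, 5, 9, 9, 9]
```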

 
Title:  
ESTIMATE VALIDITY REGIONS FOR NEAREST NEIGHBOR QUERIES
Author(s):  
Xing Gao, Ali R. Hurson and Krishna Kavi
Abstract:  
Users’ queries for data or services in a mobile computing environment are highly relevant to their current locations. A nearest neighbor (NN) query finds the data object closest to the user’s location; hence, NN queries issued at different locations may lead to different results. The nearest neighbor validity region (NNVR) is the area where an NN query result remains valid. A cached NN result can be used to answer semantically equivalent NN queries issued in the same NNVR. Our analysis discovers that NNVRs carry useful information about neighboring objects’ locations. This paper proposes an algorithm that mines the hidden information in cached NNVRs to increase proxy caching performance. The experimental results and analysis demonstrate the effectiveness of the proposed algorithm in reducing query response time and workload on the database server.

 
Title:  
TOWARDS A HOLISTIC INTEGRATION OF SOFTWARE LIFECYCLE PROCESSES USING THE SEMANTIC WEB
Author(s):  
Roy Oberhauser and Rainer Schmidt
Abstract:  
For comprehensive software lifecycle processes, a trichotomy continues to subsist between the software development processes, enterprise IT processes, and the software runtime environment. Currently, integrating software lifecycle processes requires substantial effort, and the information needed for the execution of (semi-)automated software lifecycle workflows is not readily accessible and is typically scattered across semantically heterogeneous sources. Consequently, an interrupted flow of information ensues between the development/maintenance phases and operational phases in the software lifecycle, resulting in ignorance, inefficiencies, and suboptimal product quality and support levels. Furthermore, today’s abstract IT (e.g., ITIL) and software processes are often derived into concrete processes and workflows manually, causing errors, extensive effort, and limiting widespread adoption of best practices. This paper describes an approach for improving information flow throughout the software lifecycle via the (semi-)automated realization of abstract software lifecycle processes and workflows in combination with Semantic Web technologies.

 
Title:  
MULTI OBJECTIVE ANALYSIS FOR TIMEBOXING MODELS OF SOFTWARE DEVELOPMENT
Author(s):  
Vassilis C. Gerogiannis and Pandelis G. Ipsilandis
Abstract:  
In iterative/incremental software development, software deliverables are built in iterations - each iteration providing parts of the required software functionality. To better manage and monitor resources, plans and deliverables, iterations are usually performed during specific time periods, so-called “time boxes”. Each time box is further divided into a sequence of stages and a dedicated development team is assigned to each stage. Iterations can be performed in parallel to reduce the project completion time by exploiting a “pipelining” concept, that is, when a team completes the tasks of a stage, it hands over the intermediate deliverables to the team executing the next stage and then starts executing the same stage in the next iteration. In this paper, we address the problem of optimizing the schedule of a software project that follows an iterative, timeboxing process model. A multi-objective linear programming technique is introduced to consider multiple parameters, such as the project duration, the work discontinuities of development teams in successive iterations and the release (delivery) time of software deliverables. The proposed model can be used to generate alternative project plans based on the relative importance of these parameters.
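
The abstract names the objective components but not the model; a weighted-sum scalarisation of such a multi-objective LP might be sketched as follows, with all symbols assumed rather than taken from the paper.

```latex
% Hedged sketch of a weighted-sum scalarisation for timebox scheduling.
% T = project duration, I_k = work discontinuity (idle time) of team k,
% D_j = release time of deliverable j; the weights w_1, w_2, w_3 encode
% the relative importance of the parameters mentioned in the abstract.
\min\; w_1\, T \;+\; w_2 \sum_{k} I_k \;+\; w_3 \sum_{j} D_j
\qquad \text{s.t. stage-precedence, team-assignment and timebox constraints}
```

Varying the weights and re-solving is one standard way to generate the alternative project plans the abstract refers to.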

 
Title:  
A DATA-DRIVEN DESIGN FOR DERIVING USABILITY METRICS
Author(s):  
Tamara Babaian, Wendy Lucas and Heikki Topi
Abstract:  
The complexity of Enterprise Information Systems can be overwhelming to users, yet they are an often overlooked domain for usability research. To better understand the ways in which users interact with these systems, we have designed an infrastructure for input logging that is built upon a data model relating system components, user inputs, and tasks. This infrastructure is aware of user representations, task representations, and the history of user interactions. The interface components themselves log user inputs, so that timing data and action events are automatically aligned and are linked to specific tasks. The knowledge gained about user interactions at varying levels of granularity, ranging from keystroke analysis to higher-level task performance, is a valuable resource for both assessing and enhancing system usability.

 
Title:  
A FRAMEWORK FOR THE DEVELOPMENT AND DEPLOYMENT OF EVOLVING APPLICATIONS The Domain Model
Author(s):  
Georgios Voulalas and Georgios Evangelidis
Abstract:  
Software development is an R&D intensive activity, dominated by human creativity and diseconomies of scale. Model-driven architecture improves productivity by introducing formal models that can be understood by computers. Through these models the problems of portability, interoperability, maintenance, and documentation are also successfully addressed. However, the problem of evolving requirements, which is more prevalent within the context of business applications, additionally calls for efficient mechanisms that ensure consistency between models and code and enable seamless and rapid accommodation of changes, without interrupting severely the operation of the deployed application. Having presented a framework that supports rapid development and deployment of evolving web-based applications, this paper elaborates on the Domain Model that maps to the MDA Platform Independent Model and is the cornerstone of the overall infrastructure.

 
Title:  
REUSING PAST QUERIES TO FACILITATE INFORMATION RETRIEVAL
Author(s):  
Gilles Hubert and Josiane Mothe
Abstract:  
This paper introduces a new approach to query reuse in order to help the user retrieve relevant information. Past search experiences are a source of information that can be useful for a user trying to find information answering his information need. For example, a user searching about a new subject can benefit from past search experiences carried out by previous users about the same subject. The approach presented in this paper is based on collecting the different search attempts submitted to a search engine by a user trying to fulfil an information need. This approach mainly takes advantage of the implicit links that exist between the different search attempts that try to satisfy a single information need. Search experiences are modelled according to the concepts defined in the domain of version management. This modelling provides multiple possibilities for reusing past experiences, notably to recommend terms for query reformulation or documents judged relevant by other users.

 
Title:  
EARLY PERFORMANCE ANALYSIS IN THE DESIGN OF SPATIAL DATABASES
Author(s):  
Vincenzo Del Fatto, Massimiliano Giordano, Giuseppe Polese Monica Sebillo and Genoveffa Tortora
Abstract:  
The construction of spatial databases often requires considerable computing and storage resources, due to the inherent complexity of spatial data and their manipulation. Thus, it would be desirable to devise methods enabling a designer to estimate the performance of a spatial database from its early design stages. We present a method for estimating both the size of data and the cost of operations based on the conceptual schema of the spatial database. We also show the application of the method to the design of a spatial database concerning botanic data.

 
Title:  
PARADIGM SHIFT IN INTER-ORGANISATIONAL COLLABORATION A Framework for Web-based Dynamic Collaboration
Author(s):  
Ioakim (Makis) Marmaridis and Athula Ginige
Abstract:  
The proliferation of the World Wide Web (web) offers new ways for organisations to do business and collaborate with others to gain competitive advantage. Dynamic eCollaboration has the characteristics to keep up with the fast-changing business landscape; it requires, however, a collaboration framework that can also keep up with rapid change. In this paper we present the Dynamic eCollaboration model, which brings the concepts of P2P collaboration to organisations. It fills this gap and offers a new avenue for organisations of all sizes to embrace collaboration and benefit from it. We also present our technology framework built to support Dynamic eCollaboration. The framework is component-based and extensible, with an architecture that can scale. It incorporates a flexible security subsystem, a lightweight workflow engine optimised for web applications and a novel method for bundling and sharing web-based information called Bitlet.

 
Title:  
MINING THE WEB FOR LEARNING THE ONTOLOGY
Author(s):  
Bassam M. Aoun and Marie Khair
Abstract:  
The Semantic Web is a network of information linked up in such a way as to be easily processed by machines, on a global scale. To reach the Semantic Web, current web resources should be automatically translated into semantic web resources. This is usually performed through semantic web mining, which aims at combining the two fast-developing research areas, the Semantic Web and Web Mining. A major step to be performed is the ontology-learning phase, where rules are mined from unstructured text and used later on to fill the ontology. Making sure that all rules are found and that no additional, inaccurate rules are inserted remains a critical issue, since it constitutes the basis for building the semantic web. The most widely used algorithm for this task is the Apriori algorithm, which is inherited from classical data mining. However, due to the nature of the semantic web, some important rules can be dropped. This paper presents an enhanced version of the Apriori algorithm, En_Apriori, which uses the Apriori algorithm in combination with maximal association and the chi-square test to generate association rules from web/textual documents. This provides a major refinement to the classical ontology learning approach.
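
The abstract names the statistical machinery but not its details; as a rough illustration of the chi-square filtering step that En_Apriori builds on, the sketch below scores one candidate association rule from a 2x2 contingency table and keeps it only if the statistic is significant. The counts, threshold and function names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of chi-square filtering for a candidate association rule
# X -> Y, given document counts. Illustrative only; not the En_Apriori
# algorithm itself, whose details are in the paper.

def chi_square(n_xy, n_x, n_y, n_total):
    """Chi-square statistic for the 2x2 contingency table of X and Y."""
    observed = [
        [n_xy, n_x - n_xy],                        # X with / without Y
        [n_y - n_xy, n_total - n_x - n_y + n_xy],  # not-X with / without Y
    ]
    row = [sum(r) for r in observed]
    col = [sum(c) for c in zip(*observed)]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n_total
            chi2 += (observed[i][j] - expected) ** 2 / expected
    return chi2

CRITICAL_95 = 3.841  # chi-square critical value, 1 degree of freedom, p = 0.05

# Keep a hypothetical rule "network -> protocol" only if significant.
if chi_square(n_xy=40, n_x=60, n_y=50, n_total=200) > CRITICAL_95:
    print("rule accepted")
```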

 
Title:  
DESIGN AND IMPLEMENTATION OF DATA STREAM PROCESSING APPLICATIONS
Author(s):  
Edwin Kwan, Janusz R. Getta and Ehsan Vossough
Abstract:  
Processing of data streams requires the continuous re-computation of end-user applications over long and steadily increasing sequences of data items. The design and implementation of such applications are always affected by the specific properties of their domains. This work considers the design and implementation of data stream processing applications in domains where limited computational resources, constraints imposed on the implementation techniques and specific properties of the applications exclude the use of a general-purpose data stream management system. We start from an abstract view of a data stream processing application as an n-ary operation on the input streams, and we show how to decompose such an operation into an expression over binary operations on windows and materializations of intermediate results. Then, we show how to implement an application through the integration of atomic applications, each one processing one data stream at a time. The implementation techniques described in the paper include the representation of atomic applications as sequences of operations in an XML-based language and the translation of XML specifications into programs in an object-oriented programming language.
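
As a minimal sketch of the decomposition the abstract describes (an n-ary stream operation rewritten as a cascade of binary operations over windows, with intermediate results materialized), the following illustrates a ternary join realized as two binary window joins. The window size, match predicates and streams are invented; the paper's XML-based notation is not reproduced.

```python
from collections import deque

def window_join(left, right, size, match):
    """Binary stream operation: join two streams over bounded sliding windows."""
    lwin, rwin = deque(maxlen=size), deque(maxlen=size)
    for l, r in zip(left, right):          # one item per stream per step
        lwin.append(l)
        for y in rwin:                     # match new left item vs right window
            if match(l, y):
                yield (l, y)
        rwin.append(r)
        for x in lwin:                     # match new right item vs left window
            if match(x, r):
                yield (x, r)

s1, s2, s3 = iter(range(10)), iter(range(0, 20, 2)), iter(range(0, 30, 3))
stage1 = window_join(s1, s2, size=4, match=lambda x, y: x == y)        # binary
stage2 = window_join(stage1, s3, size=4, match=lambda p, z: p[0] == z)  # binary
print(list(stage2))  # ternary operation realized as two binary stages:
                     # [((0, 0), 0), ((6, 6), 6)]
```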

 
Title:  
A CONTEXT-BASED APPROACH FOR LINGUISTIC MATCHING
Author(s):  
Youssef Bououlid Idrissi and Julie Vachon
Abstract:  
Most mapping systems currently rely on linguistic matching. Among others, it constitutes a mandatory step in the matching process of virtually all existing ontology alignment solutions. It is actually a key issue in the semantic matching of heterogeneous data sources. As currently implemented by most systems, linguistic mapping somehow boils down to either string comparison or a synonym look-up in a dictionary. These solutions, however, have often proved inefficient when dealing with highly heterogeneous data sources. They can even have a degrading effect if they refer to general-purpose dictionaries that do not take into account the specificity of the data sources' domain. To overcome these limitations and better cope with data source heterogeneity, this article presents Indigo, a system which can compute semantic matching by taking into account the data sources' context. The distinctive feature of Indigo is that it enriches data sources with semantic information extracted from their individual development artifacts. During this enrichment step, each concept name is annotated with semantic information coming from its specific domain. Indigo can then compute a more accurate mapping between the two data sources thus enhanced. Indigo was evaluated in two case studies, and its performance is compared with that of two other well-known matching systems applied to the same examples.

 
Title:  
WEB-BASED DATA MINING SERVICES A Solution Proposal
Author(s):  
Serban Ghenea and Cornelia Oprean
Abstract:  
The paper presents the results obtained in building a web-based solution that enables registered users, accessing a portal on the Internet, to perform complex business analysis tasks using data mining algorithms and services implemented by Microsoft SQL Server 2005 Analysis Services. The database platform supports the web operation of a complete ERP system, offering support for back-office management and establishing a B2B environment that automates collaborative business processes.

 
Title:  
MODELLING OF SUSPENDED SEDIMENT In the Nile River using ANN
Author(s):  
Abdelazim M. Negm, M. M. Elfiky, T. M. Owais and M. H. Nassar
Abstract:  
Artificial neural network (ANN) prediction models can be considered an efficient prediction tool once they are trained on examples or patterns. Such models need a large amount of data, which should be at hand before developing them. In this paper, the capability of an ANN model to predict suspended sediment in a 2-D flow field is investigated. The data used for training the network are generated from pre-verified 2-D hydrodynamic and 2-D suspended sediment models recently developed by the authors. About two-thirds of the data are used for training the network, while the rest are used for validating and testing the developed ANN model. Field data measured by the Hydraulic Research Institute are used to compare the results of the ANN model. The conjugate gradient learning algorithm is adopted. The results of the developed ANN model proved that the technique is reliable in this field, compared to both the results of the previously developed models and the field data, provided that the trained network is used to generate predictions within the range of the training data.
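
A minimal sketch of the described setup, assuming NumPy and SciPy are available: a small one-hidden-layer network is trained with a conjugate-gradient optimizer on a two-thirds training split. The synthetic data and network size are stand-ins for the authors' hydrodynamic-model output and architecture.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (300, 2))          # stand-ins, e.g. flow velocity, depth
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]  # stand-in sediment response

n_train = 2 * len(X) // 3                # two-thirds for training, as in the paper
X_tr, y_tr, X_te, y_te = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

H = 8  # hidden units (arbitrary)

def unpack(w):
    i = 2 * H
    return w[:i].reshape(2, H), w[i:i + H], w[i + H:i + 2 * H], w[-1]

def predict(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def loss(w):
    return np.mean((predict(w, X_tr) - y_tr) ** 2)

w0 = rng.normal(scale=0.5, size=2 * H + H + H + 1)
res = minimize(loss, w0, method="CG")    # conjugate gradient training
print("test MSE:", np.mean((predict(res.x, X_te) - y_te) ** 2))
```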

 
Title:  
ASYNCHRONOUS REPLICATION CONFLICT CLASSIFICATION, DETECTION AND RESOLUTION FOR HETEROGENEOUS DATA GRIDS
Author(s):  
Eva Kühn, Angelika Ruhdorfer and Vesna Šešum-Cavic
Abstract:  
Data replication is a well-known technique in distributed systems, which offers many advantages such as higher data availability, load balancing, fault-tolerance, etc. It can serve to implement data grids where large amounts of data are shared. Besides all the advantages, it is necessary to point to the problems, called replication conflicts, that arise due to the replication strategies. In this paper, we present a general infrastructure for detecting and resolving conflicts in the replication of heterogeneous data, illustrate its usefulness by means of an industrial business case implementation in the domain of relational databases, and show further extensions for more complex resolution strategies. The implementation deals with the special case of asynchronous database replication in a peer-to-peer (multi-master) scenario, the possible conflicts in this particular domain and their classification, and the ways of detecting conflicts, and it shows some possible solution methods.
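
As a toy illustration of the conflict classification discussed above, the sketch below compares pending changes from two masters per primary key and labels the classical conflict categories. The change format and category names are assumptions for illustration; the paper's infrastructure and resolution strategies go well beyond this.

```python
# Classify conflicts between pending change sets of two multi-master replicas.
# changes_*: dict mapping primary key -> (operation, new row or None).

def classify_conflicts(changes_a, changes_b):
    conflicts = []
    for key in sorted(changes_a.keys() & changes_b.keys()):
        op_a, row_a = changes_a[key]
        op_b, row_b = changes_b[key]
        if op_a == op_b == "insert":
            conflicts.append((key, "insert-insert"))   # duplicate key
        elif "delete" in (op_a, op_b) and op_a != op_b:
            conflicts.append((key, "update-delete"))
        elif op_a == op_b == "update" and row_a != row_b:
            conflicts.append((key, "update-update"))
    return conflicts

a = {1: ("update", {"price": 10}), 2: ("delete", None), 3: ("insert", {"price": 7})}
b = {1: ("update", {"price": 12}), 2: ("update", {"price": 9}), 3: ("insert", {"price": 8})}
print(classify_conflicts(a, b))
# [(1, 'update-update'), (2, 'update-delete'), (3, 'insert-insert')]
```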

 
Title:  
A CONCERN-ORIENTED AND ONTOLOGY-BASED APPROACH TO CONSTRUCTING FACETS OF INFORMATION SYSTEMS
Author(s):  
Crenguta Bogdan and Luca Dan Serbanati
Abstract:  
A concern-oriented analysis approach for developing information systems is presented. The method uses the concerns of the various stakeholders of an information system (IS) to partition the system's conceptual domain into stakeholder-oriented sub-domains. Mental representations, i.e. descriptions of stakeholders’ beliefs and knowledge related to each concern, are identified, and on their basis a domain ontology can be created. Furthermore, facets of the future IS are constructed by an abstraction mechanism applied to the domain ontology.

 
Title:  
ARCHITECTURE-CENTRIC DATA MINING MIDDLEWARE SUPPORTING MULTIPLE DATA SOURCES AND MINING TECHNIQUES
Author(s):  
Sai Peck Lee and Lai Ee Hen
Abstract:  
In today’s marketplace, information stored in a consumer database is the most valuable asset of an organization. It houses important hidden information that can be extracted to solve real-world problems in engineering, science, and business. The possibility of extracting hidden information to solve real-world problems has led to the increasing application of knowledge discovery in databases, and hence the emergence of a variety of data mining tools in the market. These tools offer different strengths and capabilities, helping decision makers improve business decisions. In this paper, we provide a high-level overview of a proposed data mining middleware whose architecture provides great flexibility for a wide spectrum of data mining techniques to support decision makers in generating useful knowledge. We describe features that we consider important for the middleware to support, such as providing a wide spectrum of data mining algorithms and reports through plugins. We also briefly explain both the high-level architecture of the middleware and the technologies that will be used to develop it.

 
Title:  
COLOR IMAGE PROFILE COMPARISON AND COMPUTING
Author(s):  
Imad El-Zakhem, Amine Ait Younes, Isis Truck, Hanna Greige and Herman Akdag
Abstract:  
This paper describes a method and software to analyze the content of images and build their colorimetric profile as perceived by the user. First, images are processed using a standard or initial set of parameters, relying on fuzzy set theory and the HLS color space (Hue, Lightness, Saturation). The number of these parameters is considerable; they represent the different colors and the properties of colors in combination, e.g. pale red or dark blue. Access is done pixel by pixel, and at the end of this phase each image has a detailed initial colorimetric profile. Second, we present a method that recalculates the amount of each color in the image based on another set of parameters, so that the colorimetric profile of the image is modified accordingly. Avoiding the repetition of the process at the pixel level is the main target of this phase, because reprocessing each image is time-consuming and turned out not to be feasible. Finally, we present the software used to process images and recalculate their colorimetric profiles, with some examples.
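
A rough sketch of the first phase, using only the Python standard library: each pixel is mapped to HLS and fuzzy memberships for a few hue terms are accumulated into a coarse profile. The hue centres and triangular membership are illustrative stand-ins for the paper's much larger parameter set, which also grades lightness and saturation.

```python
import colorsys

HUE_TERMS = {"red": 0.0, "green": 1 / 3, "blue": 2 / 3}  # hue in [0, 1)

def membership(hue, centre, width=1 / 6):
    d = min(abs(hue - centre), 1 - abs(hue - centre))    # hue is circular
    return max(0.0, 1 - d / width)                       # triangular profile

def profile(pixels):
    """Average fuzzy membership per hue term over all pixels (r, g, b in 0..255)."""
    totals = {term: 0.0 for term in HUE_TERMS}
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
        for term, centre in HUE_TERMS.items():
            totals[term] += membership(h, centre)
    return {term: total / len(pixels) for term, total in totals.items()}

# A real image loader (e.g. Pillow) would supply the pixel list.
print(profile([(200, 30, 30), (20, 180, 40), (10, 20, 220), (180, 40, 200)]))
```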

 
Area 5 - Knowledge Engineering  
Title:  
TOWARDS A GENERAL ONTOLOGY OF COMPUTER PROGRAMS
Author(s):  
Pascal Lando, Anne Lapujade, Gilles Kassel and Frédéric Fürst
Abstract:  
Over the past decade, ontology research has investigated the field of computer programs. This work has aimed at defining conceptual descriptions of the programs so as to master their design and use. Unfortunately, these efforts have only been partially successful. In this paper, we present the basis of a Core Ontology of Programs and Software (COPS) which integrates the field’s main concepts. But, above all, we emphasize the method used to build the ontology. In fact, COPS specializes not only the DOLCE foundational ontology (“Descriptive Ontology for Linguistic and Cognitive Engineering”, Masolo et al., 2003) but also core ontologies of domains (e.g. artefacts, documents) situated on a higher abstraction level. This approach enables us to take into account the “dual nature” of computer programs, which can be considered as both syntactic entities (well-formed expressions in a programming language) and artefacts whose function is to enable computers to process information.

 
Title:  
A CASE-BASED DIALOGUE SYSTEM FOR INVESTIGATING THERAPY INEFFICACY
Author(s):  
Rainer Schmidt and Olga Vorobieva
Abstract:  
ISOR is a Case-Based Reasoning system for long-term therapy support in the endocrine domain and in psychiatry. ISOR performs typical therapeutic tasks, such as computing initial therapies, initial dose recommendations, and dose updates. ISOR deals especially with situations where therapies become ineffective. Causes for inefficacy have to be found and better therapy recommendations should be computed. In addition to previously solved cases, ISOR uses further knowledge forms, especially the medical histories of the query patients themselves, and prototypes. Furthermore, the knowledge base consists of therapies, conflicts, instructions, etc. Different forms and steps of retrieval are thus performed, while adaptation occurs as an interactive dialogue with the user.

 
Title:  
BUILDING AN ONTOLOGY THAT HELPS IDENTIFY CRIMINAL LAW ARTICLES THAT APPLY TO A CYBERCRIME CASE
Author(s):  
El Hassan Bezzazi
Abstract:  
We present in this paper a small formal cybercrime ontology built using concrete tools. The purpose is to show how law articles and legal cases can be defined so that the problem of case resolution reduces to a classification problem, as long as cases are seen as subclasses of articles. Secondly, we show how counterfactual reasoning may be carried out over the ontology. Lastly, we investigate the implementation of a hybrid system based both on this ontology and on a non-monotonic rule-based system, which is used to execute, in a rule-based way, an external ontology dealing with a technical domain in order to clarify some of the technical concepts.
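
A toy rendering of the reduction the abstract describes: if each article is a class defined by a set of necessary properties and a case is described by the properties it exhibits, deciding which articles apply becomes a subsumption (subset) test. The article definitions below are invented examples, not actual law.

```python
# Hypothetical article definitions: each article is the set of properties a
# case must exhibit for the article to apply.
ARTICLES = {
    "unauthorised access": {"access", "no_authorisation"},
    "data interference":   {"access", "no_authorisation", "data_alteration"},
}

def applicable_articles(case_facts):
    # The case "is a subclass of" an article when it has all its properties.
    return [name for name, definition in ARTICLES.items()
            if definition <= case_facts]

case = {"access", "no_authorisation", "data_alteration", "financial_gain"}
print(applicable_articles(case))
# ['unauthorised access', 'data interference']
```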

 
Title:  
A MULTI-OBJECTIVE GENETIC ALGORITHM FOR CUTTING-STOCK IN PLASTIC ROLLS INDUSTRY
Author(s):  
Ramiro Varela, César Muñoz, María Sierra and Inés González-Rodríguez
Abstract:  
In this paper, we confront a variant of the cutting-stock problem with multiple objectives. It is an actual problem of an industry that manufactures plastic rolls under customers’ demands. The starting point is a solution calculated by a heuristic algorithm, termed SHRP, that mainly aims at optimizing the two main objectives, i.e. the number of cuts and the number of different patterns; the proposed multi-objective genetic algorithm then tries to optimize other, secondary objectives such as changeovers, completion times of orders weighted by priorities, and open stacks. We report experimental results showing that the multi-objective genetic algorithm is able to improve the solutions obtained by SHRP on the secondary objectives, and also that it offers a number of non-dominated solutions, so that the expert can choose one of them according to his preferences at the time of cutting the orders of a set of customers.
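
As an illustration of the non-dominated solutions mentioned above, the sketch below filters candidate schedules, scored on minimized secondary objectives, down to their Pareto front. The objective tuples are invented; SHRP and the genetic operators themselves are not reproduced.

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Invented scores: (changeovers, weighted completion time, open stacks).
candidates = [(3, 120, 5), (2, 130, 5), (3, 110, 6), (4, 140, 7)]
print(pareto_front(candidates))   # the expert chooses among these
# [(3, 120, 5), (2, 130, 5), (3, 110, 6)]
```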

 
Title:  
IT-BASED PURPOSE-DRIVEN KNOWLEDGE VISUALIZATION
Author(s):  
Wladimir Bodrow and Vladimir Magalashvili
Abstract:  
Knowledge visualization is currently under investigation from different points of view, especially because of its importance for Artificial Intelligence, Knowledge Management, Business Intelligence, etc. The concepts and technology of knowledge visualization in the presented research are considered from a purpose perspective, which focuses on the interdependencies between different knowledge elements. In this way, the influence of these elements on each other in every particular situation can be visualized. This is crucial, e.g., for decision making.

 
Title:  
INCONSISTENCY-TOLERANT KNOWLEDGE ASSIMILATION
Author(s):  
Hendrik Decker
Abstract:  
A recently introduced notion of inconsistency tolerance for integrity checking is revisited. Two conditions that enable an easy verification or falsification of inconsistency tolerance are discussed. Based on a method-independent definition of inconsistency-tolerant updates, this notion is then extended to a family of knowledge assimilation tasks. These include integrity maintenance, view updating and repair of integrity violation. Many knowledge assimilation approaches turn out to be inconsistency-tolerant without needing any specific knowledge about the given status of integrity of the underlying database.
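
A worked toy example of the notion, under the simplifying reading that an inconsistency-tolerant check accepts an update as long as it introduces no new constraint violations, even when legacy violations persist. The constraint and data are invented for illustration.

```python
def violations(db):
    """Invented constraint: no employee may earn more than their manager."""
    return {(e, m) for (e, m) in db["manages"]
            if db["salary"][e] > db["salary"][m]}

def tolerant_check(db, updated_db):
    # Inconsistency-tolerant: accept iff no *new* violations appear.
    return violations(updated_db) <= violations(db)

db = {"salary": {"ann": 90, "bob": 95, "eve": 60},   # bob > ann: legacy violation
      "manages": {("bob", "ann"), ("eve", "bob")}}

update = dict(db, salary=dict(db["salary"], eve=65))  # harmless raise
print(tolerant_check(db, update))                     # True: accepted
bad = dict(db, salary=dict(db["salary"], eve=99))     # eve > bob: new violation
print(tolerant_check(db, bad))                        # False: rejected
```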

 
Title:  
TOWARDS AUTOMATED INFERENCING OF EMOTIONAL STATE FROM FACE IMAGES
Author(s):  
Ioanna-Ourania Stathopoulou and George A. Tsihrintzis
Abstract:  
Automated facial expression classification is very important in the design of new human-computer interaction modes and multimedia interactive services, and arises as a difficult, yet crucial, pattern recognition problem. Recently, we have been building such a system, called NEU-FACES, which processes multiple camera images of computer user faces with the ultimate goal of determining their affective state. Here, we present results from an empirical study we conducted on how humans classify facial expressions, the corresponding error rates, and to which degree a face image can provide emotion recognition from the perspective of a human observer. This study lays out related system design requirements, quantifies the statistical expression recognition performance of humans, and identifies quantitative facial features of high expression discrimination and classification power.

 
Title:  
EMPIRICAL VALIDATION ON KNOWLEDGE PACKAGING SUPPORTING KNOWLEDGE TRANSFER
Author(s):  
Pasquale Ardimento, Teresa Baldassarre, Marta Cimitile and Giuseppe Visaggio
Abstract:  
The transfer of research results, as well as technological innovation, within an enterprise is a key success factor. The introduction of research results aims to improve the efficacy and effectiveness of production processes with respect to business goals, and also to better adapt products to market needs. Nevertheless, it is often difficult to transfer research results into production systems because it is necessary, among other things, that the knowledge be explicit and understandable by stakeholders. Such transfer is demanding, so many researchers have been studying alternatives to classic approaches such as books and papers that favour knowledge acquisition on behalf of users. In this context, we propose the concept of a Knowledge Package (KP) with a specific structure as an alternative. We have carried out an experiment comparing the efficacy of the proposed approach with the classic ones, along with the comprehensibility of the information enclosed in a KP rather than in a set of papers. The experiment has pointed out that knowledge packages are more effective than traditional means for knowledge transfer.

 
Title:  
AGENTS THAT HELP TO DETECT TRUSTWORTHY KNOWLEDGE SOURCES IN KNOWLEDGE MANAGEMENT SYSTEMS
Author(s):  
Juan Pablo Soto, Aurora Vizcaíno, Javier Portillo-Rodríguez and Mario Piattini
Abstract:  
Knowledge Management is a critical factor for companies concerned with increasing their competitive advantage. Because of this, companies are acquiring knowledge management tools that help them manage and reuse their knowledge. One of the mechanisms most commonly used for this goal is Knowledge Management Systems (KMS). However, KMS are sometimes little used by employees, who consider that the knowledge stored is not very valuable. In order to avoid this, in this paper we propose a three-level multi-agent architecture based on the concept of communities of practice, with the idea of providing the most trustworthy knowledge to each person according to the reputation of the knowledge source. Moreover, a prototype that demonstrates the feasibility of our ideas is described.

 
Title:  
THEORETICAL FRAMEWORK FOR COOPERATION AND COMPETITION IN EVOLUTIONARY COMPUTATION
Author(s):  
Eugene Eberbach and Mark Burgin
Abstract:  
In this paper, a theoretical framework for the cooperation and competition of coevolved population members working toward a common goal is presented. We use a formal model of the Evolutionary Turing Machine and its extensions to justify that, in general, evolutionary algorithms belong to the class of super-recursive algorithms. Parallel and Parallel Weighted Evolutionary Turing Machine models are proposed to properly capture the cooperation and competition of the whole population, expressed as an instance of multiobjective optimization.

 
Title:  
CHI SQUARE FEATURE EXTRACTION BASED SVMS ARABIC TEXT CATEGORIZATION SYSTEM
Author(s):  
Abdelwadood Moh’d A Mesleh
Abstract:  
This paper aims to implement a Support Vector Machines (SVMs) based text classification system for Arabic language articles. The classifier uses the chi-square method for feature selection in the pre-processing step of the text classification system design procedure. Compared to other classification methods, our classification system shows high classification effectiveness for Arabic articles, in terms of macro-averaged F1 = 88.11 and micro-averaged F1 = 90.57.
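
A minimal sketch of the described pipeline, assuming scikit-learn is installed: chi-square feature selection in pre-processing followed by a linear SVM, evaluated with macro- and micro-averaged F1. The toy English documents stand in for the Arabic corpus, and k is an arbitrary choice, not the paper's setting.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_docs = ["match goal league", "team wins final",
              "bank rates rise", "market shares fall"]
train_labels = ["sport", "sport", "economy", "economy"]
test_docs = ["goal in the final", "shares and rates"]
test_labels = ["sport", "economy"]

# Bag-of-words -> chi-square feature selection -> linear SVM.
clf = make_pipeline(CountVectorizer(), SelectKBest(chi2, k=6), LinearSVC())
clf.fit(train_docs, train_labels)
pred = clf.predict(test_docs)
print("macro-F1:", f1_score(test_labels, pred, average="macro"))
print("micro-F1:", f1_score(test_labels, pred, average="micro"))
```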

 
Title:  
KNOWLEDGE BASED CONCEPTS FOR DESIGN SUPPORT OF AN ARTIFICIAL ACCOMMODATION SYSTEM
Author(s):  
K. P. Scherer
Abstract:  
When conceiving medical information and diagnosis systems, knowledge-based systems are used to diagnose failures based on specific patient data. The knowledge is evaluated based on statistical data from the past, and the present information is derived by statistical approaches (Bayes' theorem) and analogous cases to interpret the individual patient-related situation. An analogous methodological situation arises in the conceptualisation of a new technical system, where the system components and their properties are configured in such a manner that a target function is guaranteed under consideration of any constraints. In both situations, the system (human being, technical system) has to be described in a natural language and must be formalised. Based on these formalisations, logical conclusions can be drawn. Formalised knowledge representation methods are useful here. For logical conclusions, first-order predicate calculus is used. For information access by both experts and users, comfortable natural-language-based concepts and the employment of graphical tools are very important to manage the complex knowledge.

 
Title:  
MATHEMATICAL FRAMEWORK FOR GENERALIZATION AND INSTANTIATION OF KNOWLEDGE
Author(s):  
Marek Reformat
Abstract:  
Templates, patterns, and blueprints are constructs that humans use to represent highly abstract knowledge. The quality of such processes as reasoning, speaking, running, and driving depends on people's abilities to process these constructs. Recently, they have been named protoforms. Concrete pieces of knowledge, on the other hand, can be seen as instances of the protoforms. A very important task is to find mechanisms able to organize and control protoforms and their instances. These would provide methods for defining the properties of protoforms and their instances, describing their interactions, and controlling the ways in which they can be merged. The paper describes a concept of applying category theory to describe protoforms and their instances in a more formal way.

 
Title:  
INCREASE PERFORMANCE BY COMBINING MODELS OF ANALYSIS ON REAL DATA
Author(s):  
Dumitru Dan Burdescu and Marian Cristian Mihaescu
Abstract:  
In this paper we investigate several state-of-the-art methods of combining models of analysis. Data is obtained from an e-Learning platform and is represented by users' activities such as downloading course materials, taking tests and exams, communicating with professors and secretaries, and others. Combining multiple models of analysis may yield important information regarding the performance of the e-Learning platform with respect to students' learning performance, or its capability to classify students according to accumulated knowledge. This information may be valuable in adjusting the platform's structure, such as the number or difficulty of questions, to increase performance from the presented points of view.

 
Title:  
FORMAL METHOD FOR AUTOMATIC AND SEMANTIC MAPPING OF DISTRIBUTED SERVICE-ONTOLOGIES
Author(s):  
Nacima Mellal and Richard Dapoiny
Abstract:  
Many distributed heterogeneous systems exchange information with one another. Currently, most systems are described in terms of ontologies. When ontologies are distributed, the problem of achieving semantic interoperability arises. This problem is addressed by a process which defines rules to relate relevant parts of different ontologies, called “Ontology Mapping”. This paper describes a methodology for the automatic and semantic mapping of ontologies. Our main interest is focused on ontologies describing the services of systems; in fact, the notion of service is central to the description and functioning of distributed systems. These ontologies are called “Service Ontologies”. We therefore investigate an approach where the mapping of ontologies provides full semantic integration between distributed service ontologies, based on the Information Flow model.

 
Title:  
ENTERPRISE ONTOLOGY AND FEATURE MODEL INTEGRATION Approach and Experiences from an Industrial Case
Author(s):  
Kurt Sandkuhl, Christer Thorn and Wolfram Webers
Abstract:  
Based on an industrial application case from automotive industries, this paper discusses integration of an existing feature model into an existing enterprise ontology. Integration is discussed on conceptual and on implementation level. The main conclusion of the work is that while integrating enterprise ontologies and feature models is quite straightforward on a conceptual level, it causes various challenges when implementing the integration with Protégé. As ontologies have a clearly richer descriptive power than feature models, the mapping on a notation level poses no serious technical problems. The main difference of the implementation approaches presented is where to actually place a feature. The first approach follows the information modeling tradition by considering features as model entities with a certain meta-model. The second approach integrates all features and relations directly on the concept level, i.e. features are considered independent concepts.

 
Title:  
OFF-LINE SIGNATURE VERIFICATION Comparison of Stroke Extraction Methods
Author(s):  
Bence Kovári, Áron Horváth, Zsolt Kertész and Csaba Illés
Abstract:  
Stroke extraction is a necessary part of the majority of semantics-based off-line signature verification systems. This paper discusses some stroke extraction variants which can be efficiently used in such environments. First, the different aspects and problems of signature verification are discussed in conjunction with off-line analysis methods. It is shown that on-line analysis methods usually perform better than off-line methods because they can make use of temporal information (and thereby get a better perception of the semantics of the signature). To improve the accuracy of off-line signature verification methods, the extraction of semantic information is necessary. Three different approaches are introduced to reconstruct the original strokes of a signature: one purely based on simple image processing algorithms, one with some more intelligent processing, and one with a pen model. The methods are examined and compared with regard to their benefits and drawbacks for further signature processing.

 
Title:  
TOWARDS A MULTIMODELING APPROACH OF DYNAMIC SYSTEMS FOR DIAGNOSIS
Author(s):  
Marc Le Goc and Emilie Masse
Abstract:  
This paper presents the basis of a multimodeling methodology that uses a CommonKADS conceptual model to interpret diagnosis knowledge, with the aim of representing the system with three models: a structural model describing the relations between the components of the system, a functional model describing the relations between the values the variables of the system can take (i.e. the functions), and a behavioural model describing the states of the system and the discrete events firing the state transitions. The relation between these models is made through the notion of a variable: a variable used in a function of the functional model is associated with an element of the structural model, and a discrete event is defined as the assignment of a value to a variable. The methodology is presented in this paper with a simple but pedagogical problem: the technical diagnosis of a car. The motivating idea is that using the same level of abstraction as the expert can facilitate the problem-solving reasoning.

 
ACT4SOC - Workshop on Architectures, Concepts and Technologies for Service Oriented Computing  
Title:  
Designing a Generic and Evolvable Software Architecture for Service Oriented Computing
Author(s):  
Herwig Mannaert, Kris Ven and Jan Verelst
Abstract:  
Service Oriented Architecture (SOA) is becoming the new paradigm for developing enterprise systems. We consider SOA to be concerned with the high-level design of software, commonly called software architecture. In this respect, SOA can be considered a new architectural style. This paper proposes an advanced software architecture for information systems. It was developed by systematically applying solid software engineering principles such as loose coupling, interface stability and asynchronous communication to contemporary n-tier architectures for information systems in Java Enterprise Edition. The resulting architecture is SOA-compliant, very generic, and exhibits architectural qualities such as evolvability to a very high extent.

 
Title:  
Architectural Models for Client Interaction on Service-Oriented Platforms
Author(s):  
Luiz Olavo Bonino da Silva Santos, Luís Ferreira Pires and Marten van Sinderen
Abstract:  
Service-oriented platforms can provide different levels of functionality to client applications, as well as different interaction models. Depending on the platform's goals and the computing capacity of its expected clients, the platform functionality can range from just an interface supporting the discovery of services to a full set of intermediation facilities. Each of these options requires an architectural model to be followed in order to support the corresponding interaction pattern. This paper discusses architectural models for service-oriented platforms and how different choices of interaction models influence the design of such platforms. A service platform's functionality provisioning can vary from a simple discovery mechanism to a complete set, including discovery, selection, composition and invocation. This paper also discusses two architectural design choices reflecting distinct types of functionality provisioning, namely the matchmaker and the broker. The broker provides a more complete set of functionality to clients, while the matchmaker leaves part of the functionality and responsibility to the client, demanding a client platform with more computational capabilities.
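
The split of responsibility between the two models can be caricatured in a few lines: with a matchmaker the platform only resolves the service and the capable client invokes it, while a broker also performs the invocation for a thin client. All class and method names below are hypothetical.

```python
class Matchmaker:
    def __init__(self):
        self.services = {}                 # name -> callable endpoint

    def publish(self, name, endpoint):
        self.services[name] = endpoint

    def discover(self, name):
        return self.services[name]         # thin platform, capable client

class Broker(Matchmaker):
    def request(self, name, *args):
        return self.discover(name)(*args)  # platform invokes for thin client

mm = Matchmaker()
mm.publish("weather", lambda city: f"sunny in {city}")
print(mm.discover("weather")("Enschede"))  # client performs the invocation

br = Broker()
br.publish("weather", lambda city: f"sunny in {city}")
print(br.request("weather", "Enschede"))   # broker performs the invocation
```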

 
Title:  
Applying Component Concepts to Service Oriented Design: A Case Study
Author(s):  
Balbir Barn and Samia Oussena
Abstract:  
This paper argues that appropriate modeling methods for service oriented development have not matured at the same pace as the technology because the conceptual underpinning that binds methods and technology has not been sufficiently articulated and developed. The paper describes an adaptation and enhancement of component based techniques to support the development of a service oriented method. As a result of the evaluation of using component concepts to support service oriented design, an integrated conceptual model describing how concepts from services and components are related is presented. The experimental data derives from a complex case study from the Higher Education Enterprise arena.

 
Title:  
An Approach to the Analysis and Evaluation of an Enterprise Service Ecosystem
Author(s):  
Nicolas Repp, Stefan Schulte, Julian Eckert, Rainer Berbner and Ralf Steinmetz
Abstract:  
Currently, the implementation of service-oriented concepts is one of the main activities of many IT and business departments in enterprises across various industries. Service-orientation as a concept is no novelty for many enterprises - many software systems and components offering technical and business functionality already comply with service-oriented principles. Nevertheless, the analysis, evaluation, and integration of existing services are often neglected in process models describing the implementation of service-oriented concepts. This paper describes an approach to the analysis and evaluation of those existing services that are to become part of the enterprise service ecosystem, which we call the service inventory. The service inventory is realized as a generic extension to existing systems development methodologies, which allows its integration into the service-oriented methodology already in use. The service inventory approach is based on Service-oriented Architecture research, principles from systems analysis and design, as well as auditing principles.

 
Title:  
Integrated Governance of IT Services for Value Oriented Organizations
Author(s):  
Antonio Folgueras Marcos, Belén Ruiz Mezcua and Ángel García Crespo
Abstract:  
This paper presents a latest-generation model for the management and governance of information technologies and information systems (IT) in organizations. IT governance is the key to achieving a high level of maturity in the SOA Maturity Model. Currently, there are standards and methodologies of international scope that cover in detail the different critical aspects of Information Technology governance, such as CobiT, ITIL, ISO 20000 and the Balanced Scorecard for IT. This governance model starts from the knowledge acquired in the mentioned standards and allows carrying out the tactical and strategic governance of all the activities in an information systems department. The model depicted in this paper includes: the monthly, tactical control of the IT processes; the system portfolio management; IT strategy planning; and the alignment of strategies with operations (four alignments, because both systems and business are considered). This model is called IG4 (Information Governance Four Generation) because it includes important improvements on classic IT management and governance models.

 
Title:  
An Algorithm for Automatic Service Composition
Author(s):  
Eduardo Silva, Luís Ferreira Pires and Marten van Sinderen
Abstract:  
Telecommunication companies are struggling to provide their users with value-added services. These services are expected to be context-aware, attentive and personalized. Since it is not economically feasible to build services separately by hand for each individual user, service providers are searching for alternatives to automate service creation. The IST-SPICE project aims at developing a platform for the development and deployment of innovative value-added services. In this paper we introduce our algorithm to cope with the task of automatic composition of services. The algorithm considers that every available service is semantically annotated. Based on a user/developer service request, a matching service is composed in terms of component services. The composition follows a semantic graph-based approach, on which atomic services are iteratively composed based on services' functional and non-functional properties.
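
As a rough sketch of the graph-based idea, assuming services annotated with required inputs and provided outputs, the forward search below chains invocable services until the requested output becomes producible. The service descriptions are invented, and the actual SPICE algorithm additionally weighs non-functional properties.

```python
# Hypothetical semantic annotations: service -> (required inputs, provided outputs).
SERVICES = {
    "locate":    ({"user_id"}, {"location"}),
    "forecast":  ({"location"}, {"weather"}),
    "recommend": ({"weather", "user_id"}, {"activity"}),
}

def compose(available, goal):
    known, plan = set(available), []
    progress = True
    while progress and not goal <= known:
        progress = False
        for name, (inputs, outputs) in SERVICES.items():
            if name not in plan and inputs <= known:
                plan.append(name)   # service is invocable now: chain it
                known |= outputs
                progress = True
    return plan if goal <= known else None

print(compose(available={"user_id"}, goal={"activity"}))
# ['locate', 'forecast', 'recommend']
```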

 
Title:  
Interoperating Context Discovery Mechanisms
Author(s):  
Tom Broens, Remco Poortinga and Jasper Aarts
Abstract:  
Context-aware applications adapt their behaviour to the current situation of the user. This information, for instance user location and user availability, is called context information. Context is delivered by distributed context sources that need to be discovered before they can be used to retrieve context. Currently, multiple context discovery mechanisms exist, exhibiting heterogeneous capabilities (e.g. communication mechanisms and data formats), and they may become available to context-aware applications at arbitrary moments during the application's lifespan. In this paper, we discuss a middleware mechanism that enables a (mobile) context-aware application to interoperate transparently with the different context discovery mechanisms available at run-time. The goal of the proposed mechanism is to hide the heterogeneity and availability of context discovery mechanisms from context-aware applications, thereby facilitating their development.

 
Title:  
Using Temporal Business Rules to Synthesize Service Composition Process Models
Author(s):  
Jian Yu, Jun Han, Paolo Falcarin and Maurizio Morisio
Abstract:  
Based on our previous work on the conformance verification of service compositions, in this paper we present a framework and associated techniques to generate the process models of a service composition from a set of temporal business rules. Dedicated techniques including path-finding, branch structure introduction, and parallel structure introduction are used to semi-automatically synthesize the process models from the semantics-equivalent Finite State Automata of the rules. These process models naturally satisfy the prescribed behavioral constraints of the rules. With the domain knowledge encoded in the temporal business rules, an executable service composition program, e.g. a BPEL program, can be further generated from the process models.
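
A toy illustration of the rule-to-automaton step: the invented temporal rule "payment must not occur before approval" becomes a small finite state automaton whose accepting runs are exactly the conforming traces, which is the kind of artifact the synthesis techniques above start from. The rule and alphabet are illustrative examples, not taken from the paper.

```python
ACCEPTING = {"start", "approved"}
TRANSITIONS = {   # (state, event) -> next state
    ("start", "approve"):     "approved",
    ("start", "pay"):         "violation",
    ("approved", "approve"):  "approved",
    ("approved", "pay"):      "approved",
    ("violation", "approve"): "violation",   # violation is absorbing
    ("violation", "pay"):     "violation",
}

def accepts(trace):
    """Run the automaton over an event trace and test conformance."""
    state = "start"
    for event in trace:
        state = TRANSITIONS[(state, event)]
    return state in ACCEPTING

print(accepts(["approve", "pay"]))   # True: conforms to the rule
print(accepts(["pay", "approve"]))   # False: payment before approval
```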

 
Special Session on Metamodelling - Utilization in Software Engineering (MUSE 2008)  
Title:  
A META-MODEL FOR REQUIREMENTS VARIABILITY ANALYSIS Application to Tool Generation and Model Composition
Author(s):  
Bruno Gonzalez-Baixauli, Miguel A. Laguna and Julio Cesar Sampaio do Prado Leite
Abstract:  
Variability analysis techniques have an important drawback: the analysis of Non-Functional Requirements. Usually, these techniques do not deal with them fully, or only mention that they should be considered. In our framework, we use an intentional model-based approach where the functional models define the variability space and the non-functional models provide the criteria for choosing a variant. We found two problems with this approach: a) the integration of functional and non-functional models, and b) scalability, due to the number of variants. Our proposed solution is to use ideas from aspect-oriented software development. We therefore use aspect relationships to relate functional and non-functional models, obtaining a better separation of concerns that improves scalability. In this paper we define the meta-model that sets the modeling language used by our framework. The meta-model is applied to: a) generate a modeling environment using a meta-modeling tool; and b) define the rules that describe the composition of aspectual models to create new models.

 
Title:  
SOLVING DESIGN ISSUES IN WEB META-MODEL APPROACH TO SUPPORT END-USER DEVELOPMENT
Author(s):  
Buddhima De Silva and Athula Ginige
Abstract:  
End-user development is proposed as a solution to the issues business organisations face when developing web applications to support their business processes. We propose a meta-model based development approach to support end-user development. End-users can actively participate in web application development using tools to populate and instantiate the meta-model. The meta-model has three abstraction levels: Shell, Application and Function. At the Shell Level, we model aspects common to all business web applications, such as navigation and access control. At the Application Level, we model aspects common to specific web applications, such as workflows. At the Function Level, we model requirements specific to the identified use cases. In this paper we discuss how we have solved issues in application development for business end-users, such as the need for a central repository of data, common login, optimizing the user model, application portability, and the balance between “Do it Yourself” (DIY) and professional developers in a hierarchical meta-model approach. These solutions are being incorporated into the Component based E-Application Development and Deployment Shell (CBEADS©) version 4, which supports the meta-model implementation. We believe that these solutions will help end-users to efficiently and effectively develop web applications using the meta-model based development approach.

 
Title:  
DOMAIN-SPECIFIC MODELLING WITH ATOM3
Author(s):  
Hans Vangheluwe, Ximeng Sun and Eric Bodden
Abstract:  
Using domain-specific modelling environments maximally constrains users, matching their mental model of the problem domain, and allows them to only build syntactically correct models. Anecdotal evidence shows that domain-specific modelling can drastically improve productivity as well as product quality. In this article, the foundations of (domain-specific) modelling language design are presented. It is shown how all aspects can be explicitly (meta-)modelled enabling the efficient synthesis of domain-specific, visual, modelling environments. The case in point of AToM3, A Tool for Multi-formalism and Meta Modelling, is elaborated. Concepts are illustrated by modelling, analysis, simulation, and eventual synthesis of software for Traffic networks.

 
Title:  
A FRAMEWORK FOR EXECUTING CROSS-MODEL TRANSFORMATIONS BASED ON PLUGGABLE METAMODELS
Author(s):  
Geert Delanote, Sven De Labey, Koen Vanderkimpen and Eric Steegmans
Abstract:  
The design of complex software systems requires developers to use a variety of modeling languages in order to model various system aspects. The heterogeneity of these modeling languages gives rise to new challenges. Design decisions must be communicated across heterogeneous models, thus creating a need for cross-model communication. Furthermore, models must be transformable between different modeling languages, thus creating a need for cross-model transformations. By supporting only a single modeling language and by providing limited interoperability, however, the majority of today's modeling tools can provide neither cross-model communication nor transformation, thus jeopardizing the consistency of the design as a whole. In this paper, we present the design of a transformation framework, Chameleon, which supports cross-model transformations based on pluggable metamodels. We discuss how Chameleon eases the realization of concrete metamodels by offering abstract modeling constructs, and we show how it is able to execute transformations between concrete instances of such metamodels.

 
Copyright © INSTICC
