CONFERENCE
Area 1 - Programming Languages
Area 2 - Software Engineering
Area 3 - Distributed and Parallel Systems
Area 4 - Information Systems and Data Management
Area 5 - Knowledge Engineering
WORKSHOP
Workshop on Metamodelling – Utilization in Software Engineering (MUSE 2006)

 
Area 1 - Programming Languages
Title:  
FROM STATIC TO DYNAMIC PROCESS TYPES
Author(s):  
Franz Puntigam
Abstract:  
Process types -- a kind of behavioral type -- specify constraints on message acceptance, both to ensure synchronization and to determine object usage and component behavior in object-oriented languages. So far, process types have been regarded as a purely static concept for Actor languages, incompatible with inherently dynamic programming techniques. We propose solutions to the problems that have prevented the approach from being usable in more conventional dynamic and concurrent languages. The proposed approach can ensure message acceptability and supports local and static checking of race-free programs.

Title:  
ON STATE CLASSES AND THEIR DYNAMIC SEMANTICS
Author(s):  
Ferruccio Damiani, Elena Giachino, Paola Giannini and Emanuele Cazzola
Abstract:  
We introduce "state classes", a construct to program objects that can be safely concurrently accessed. State classes model the notion of object's "state" (intended as some abstraction over the value of fields) that plays a key role in concurrent object-oriented programming (as the "state" of an object changes, so does its coordination behavior). We show how state classes can be added to Java-like languages by presenting StateJ, an extension of Java with state classes. The operational semantics of the state class construct is illustrated both at an abstract level, by means of a core calculus for StateJ, and at a concrete level, by defining a translation from StateJ into Java.

Title:  
ASPECTBOXES – CONTROLLING THE VISIBILITY OF ASPECTS
Author(s):  
Alexandre Bergel, Robert Hirschfeld, Siobhán Clarke and Pascal Costanza
Abstract:  
Aspect composition is a challenging issue on which no agreement currently exists within the aspect-oriented programming community. In this paper we present a modular construct for aspects, called aspectboxes, that enables the application of aspects to be limited to a well-defined scope. An aspectbox encapsulates class and aspect definitions. Classes can be imported into an aspectbox, defining a base system to which aspects can be applied. Refinements defined by an aspect are visible only within the aspectbox that defines the aspect; outside it, the base system behaves as if there were no aspect.

Title:  
SOFTWARE IMPLEMENTATION OF THE IEEE 754R DECIMAL FLOATING-POINT ARITHMETIC
Author(s):  
Marius Cornea, Cristina Anderson and Charles Tsen
Abstract:  
The IEEE Standard 754-1985 for Binary Floating-Point Arithmetic is being revised, and an important addition to the current text is the definition of decimal floating-point arithmetic. This is aimed mainly at providing a robust, reliable framework for financial applications, which are often subject to legal requirements concerning the rounding and precision of results in areas such as banking, telephone billing, tax calculation, currency conversion, insurance, and accounting in general. Using binary floating-point calculations to approximate decimal calculations has led in the past to the existence of numerous proprietary software packages, each with its own characteristics and capabilities. New algorithms are presented in this paper which were used for a generic software implementation of the IEEE 754R decimal floating-point arithmetic, but which may also be suitable for a hardware implementation. In the absence of hardware to perform IEEE 754R decimal floating-point operations, this new software package, which will be fully compliant with the standard proposal, should be an attractive option for various financial computations. Preliminary performance results are included, showing one to two orders of magnitude improvement over a software package currently incorporated in GCC.
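The underlying problem is easy to reproduce: binary floating point cannot represent most decimal fractions exactly. The following small Java program (our illustration, independent of the paper's package) contrasts binary doubles with a decimal representation using the standard BigDecimal class.

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class DecimalVsBinary {
        public static void main(String[] args) {
            // 0.10 has no exact binary representation, so repeated
            // additions drift away from the exact decimal result.
            double binary = 0.0;
            for (int i = 0; i < 10; i++) binary += 0.10;
            System.out.println(binary);      // 0.9999999999999999, not 1.0

            // A decimal representation stays exact and rounds as mandated.
            BigDecimal decimal = BigDecimal.ZERO;
            for (int i = 0; i < 10; i++) decimal = decimal.add(new BigDecimal("0.10"));
            System.out.println(decimal.setScale(2, RoundingMode.HALF_EVEN)); // 1.00
        }
    }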

Title:  
ON ABILITY OF ORTHOGONAL GENETIC ALGORITHMS FOR THE MIXED CHINESE POSTMAN PROBLEM
Author(s):  
Hiroshi Masuyama, Tetsuo Ichimori and Toshihiko Sasama
Abstract:  
The well-known Chinese Postman Problem has many applications, and it has been proved to be NP-hard in graphs where directed and undirected edges are mixed. In this paper, in order to investigate the salient features of orthogonal design, we designed a genetic algorithm adopting an orthogonal crossover operation to solve this mixed Chinese Postman Problem and evaluated its ability. The results indicate that for problems of small size, the orthogonal genetic algorithm can find near-optimal solutions within a moderate number of generations. We confirmed that the orthogonal design shows better performance even at graph scales where simple genetic algorithms almost never find the solution. However, introducing orthogonal design alone is not yet effective for the Chinese Postman Problem of practical size, where a solution can be obtained in fewer than 10^4 generations. This paper concludes that the optimal scale of the orthogonal array for this mixed Chinese Postman Problem does not conform to the same scale as for the multimedia multicast routing problem.

Title:  
ASSOCIATIVE PROGRAMMING AND MODELING: ABSTRACTIONS OVER COLLABORATION
Author(s):  
Bent Bruun Kristensen
Abstract:  
Associations as abstractions over collaborations are motivated and explored. Associations are seen as first class concepts at both modeling and programming levels. Associations are seen as concepts/phenomena and possess properties. Various notations for collaboration in object-oriented programming and modeling are discussed and compared to associations. Concurrent and interleaved execution of objects is described in relation to associations.

Title:  
ZÁS - ASPECT-ORIENTED AUTHORIZATION SERVICES
Author(s):  
Paulo Zenida, Manuel Menezes de Sequeira, Diogo Henriques and Carlos Serrão
Abstract:  
This paper proposes Zás, a novel, flexible, and expressive authorization mechanism for Java. Zás has been inspired by Ramnivas Laddad's proposal to modularize Java Authentication and Authorization Services (JAAS) using an Aspect-Oriented Programming (AOP) approach. Zás' aims are to be simultaneously very expressive, reusable, and easy to use and configure. Zás allows authorization services to be non-invasively added to existing code. It also cohabits with a wide range of authentication mechanisms. Zás uses Java 5 annotations to specify permission requirements to access controlled resources. These requirements may be changed directly during execution. They may also be calculated by client supplied permission classes before each access to the corresponding resource. These features, together with several mechanisms for permission propagation, expression of trust relationships, depth of access control, etc., make Zás, we believe, an interesting starting point for further research on the use of AOP for authorization.
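As a rough sketch of the annotation-driven style described above (the annotation name and its element below are hypothetical, not Zás' actual API):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical permission-requirement annotation in the spirit of Zás.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Requires {
        String permission();
    }

    class AccountService {
        // Declarative requirement: an authorization aspect would intercept
        // the call, evaluate the caller's permissions, and grant or deny
        // access, leaving the business logic untouched.
        @Requires(permission = "account.withdraw")
        public void withdraw(String account, long amountInCents) {
            // business logic only - authorization remains non-invasive
        }
    }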

Title:  
A DECLARATIVE EXECUTABLE MODEL FOR OBJECT-BASED SYSTEMS BASED ON FUNCTIONAL DECOMPOSITION
Author(s):  
Pierre Kelsen
Abstract:  
Declarative models are a commonly used approach to deal with software complexity: by abstracting away the intricacies of the implementation these models are often easier to understand than the underlying code. Popular modeling languages such as UML can however become complex to use when modeling systems in sufficient detail. In this paper we introduce a new declarative model, the EP-model, named after the basic entities it contains - events and properties - that possesses the following features: it has a small metamodel; it supports a graphical notation; it can represent both static and dynamic aspects of an application; finally, it allows executable models to be described by annotating model elements with code snippets. By leaving complex parts at the code level this hybrid approach achieves executability while keeping the basic modeling language simple.

Title:  
AVOIDING TWO-LEVEL SYSTEMS: USING A TEXTUAL ENVIRONMENT TO ADDRESS CROSS-CUTTING CONCERNS
Author(s):  
David Greaves
Abstract:  
We believe that, owing to the paucity of textual facilities in contemporary HLLs (high-level languages), large software systems frequently require an additional level of meta-programming to sufficiently address their cross-cutting concerns. A programming team can either implement its system by writing the main application in a slightly customised language together with the corresponding customised compiler, or it can use a macro pre-processor to provide the remaining cross-cutting requirements not found in the chosen HLL. With either method, a two-level system arises. This paper argues that textual macro-programming is an important cross-cutting medium, that existing proposals for sets of pre-defined AOP (aspect-oriented programming) joinpoints are overly constrictive, and that a generalised meta-programming facility, based on a textual environment, should instead be directly embedded in HLLs. The paper presents the semantics of the main additions required in an HLL designed with this feature. We recommend that the textual features be compiled out, as the reference semantics would generally be too inefficient if naively interpreted.

Area 2 - Software Engineering
Title:  
BRIDGING BETWEEN MIDDLEWARE SYSTEMS: OPTIMISATIONS USING DOWNLOADABLE CODE
Author(s):  
Jan Newmarch
Abstract:  
There are multiple middleware systems and no single system is likely to become predominant. There is therefore an interoperability requirement between clients and services belonging to different middleware systems. Typically this is done by a bridge between invocation and discovery protocols. In this paper we introduce three design patterns based on a bridging service cache manager and dynamic proxies. This is illustrated by examples including a new custom lookup service which allows Jini clients to discover and invoke UPnP services. There is a detailed discussion of the pros and cons of each pattern.
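The dynamic-proxy ingredient of these patterns can be sketched with the standard java.lang.reflect.Proxy API; the Clock interface and the handler below are invented for illustration and merely stand in for the protocol-specific marshalling a real Jini/UPnP bridge would perform.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    public class BridgeDemo {
        // Service interface as seen by the client (hypothetical example).
        interface Clock {
            String getTime();
        }

        public static void main(String[] args) {
            InvocationHandler bridge = (proxy, method, methodArgs) -> {
                // A real bridge would marshal the call into the target
                // middleware's invocation protocol (e.g. a UPnP action).
                System.out.println("forwarding " + method.getName());
                return "12:00";
            };
            Clock clock = (Clock) Proxy.newProxyInstance(
                    Clock.class.getClassLoader(), new Class<?>[] { Clock.class }, bridge);
            System.out.println(clock.getTime());  // the client never sees the bridge
        }
    }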

Title:  
DEVELOPING A CONFIGURATION MANAGEMENT MODEL FOR USE IN THE MEDICAL DEVICE INDUSTRY
Author(s):  
Fergal McCaffery, Rory O’Connor and Gerry Coleman
Abstract:  
This paper outlines the development of a Configuration Management model for the MEDical device software industry (CMMED). The paper details how medical device regulations associated with Configuration Management (CM) may be satisfied by adopting less than half of the practices from the CM process area of the Capability Maturity Model Integration (CMMI). It also investigates how the CMMI CM process area may be extended with additional practices that are outside the remit of the CMMI, but are required in order to satisfy medical device regulatory guidelines.

Title:  
ENGINEERING A COMPONENT LANGUAGE: COMPJAVA
Author(s):  
Hans Albrecht Schmid and Marco Pfeifer
Abstract:  
After initial great enthusiasm about the new generation of component languages like ArchJava, ComponentJ and ACOEL, a closer inspection and use of these languages identified, together with their strong points, some smaller but disturbing drawbacks, which are listed in the paper. It would be harmful if those fine languages, which complement OO-languages in a perfect way, did not find wide acceptance due to these drawbacks. Therefore, we took an engineering approach to the construction of a new Java-based component language without those drawbacks. That means we derived general component language requirements from those drawbacks, designed a first language version meeting the requirements, used it in projects, and iterated three times through the same cycle with improved language versions. The result, called CompJava, which should be fairly stable by now, is presented in the paper.

Title:  
MDE FOR BPM - A Systematic Review
Author(s):  
Jose Manuel Perez, Francisco Ruiz and Mario Piattini
Abstract:  
Due to the rapid change in the business processes of organizations, business process management (BPM) has come into being. Although BPM helps business analysts to manage all concerns related to business processes, this is not enough. The organization’s value chain changes very rapidly; to modify simultaneously the systems that support the business management process is impossible. MDE (Model Driven Engineering) is a good support for transferring these business process changes to the systems that implement these processes. Thus, by using any MDE approach, such as MDA, the alignment between business people and software engineering should be improved. To discover the different proposals that exist in this area, a systematic review was performed. As a result, the OMG’s metamodel of business process definition (BPDM) has been identified as the standard that will be the key for the application of MDA for BPM.

Title:  
USING PRE-REQUIREMENTS TRACING TO INVESTIGATE REQUIREMENTS BASED ON TACIT KNOWLEDGE
Author(s):  
Andrew Stone and Pete Sawyer
Abstract:  
Pre-requirements specification tracing concerns the identification and maintenance of relationships between requirements and the knowledge and information used by analysts to inform the requirements' formulation. However, such tracing is often not performed as it is a time-consuming process. This paper presents a tool for retrospectively identifying pre-requirements traces by working backwards from requirements to the documented records of the elicitation process such as interview transcripts or ethnographic reports. We present a preliminary evaluation of our tool’s performance using a case study. One of the key goals of our work is to identify requirements that have weak relationships with the source material. There are many possible reasons for this, but one is that they embody tacit knowledge. Although we do not investigate the nature of tacit knowledge in RE we believe that even helping to identify the probable presence of tacit knowledge is useful. This is particularly true for circumstances when requirements' sources need to be understood during, for example, the handling of change requests.

Title:  
GENERIC FEATURE MODULES: TWO-STAGED PROGRAM CUSTOMIZATION
Author(s):  
Sven Apel, Martin Kuhlemann and Thomas Leich
Abstract:  
With feature-oriented programming (FOP) and generics, programmers have proven means for structuring software so that its elements can be reused and extended. This paper addresses the question of whether both approaches are equivalent. While FOP targets large-scale building blocks and compositional programming, generics provide fine-grained customization at the type level. We contribute an analysis that reveals the individual capabilities of both approaches with respect to program customization, and from it we extract guidelines on which approach performs well in which situations. Furthermore, we present a fully implemented language proposal that integrates FOP and generics in order to combine their strengths. Our approach facilitates two-dimensional program customization: (1) selecting sets of features, (2) subsequently parameterizing the features. This covers a broader spectrum of code reuse, reflected by proper language-level mechanisms. We underpin our proposal by applying it to a non-trivial case study.
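A minimal plain-Java illustration of the two customization dimensions (the class names are ours, not the paper's language proposal): generics parameterize at the type level, while a feature can be approximated as a coarse-grained refinement layered onto an existing class.

    import java.util.ArrayList;
    import java.util.List;

    public class TwoKindsOfCustomization {
        // Rough stand-in for a "feature": a layered refinement of a class.
        static class LoggingList<E> extends ArrayList<E> {
            @Override public boolean add(E element) {
                System.out.println("add: " + element);
                return super.add(element);
            }
        }

        public static void main(String[] args) {
            // Generics: fine-grained, type-level customization per use site.
            List<String> plain = new ArrayList<>();
            // Combined: select the feature first, then parameterize the type.
            List<Integer> logged = new LoggingList<>();
            plain.add("a");
            logged.add(42);
        }
    }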

Title:  
A DETECTION METHOD OF FEATURE INTERACTIONS FOR TELECOMMUNICATION SERVICES USING A NEW EXECUTION MODEL
Author(s):  
Sachiko Kawada, Masayuki Shimokura and Tadashi Ohta
Abstract:  
A service that behaves normally in isolation may behave differently when initiated together with another service. This undesirable behavior is called a feature interaction. In investigating the international benchmark for detecting interactions in telecommunication services, it was found that many interactions that do not actually occur (called "seeming interactions" in this paper) were falsely detected. The reason for these false detections is that interactions were detected using a state transition model which does not properly represent the process flow in a real system. Since seeming interactions increase the time needed to resolve interactions, avoiding their false detection is an important issue. In this paper, to avoid false detection of seeming interactions, we propose a new interaction detection method based on a new specification execution model which properly reflects the process flow in a real system.

Title:  
USING LINGUISTIC PATTERNS FOR IMPROVING REQUIREMENTS SPECIFICATION
Author(s):  
Carlos Videira, David Ferreira and Alberto Rodrigues da Silva
Abstract:  
Despite the efforts made to overcome the problems associated with the development of information systems, it is still an immature activity, with negative consequences for time, budget and quality. One of the root causes of this situation is the fact that many projects do not follow a structured, standard and systematic approach, such as the methodologies and best practices proposed by Software Engineering. In this paper, we describe how linguistic patterns can be used to improve the quality of requirements specifications, using them as the basis for a new requirements specification language called ProjectIT-RSL. To guarantee the consistency of the written requirements and the integration with generative programming tools, the requirements are analysed by parsing tools and immediately validated according to the syntactic and semantic rules of the language.

Title:  
ADVANCES ON TESTING SAFETY-CRITICAL SOFTWARE - Goal-driven Approach, Prototype-tool and Comparative Evaluation
Author(s):  
Guido Pennella, Christian Di Biagio, Gianfranco Pesce and Giovanni Cantone
Abstract:  
The reference company for this paper - the Italian branch of a multinational organization working in the domain of safety-critical systems - evaluated the major tools that the market provides for testing safety-critical software as insufficiently featured for its quality improvement goals. Consequently, in order to investigate the space of possible solutions, the company's research lab started an academic cooperation, which led to shared knowledge and eventually to the establishment of a common research team. Once we had transformed those goals into detailed technical requirements, and determined that it was possible to realize them conveniently in a tool, we proceeded to analyze, construct, and eventually use in the field the prototype Software Test Framework. This tool allows non-intrusive measurements on different hardware-software targets of a distributed system running under one or more Unix-standard OSs, e.g. LynxOS, AIX, Solaris, and Linux. The tool acquires and graphically displays the real-time flow of data, enabling users to verify and validate software products, and to quickly diagnose and resolve emerging performance problems. This paper reports on the characteristics of Software Test Framework, its architecture, and results from a case study. Based on a comparison of results with previous tools, we can say that Software Test Framework is leading to a new concept of tool for the domain of safety-critical software.

Title:  
AN APPLICATION OF THE 5-S ACTIVITY THEORETIC REQUIREMENTS METHOD
Author(s):  
Robert B. K. Brown, Peter Hyland and Ian C. Piper
Abstract:  
Requirements analysis in highly interactive systems necessarily involves eliciting and analysing informal and complex stakeholder utterances. We investigate whether Activity Theory may provide a useful basis for a new method. Preliminary results indicate that Activity Theory may cope well with problems of this kind, and may indeed offer some improvements.

Title:  
LEARNING EFFECTIVE TEST DRIVEN DEVELOPMENT - Software Development Projects in an Energy Company
Author(s):  
Wing Kum Amy Law
Abstract:  
The tests needed to prove, verify, and validate a software application are determined before the software application is developed. This is the essence of test driven development, an agile practice built upon sound software engineering principles. When applied effectively, this practice can have many benefits. The question becomes how to effectively adopt test driven development. This paper describes the experiences and lessons learned by two teams who adopted test driven development methodology for software systems developed at TransCanada. The overall success of test driven methodology is contingent upon the following key factors: experienced team champion, well-defined test scope, supportive database environment, repeatable software design pattern, and complementary manual testing. All of these factors and the appropriate test regime will lead to a better chance of success in a test driven development project.
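The test-first cycle itself can be sketched in a few lines; the following minimal JUnit 4 example (an invented domain, unrelated to TransCanada's systems) shows a test written before the production code, together with the code that makes it pass.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class TariffTest {
        // Written first: this test fails until Tariff is implemented.
        @Test
        public void flatRateIsAppliedPerUnit() {
            Tariff tariff = new Tariff(3);         // 3 cents per unit
            assertEquals(30, tariff.costFor(10));  // 10 units -> 30 cents
        }
    }

    // Written second, with just enough behaviour to make the test pass.
    class Tariff {
        private final int centsPerUnit;
        Tariff(int centsPerUnit) { this.centsPerUnit = centsPerUnit; }
        int costFor(int units) { return centsPerUnit * units; }
    }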

Title:  
WEB METRICS SELECTION THROUGH A PRACTITIONERS’ SURVEY
Author(s):  
Julian Ruiz, Coral Calero and Mario Piattini
Abstract:  
There are many web metrics proposals. However, most previous work does not include their practical application, and the risk is that all the effort made remains merely an academic exercise. In order to close this gap, and to make the work developed applicable, it is necessary to involve the different stakeholders related to web technologies as an essential part of web metrics definition. It is therefore crucial to know their perception of web metrics, especially that of those involved in the development and maintenance of web sites and applications. In this paper, we present the work we have done to find out which web metrics are considered useful by web developers and maintainers. This study has been performed on the basis of the 385 web metrics classified in WQM, a Web Quality Model defined in previous work, using as a validation tool a survey of professionals in web technologies. As a result, we have found that the most highly weighted metrics were related to usability. This means that web professionals give more importance to the user than to their own effort.

Title:  
BUILDING MAINTENANCE CHARTS AND EARLY WARNING ABOUT SCHEDULING PROBLEMS IN SOFTWARE PROJECTS
Author(s):  
Sergiu Gordea and Markus Zanker
Abstract:  
Imprecise effort estimation is a well-known problem of software project management that frequently leads to the setting of unrealistic deadlines. Estimations are even less precise when the development of new product releases is mixed with the maintenance of older versions of the system. Software engineering measurement should assess the development process and discover problems occurring within it. However, there is evidence indicating a low success rate of measurement programs, mainly because they are not able to extract knowledge and present it in a form easily understandable to developers and managers, and because they are not able to suggest corrective actions based on the collected metric data. In this paper we propose an approach for classifying time efforts into maintenance categories, and we propose the use of maintenance charts for controlling the development process and warning about scheduling problems. Identifying scheduling problems as soon as possible allows managers to plan effective corrective actions for coping with the planned release deadlines.

Title:  
A FRAMEWORK FOR THE DEVELOPMENT OF MONITORING SYSTEMS SOFTWARE
Author(s):  
Ildefonso Martínez-Marchena, Llanos Mora-López and Mariano Sidrach de Cardona
Abstract:  
This paper describes a framework for the development of software for monitoring installations. Usually, the monitoring of systems is done either by building a dedicated program for each installation, with no reuse of previously developed programs, or by using SCADA (Supervisory Control And Data Acquisition) programs; however, SCADA tools are aimed basically at control rather than monitoring alone, and given the small complexity of this type of installation, the use of a SCADA program is not justified. The proposed framework solves the monitoring of an installation in an easy way. In this framework, the generation of a monitoring program consists of three well-established phases. The first step is to model the system or installation using a set of generic description rules and the XML language. The second step is to describe the communications among the different devices. To do this, we have used OPC (OLE for Process Control) technology. With OPC we have established an abstraction layer that makes it possible to communicate with any device in a generic way; we have built an OPC server for each device which does not depend on the type of device. In the third step, the way in which the monitored data will be stored and displayed is defined. The framework also incorporates modules that allow us to store and visualize all the data obtained from the different devices. We have used the proposed framework to build complete applications for monitoring two different solar energy installations.

Title:  
TOWARDS A LANGUAGE INDEPENDENT REFACTORING FRAMEWORK
Author(s):  
Carlos López, Raúl Marticorena, Yania Crespo and Francisco Javier Pérez
Abstract:  
Using metamodels to keep source code information is one of the current trends in refactoring tools. This representation makes it possible to detect refactoring opportunities through metrics and heuristics, and to execute refactorings on metamodel instances. This paper describes an approach to language-independent reuse in metamodel-based refactoring detection and execution. We use an experimental metamodel, MOON, and analyze the problems of migrating from MOON to the UML 2.0 metamodel or adapting from UML 2.0 to MOON. Some code refactorings can be detected and applied on basic UML abstractions. Nevertheless, other refactorings need information related to program instructions. The "Action" concept, included in UML 2.0, is a fundamental unit of behaviour specification that makes it possible to store program instructions and to obtain information at this granularity level. On this basis, we validate the suitability of the UML 2.0 metamodel as a solution for developing refactoring frameworks.

Title:  
UNIFIED DESCRIPTION AND DISCOVERY OF P2P SERVICES
Author(s):  
G. Athanasopoulos, A. Tsalgatidou and M. Pantazoglou
Abstract:  
Our era has been marked by the emergence of the service-oriented computing (SOC) paradigm. This new trend has reshaped the way distributed applications are built and has influenced current computing paradigms, such as p2p and grid computing. SOC's main objective is to leverage interoperability among applications and systems; however, the emergence of various types of services such as web, grid and p2p services has raised several interoperability concerns among these services as well as within each of these service models. In order to overcome these incompatibilities, appropriate middleware and mechanisms need to be developed to provide the necessary layers of abstraction and a unified framework that shields a service user from the underlying details of each service platform. Yet, for the development of such middleware and mechanisms to be effective, appropriate conceptual models need to be constructed. In this paper, we briefly present a generic service model which was constructed to facilitate the unified utilization of heterogeneous services, with emphasis on its properties for the modeling of p2p services. Moreover, we illustrate how this model was instantiated for the representation of JXTA services and describe the service description and discovery mechanisms that were built upon it. We regard this generic service model as a first step in achieving interoperability between incompatible types of services.

Title:  
TOWARDS ANCHORING SOFTWARE MEASURES ON ELEMENTS OF THE PROCESS MODEL
Author(s):  
Bernhard Daubner, Bernhard Westfechtel and Andreas Henrich
Abstract:  
It is widely accepted that software measurement should be automated by proper tool support whenever possible and reasonable. While many tools exist that support automated measurement, most of them lack the ability to reuse defined metrics and to conduct the measurement in a standardized way. This article presents an approach to anchoring software measures on elements of the process model. This makes it possible to define the relevant software measures independently of a concrete project. At project runtime, the work breakdown structure is used to establish a link between the measurement anchor points within the process model and the project entities that actually have to be measured. Utilizing the project management tool Maven, a framework has been developed that automates the measurement process.

Title:  
A DYNAMIC ANALYSIS TOOL FOR EXTRACTING UML 2 SEQUENCE DIAGRAMS
Author(s):  
Paolo Falcarin and Marco Torchiano
Abstract:  
There is a wide range of formats and meta-models for representing the information extracted by reverse engineering tools. Currently, UML tools with reverse engineering capabilities are not truly interoperable due to differences in the interchange format, and they cannot extract complete and integrated models. The forthcoming UML 2.0 standard includes a complete meta-model and a well-defined interchange format (XMI 2.0). Since an implementation of the meta-model is available, it is a viable option to use UML 2.0 as the modeling format for reverse-engineered models. In this paper, we propose a technique to automatically extract sequence diagrams from Java programs, compliant with the UML 2.0 specifications. The proposed approach takes advantage of the Eclipse platform and different plug-ins to provide an integrated solution: it relies on a new dynamic analysis technique based on Aspect-Oriented Programming, and it recovers the interactions between objects even in the presence of reflection and polymorphism.

Title:  
REACTIVE, DISTRIBUTED AND AUTONOMIC COMPUTING ASPECTS OF AS-TRM
Author(s):  
E. Vassev, H. Kuang, O. Ormandjieva and J. Paquet
Abstract:  
Autonomic computing is a new research area focused on making complex computing systems smarter and easier to manage. The main objective of this research is a rigorous investigation of an architectural approach for developing and evolving reactive autonomic (self-managing) systems, and for continuous monitoring of their quality. To our knowledge, ours is the first attempt to model reactive behavior in autonomic systems. In this paper, we draw upon our research experience and the experience of other autonomic computing researchers to discuss the main aspects of the Autonomic Systems Timed Reactive Model (AS-TRM) architecture and to demonstrate its reactive, distributed and autonomic computing nature.

Title:  
A SYSTEMATIC REVIEW OF MEASUREMENT IN SOFTWARE ENGINEERING - State-of-the-art in Measures
Author(s):  
Oswaldo Gómez, Hanna Oktaba, Mario Piattini and Félix García
Abstract:  
The present work provides a summary of the state of the art in software measures by means of a systematic review of the current literature. Nowadays, many companies need to answer the following questions: how to measure? when to measure? and what to measure? A lot of effort has been made to answer these questions, resulting in a large amount of sometimes confusing and unclear information, which needs to be properly processed and classified in order to provide a better overview of the current situation. We have used a software measurement ontology to classify and put in order the large amount of data in this field. We have also analyzed the results of the systematic review to show the trends in the software measurement field and the software processes on which measurement efforts have focused. This has allowed us to discover which parts of the process are not sufficiently supported by measurement, and thus to motivate future research in those areas.

Title:  
MINING ANOMALIES IN OBJECT-ORIENTED IMPLEMENTATIONS THROUGH EXECUTION TRACES
Author(s):  
Paria Parsamanesh, Amir Abdollahi Foumani and Constantinos Constantinides
Abstract:  
The term “anomaly” refers to any phenomenon that can negatively affect software quality. Examples include low cohesion, high coupling and crosscutting. In this paper we present a new technique for identifying anomalies in object-oriented implementations, based on the observation of patterns of invoked operations during execution. In our technique we deploy a relational database to store execution paths in order to extract knowledge from them.

Title:  
A SCENARIO GENERATION METHOD USING A DIFFERENTIAL SCENARIO
Author(s):  
Masayuki Makino and Atsushi Ohnishi
Abstract:  
A method for generating scenarios using differential information between normal scenarios is presented. The behaviours of normal scenarios belonging to the same problem domain are quite similar; we derive the differential information between them and apply it to generate new alternative/exceptional scenarios. Our method is illustrated with examples. This paper describes (1) a language for describing scenarios in which simple action traces are embellished to include typed frames based on a simple case grammar of actions, (2) the introduction of the differential scenario, and (3) examples of scenario generation using the differential scenario.

Title:  
SYSTEM TEST CASES FROM USE CASES
Author(s):  
Javier J. Gutiérrez, María J. Escalona, Manuel Mejías and Jesús Torres
Abstract:  
Use cases have become a widely used technique for defining the functionality of a software system. Use cases are also the main artefact for obtaining test cases with which to verify the correct implementation of the functionality in the system under test. This paper describes a new, formal and systematic approach for generating system test cases from use cases. This process has been designed especially for testing the system from the point of view of the actors, through its graphical user interfaces.

Title:  
INTRODUCTION TO CHARACTERIZATION OF MONITORS FOR TESTING SAFETY-CRITICAL SOFTWARE
Author(s):  
Christian Di Biagio, Guido Pennella, Anna Lomartire and Giovanni Cantone
Abstract:  
The goal of this paper is to characterize software technologies for testing hard real-time software, focusing on the measurement of CPU and memory loads, performance monitoring of processes and their threads, intrusiveness, and some other key features and capabilities. The context is the Italian branch of a multinational organization working in the domain of safety-critical systems, viewed from the perspectives of the project managers of such an organization on one side, and the applied researcher on the other. The paper first sketches the state of the art in the field of testing technologies for safety-critical systems, then presents a characterization model based on the goals of the reference company, and finally applies that model to the major testing tools available.

Title:  
REVERSE ENGINEERING ELECTRONIC SERVICES - From e-Forms to Knowledge
Author(s):  
Costas Vassilakis, George Lepouras and Akrivi Katifori
Abstract:  
On their route to e-governance, public administrations have developed electronic services through which citizens and enterprises can conduct their transactions with the government. Each electronic service encompasses a significant amount of knowledge in the form of examples, help texts, legislation excerpts, validation checks, etc. This knowledge has been contributed by domain experts in the phases of service analysis, design and implementation; however, being bundled within the software, it cannot be readily retrieved and used in other organizational processes, including the development of new services. In this paper, we present an approach for reverse engineering electronic services in order to formulate knowledge items at a high level of abstraction, which can be made available to the employees of the organizations. Moreover, the knowledge items formulated in the reverse engineering process are stored in a knowledge-based e-service development platform, making them readily available for use in the development of other services.

Title:  
A PRIMITIVE EXECUTION MODEL FOR HETEROGENEOUS MODELING
Author(s):  
Frédéric Boulanger and Guy Vidal-Naquet
Abstract:  
Heterogeneous modeling is modeling using several modeling methods. Since many different modeling methods are used in different crafts, heterogeneous modeling is necessary to build a model of a system that takes the modeling habits of the designers into account. A model of computation is a formal description of the behavioral aspect of a modeling method: it is the set of rules that allows the behavior of a system to be computed by composing the behaviors of its parts or components. Heterogeneous modeling allows different parts of the system to be modeled using different models of computation, so some parts of the system may obey certain rules while other parts obey other rules for the composition of their behaviors. However, computing the behavior of a system which is modeled using several models of computation can be difficult if the meaning of each model of computation, and what happens at their boundary, is not well defined. In this article, we propose an execution model that provides a framework of primitive operations for expressing how a model of computation is interpreted to compute the behavior of a model of a system. When models of computation are "implemented" in this execution model, it becomes possible to specify exactly what the joint use of several models of computation in the model of a system means.

Title:  
SYSML-BASED WEB ENGINEERING - A Successful Way to Design Web Applications
Author(s):  
Haroon Tarawneh
Abstract:  
This paper discusses the importance of the new modeling language SysML (Systems Modeling Language) and shows how it differs from UML 2.0 (Unified Modeling Language) in the development of web-based applications. The development of web applications has become more complex and challenging than most of us think. In many ways, it is also different from and more complex than traditional software development, and there is a lack of a proven methodology to guide software engineers in building web-based applications. In this paper we recommend using SysML for building and designing web-based applications.

Title:  
VIEWPOINT FOR MAINTAINING UML MODELS AGAINST APPLICATION CHANGES
Author(s):  
Walter Cazzola, Ahmed Ghoneim and Gunter Saake
Abstract:  
The urgency that characterizes many requests for evolution forces system administrators/developers to adapt the system directly, without going through the adaptation of its design. This creates a gap between the design information and the system it describes. The existing design models provide a static and often outdated snapshot of the system that does not reflect the system's changes. Software developers spend a lot of time evolving the system and then updating the design information according to the evolution of the system. In this respect, we present an approach to automatically keep the design information (UML diagrams in our case) consistent with the system when it evolves. The UML diagrams are bound to the application, and all changes to it are reflected in the diagrams as well. This approach can be applied either to design models automatically extracted from the implementation code or to models produced at design time.

Title:  
MODELLING THE UNEXPECTED BEHAVIOURS OF EMBEDDED SOFTWARE USING UML SEQUENCE DIAGRAMS
Author(s):  
Hee-jin Lee, In-Gwon Song, Sang-Uk Jeon, Doo-Hwan Bae and Jang-Eui Hong
Abstract:  
Although UML 2.0 sequence diagrams have recently incorporated several modelling features for embedded software, they are still deficient in conveniently depicting the exceptional behaviours of embedded software. Real-time and embedded systems may enter unexpected states because the system's user can generate episodic events under various conditions. In this paper, we propose an expressive and intuitive modelling feature for such behaviour modelling. Specifically, our proposal provides the new syntax and semantics required to express exceptional behaviours using UML sequence diagrams. This modelling feature is explained and demonstrated with an example of the call-setup procedure of a CDMA mobile phone.

Area 3 - Distributed and Parallel Systems
Title:  
ALGORITHMIC SKELETONS FOR BRANCH & BOUND
Author(s):  
Michael Poldner and Herbert Kuchen
Abstract:  
Algorithmic skeletons are predefined components for parallel programming. We will present a skeleton for branch & bound problems for MIMD machines with distributed memory. This skeleton is based on a distributed work pool. We discuss two variants, one with supply-driven work distribution and one with demand-driven work distribution. This approach is compared to a simple branch & bound skeleton with a centralized work pool, which has been used in a previous version of the skeleton library Muesli. Based on experimental results for two example applications, namely the n-puzzle and the traveling salesman problem, we show that the distributed work pool is clearly better and enables good runtimes and in particular scalability. Moreover, we discuss some implementation aspects such as termination detection as well as overlapping computation and communication.
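The core loop that such skeletons parallelize can be stated compactly; the following sequential Java sketch (our illustration, not Muesli's API) shows generic best-first branch & bound over a priority-ordered work pool. In the distributed variants, each worker owns such a pool and exchanges subproblems with other workers.

    import java.util.List;
    import java.util.PriorityQueue;

    public class BranchAndBound<P> {
        public interface Spec<P> {
            boolean isSolution(P p);   // is this node a complete solution?
            long bound(P p);           // lower bound on the best completion
            List<P> branch(P p);       // split a node into subproblems
        }

        public P solve(Spec<P> spec, P root) {
            PriorityQueue<P> pool = new PriorityQueue<>(
                    (a, b) -> Long.compare(spec.bound(a), spec.bound(b)));
            pool.add(root);
            long best = Long.MAX_VALUE;    // value of the incumbent solution
            P incumbent = null;
            while (!pool.isEmpty()) {
                P node = pool.poll();
                if (spec.bound(node) >= best) continue;   // prune dominated nodes
                if (spec.isSolution(node)) {
                    best = spec.bound(node);
                    incumbent = node;
                } else {
                    pool.addAll(spec.branch(node));       // supply new work
                }
            }
            return incumbent;
        }
    }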

Title:  
IMPACT OF WRAPPED SYSTEM CALL MECHANISM ON COMMODITY PROCESSORS
Author(s):  
Satoshi Yamada, Shigeru Kusakabe and Hideo Taniguchi
Abstract:  
Split-phase style transactions separate issuing a request and receiving the result of an operation into different threads, and are useful in hiding the latencies of unpredictably long operations. We apply this style to system calls in order to reduce the overhead caused by the system call mechanism on commodity processors; the style is also useful in enhancing locality of reference when executing the same system calls in multiple threads. In this paper, we evaluate the effectiveness of split-phase system calls on commodity processors; we call this mechanism the Wrapped System Call (WSC) mechanism. WSC can be effective even on commodity platforms which do not have explicit fine-grain multithreading support. We evaluate WSC based on a performance evaluation model and a simplified benchmark. We also apply WSC to variants of the cp program to observe its effect on the enhancement of locality of reference. When we apply WSC to cp, the combination of our split-phase system calls and our scheduling mechanism is effective in improving throughput by reducing mode changes and exploiting locality of reference.
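At the application level, the split-phase idea can be illustrated with standard Java concurrency utilities (an analogy only; the paper's mechanism operates at the system-call level and relies on its own scheduling):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class SplitPhaseDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService worker = Executors.newSingleThreadExecutor();

            // Phase 1: issue the request and return immediately.
            Future<String> pending = worker.submit(() -> {
                Thread.sleep(100);       // stands in for a slow system call
                return "result";
            });

            doOtherWork();               // the latency is hidden here

            // Phase 2: pick up the result in a separate step.
            System.out.println(pending.get());
            worker.shutdown();
        }

        private static void doOtherWork() {
            System.out.println("working while the call is in flight");
        }
    }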

Title:  
A HYBRID TOPOLOGY ARCHITECTURE FOR P2P FILE SHARING SYSTEMS
Author(s):  
Juan Pedro Muñoz-Gea, Josemaría Malgosa-Sanahuja, Pilar Manzanares-Lopez, Juan Carlos Sanchez-Aarnoutse and Antonio M. Guirado-Puerta
Abstract:  
On the Internet today, there has been much interest in emerging Peer-to-Peer (P2P) networks because they provide a good substrate for creating data sharing, content distribution, and application-layer multicast applications. There are two classes of P2P overlay networks: structured and unstructured. Structured P2P networks can efficiently locate items, but the searching process is not user friendly. Conversely, unstructured P2P networks have an easy mechanism for searching content, but the lookup process is inefficient. In this paper, we propose a hybrid structured/unstructured topology in order to take advantage of both kinds of networks. In addition, our proposal guarantees that if a content item is anywhere in the network, it will be reachable with probability one. Simulation results show that the behaviour of the network is stable and that the network distributes contents efficiently to avoid congestion.

Title:  
A METHODOLOGY FOR ADAPTIVE RESOLUTION OF NUMERICAL PROBLEMS ON HETEROGENEOUS HIERARCHICAL CLUSTERS
Author(s):  
Wahid Nasri, Sonia Mahjoub and Slim Bouguerra
Abstract:  
Solving a target problem with a single algorithm, or writing portable programs that perform well in every parallel environment, is not always possible, due to the increasing diversity of existing computational platforms, whose new characteristics influence the execution of parallel applications. The inherent heterogeneity and the diversity of networks in such environments represent a great challenge for efficiently implementing parallel applications for high-performance computing. Our objective in this work is to propose a generic framework based on adaptive techniques for solving a class of numerical problems on cluster-based heterogeneous hierarchical platforms. Toward this goal, we use adaptive approaches to better adapt a given application to a target parallel system. We apply this methodology to a basic numerical problem, namely matrix multiplication, determining an adaptive execution scheme that minimizes the overall execution time depending on the problem and architecture parameters.

Title:  
TOWARDS A QUALITY MODEL FOR GRID PORTALS
Author(s):  
Mª Ángeles Moraga, Coral Calero, Mario Piattini and David Walker
Abstract:  
Researchers require multiple computing resources when conducting their computational research; this makes necessary the use of distributed resources. In response to the need for dependable, consistent and pervasive access to distributed resources, the Grid came into existence. Grid portals subsequently appeared with the aim of facilitating the use and management of distributed resources. Nowadays, many Grid portals can be found. In addition, users can change from one Grid portal to another with only a click of a mouse. So, it is very important that users regularly return to the same Grid portal, since otherwise the Grid portal might disappear. However, the only mechanism that makes users return is high quality. Therefore, in this paper and with all the above considerations in mind, we have developed a Grid portal quality model from an existing portal quality model, namely, PQM. In addition, the model produced has been applied to two specific Grid portals.

Title:
LANGUAGE-BASED SUPPORT FOR SERVICE ORIENTED ARCHITECTURES: FUTURE DIRECTIONS
Author(s):  
Pablo Giambiagi, Olaf Owe, Gerardo Schneider and Anders P. Ravn
Abstract:  
The fast evolution of the Internet has popularized service-oriented architectures (SOA), with their promise of dynamic IT-supported inter-business collaborations. Yet this popularity is not reflected in the number of actual applications using the architecture. Programming models in use today are a poor match for the distributed, loosely-coupled, document-based nature of SOA, and the gap is actually increasing. For example, interoperability between different organizations requires contracts to reduce risks. Thus, high-level models of contracts are making their way into service-oriented architectures, but application developers are still left to their own devices when it comes to writing code that will comply with a contract. This paper surveys existing and future directions regarding language-based solutions to this problem.

Title:  
AN APPROACH TO MULTI AGENT COOPERATIVE SCHEDULING IN THE SUPPLY CHAIN, WITH EXAMPLES
Author(s):  
Joaquim Reis
Abstract:  
We present an approach to multi-agent cooperative supply-chain production-distribution scheduling, together with some results of its application based on simulations. The approach emphasises a temporal scheduling perspective. It is based on a set of three steps each agent must perform, in which the agents communicate through an interaction protocol, and it presupposes the sharing of some specific temporal information (among other information) about the scheduling problem, for coordination. It allows the set of agents involved to conclude whether a given scheduling problem has any feasible solutions. If it does, agent actions are prescribed to repair a first solution if it contains constraint violations. The resulting overall agent scheduling behaviour is cooperative.

Title:  
DEVELOPING A FAULT ONTOLOGY ENGINE FOR EVALUATION OF SERVICE-ORIENTED ARCHITECTURE USING A CASE STUDY SYSTEM
Author(s):  
Binka Gwynne and Jie Xu
Abstract:  
This paper reports on the current progress of research into the development and implementation of a Fault Ontology Engine. It was devised to facilitate the testing and evaluation of Service-Oriented Architecture, using ontologically supported software fault injection testing mechanisms. The aims of this research stem from the importance of SOA evaluation and the notion that testing and evaluation methods could be supported by autonomous software machines, due to the potential dynamics, size, and complexity of SOA, and the variety of resources offered as services. This paper contains descriptions of experimental work carried out in order to generate information for modelling the fault and failure domains of a real-world case study system. It is planned that analysis of the information from this case study system will be used in the Ontology Engine and adapted to support testing and evaluation mechanisms for SOA systems.

Title:  
A PEER-TO-PEER SEARCH IN DATA GRIDS BASED ON ANT COLONY OPTIMIZATION
Author(s):  
Uros Jovanovic and Bostjan Slivnik
Abstract:  
A method for (1) the efficient discovery of data in large distributed raw datasets and (2) the collection of the data thus procured is considered. It is a pure peer-to-peer method without any centralized control and is therefore primarily intended for large-scale, dynamic (data)grid environments. It provides a simple but highly efficient mechanism for keeping the load it causes under control, and it proves especially useful if data discovery and collection are to be performed simultaneously with dataset generation. The method supports user-specified extraction of structured metadata from raw datasets, and automatically performs aggregation of the extracted metadata. It is based on the principle of ant colony optimization (ACO). The paper focuses on effective data aggregation and includes a detailed description of the modifications of the basic ACO algorithm that are needed for effective aggregation of the extracted data. Using a simulator, the method was extensively tested on a wide set of different network topologies for different rates of data extraction and aggregation. Results of the most significant tests are included.
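For reference, the basic pheromone update that ACO variants modify looks as follows in Java (the generic textbook rule, not the paper's specific adaptation): evaporation over all edges, followed by reinforcement along the path travelled by a successful ant.

    public class PheromoneUpdate {
        public static void update(double[][] tau, int[] path, double rho, double deposit) {
            int n = tau.length;
            for (int i = 0; i < n; i++)          // evaporation: tau <- (1 - rho) * tau
                for (int j = 0; j < n; j++)
                    tau[i][j] *= (1.0 - rho);
            for (int k = 0; k + 1 < path.length; k++) {
                tau[path[k]][path[k + 1]] += deposit;   // reinforce traversed edges
                tau[path[k + 1]][path[k]] += deposit;
            }
        }
    }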

Title:  
AN APPROACH TO MULTI AGENT VISUAL COMPOSITION WITH MIXED STYLES
Author(s):  
Joaquim Reis
Abstract:  
Applications of computer systems that mix Art, Science and Engineering have appeared as a result of the evolution of information technologies over the last three decades. Frequently they involve the use of Artificial Intelligence techniques, and they have appeared in the fields of music, the literary arts and, more recently, the visual arts. This article proposes a computational system based on creative intelligent agents that, by making use of the shape grammar formalism, can support visual composition synthesis activities. In this system, each agent makes its creative contribution through a style of its own. Different modes of agent contribution can be considered, for instance cooperative or non-cooperative modes, with the resulting composition emerging from these contributions.

Area 4 - Information Systems and Data Management
Title:  
COMBINING INFORMATION EXTRACTION AND DATA INTEGRATION IN THE ESTEST SYSTEM
Author(s):  
Dean Williams and Alexandra Poulovassilis
Abstract:  
We describe an approach which builds on techniques from Data Integration and Information Extraction in order to make better use of the unstructured data found in application domains such as the Semantic Web which require the integration of information from structured data sources, ontologies and text. We describe the design and implementation of the ESTEST system which integrates available structured and semi-structured data sources into a virtual global schema which is used to partially configure an information extraction process. The information extracted from the text is merged with this virtual global database and is available for query processing over the entire integrated resource. As a result of this semantic integration, new queries can now be answered which would not be possible from the structured and semi-structured data alone. We give some experimental results from the ESTEST system in use.

Title:  
A FRAMEWORK FOR THE DEVELOPMENT AND DEPLOYMENT OF EVOLVING APPLICATIONS - Elaborating on the Model Driven Architecture Towards a Change-resistant Development Framework
Author(s):  
Georgios Voulalas and Georgios Evangelidis
Abstract:  
Software development is an R&D intensive activity, dominated by human creativity and diseconomies of scale. Current efforts focus on design patterns, reusable components and forward-engineering mechanisms as the right next stage in cutting the Gordian knot of software. Model-driven development improves productivity by introducing formal models that can be understood by computers. Through these models the problems of portability, interoperability, maintenance, and documentation are also successfully addressed. However, the problem of evolving requirements, which is more prevalent within the context of business applications, additionally calls for efficient mechanisms that ensure consistency between models and code, and enable seamless and rapid accommodation of changes, without interrupting severely the operation of the deployed application. This paper introduces a framework that supports rapid development and deployment of evolving web-based applications, based on an integrated database schema. The proposed framework can be seen as an extension of the Model Driven Architecture targeting a specific family of applications.

Title:  
SMART BUSINESS OBJECT - A New Approach to Model Business Objects for Web Applications
Author(s):  
Xufeng (Danny) Liang and Athula Ginige
Abstract:  
At present, there is a growing need to accelerate the development of web applications and to support their continuous evolution due to evolving business needs. The object persistence and web interface generation capabilities in contemporary MVC (Model View Controller) web application development frameworks, and the model-to-code generation capability in Model-Driven Development tools, have simplified the modelling of business objects for developing web applications. However, there is still a mismatch between current technologies and the essential support for high-level, semantically rich modelling of web-ready business objects for the rapid development of modern web applications. We therefore propose a novel concept called the Smart Business Object (SBO) to solve the above-mentioned problem. In essence, SBOs are web-ready business objects: they have high-level, web-oriented attributes such as email, URL, video, image, document, etc. This allows SBOs to be modelled at a higher level of abstraction than in traditional modelling approaches. A lightweight, near-English modelling language called SBOML (Smart Business Object Modelling Language) is proposed to model SBOs. We have created a toolkit to streamline the creation (modelling) and consumption (execution) of SBOs. With these tools, we are able to build fully functional web applications in a very short time without any coding.

Title:  
MEASURING EFFECTIVENESS OF COMPUTING FACILITIES IN ACADEMIC INSTITUTES - A New Solution for a Difficult Problem
Author(s):  
Smriti Sharma and Veena Bansal
Abstract:  
There has been a constant effort to evaluate the success of Information Technology in organizations. This kind of investment is extremely hard to evaluate because of the difficulty of identifying tangible benefits, as well as the high uncertainty about achieving the expected value. Though a lot of research has taken place in this direction, not much has been written about evaluating IT in non-profit organizations such as educational institutions. Measures for evaluating the success of IT in such institutes are markedly different from those for business organizations. The purpose of this paper is to build further upon the existing body of research by proposing a new model for measuring the effectiveness of computing facilities in academic institutes. As a baseline, DeLone & McLean's model for measuring the success of Information Systems (DeLone & McLean 1992, DeLone & McLean 2003) is used, as it is the most pioneering model in this regard.

Title:  
DISCOVERY AND AUTO-COMPOSITION OF SEMANTIC WEB SERVICES
Author(s):  
Philippe Larvet and Bruno Bonnin
Abstract:  
In order to facilitate the on-demand delivery of new services for mobile terminals as well as for fixed phones, we propose a user-centric solution based on a Semantic Service-Oriented Architecture (SSOA) for the instant building and delivery of new services composed from existing Web services discovered and assembled on the fly. This solution, based on semantic descriptions of Web services, is made of three main mechanisms: a semantic service discoverer, transparent to the user, finds the pertinent Web services matching the user's original request, expressed vocally or by an SMS or a simple text; a semantic service composer, using the semantic descriptions of the Web services, combines and orchestrates the discovered services in order to build a new service fully matching the user's request; and a service deliverer makes the new service immediately accessible to the user.

Title:  
PCA-BASED DATA MINING PROBABILISTIC AND FUZZY APPROACHES WITH APPLICATIONS IN PATTERN RECOGNITION
Author(s):  
Luminita State, Catalina Cocianu, Panayiotis Vlamos and Viorica Stefanescu
Abstract:  
The aim of the paper is to develop a new learning-by-examples PCA-based algorithm for extracting skeleton information from data, to assure both good recognition performance and generalization capabilities. Here the generalization capabilities are viewed twofold: on one hand, to identify the right class for new samples coming from one of the classes taken into account and, on the other hand, to identify samples coming from a new class. The classes are represented in the measurement/feature space by continuous distributions; that is, the model is given by the family of density functions {f_h : h ∈ H}, where H stands for the finite set of hypotheses (classes). The basis of the learning process is represented by samples of possibly different sizes coming from the considered classes. The skeleton of each class is given by the principal components obtained for the corresponding sample. The recognition algorithm results from a defuzzification technique that identifies the class whose skeleton is the "nearest" to the tested example, where the closeness degree is expressed in terms of the amount of disturbance determined by the decision of allotting it to the corresponding class.
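
A minimal sketch of the nearest-skeleton rule described above, assuming (our simplification, not the paper's exact fuzzy measure) that the disturbance of allotting an example to a class can be approximated by its reconstruction error against that class's principal components:

    import numpy as np
    from sklearn.decomposition import PCA

    def fit_skeletons(samples_by_class, n_components=2):
        """Fit one PCA 'skeleton' per class from that class's sample."""
        return {h: PCA(n_components=n_components).fit(X)
                for h, X in samples_by_class.items()}

    def classify(x, skeletons):
        """Assign x to the class whose skeleton is 'nearest', i.e. whose
        principal components reconstruct x with the least disturbance."""
        def disturbance(pca):
            z = pca.transform(x.reshape(1, -1))
            return float(np.sum((x - pca.inverse_transform(z).ravel()) ** 2))
        # Thresholding the minimal disturbance could flag samples that
        # belong to none of the known classes, i.e. to a new class.
        return min(skeletons, key=lambda h: disturbance(skeletons[h]))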

Title:  
PROGRAM VERIFICATION TECHNIQUES FOR XML SCHEMA-BASED TECHNOLOGIES
Author(s):  
Suad Alagic, Mark Royer and David Briggs
Abstract:  
Representation and verification techniques for XML Schema types, structures, and applications in the program verification system PVS are presented. Type derivations by restriction and extension as defined in XML Schema are represented in the PVS type system using predicate subtyping. The availability of parametric polymorphism in PVS makes it possible to represent XML sequences and sets via PVS theories. Powerful PVS logic capabilities are used to express complex constraints of XML Schema and its applications. The transaction verification methodology developed in the paper is based on a declarative, logic-based specification of the frame constraints and the actual transaction updates. A sample XML application given in the paper includes constraints typical for XML schemas, such as keys and referential integrity, and in addition ordering and range constraints. The developed proof strategy is demonstrated by a sample transaction verification with respect to this schema. The overall approach has a model theory based on the view of XML types and structures as theories. The core of this model theory is also presented in the paper.

Title:  
ADMIRE FRAMEWORK: DISTRIBUTED DATA MINING ON DATA GRID PLATFORMS
Author(s):  
Nhien An Le Khac, Tahar Kechadi and Joe Carthy
Abstract:  
In this paper, we present the ADMIRE architecture, a new framework for developing novel and innovative data mining techniques to deal with very large and distributed heterogeneous datasets in both commercial and academic applications. The main ADMIRE components are detailed, as well as its interfaces, which allow users to efficiently develop and implement their data mining techniques and applications on a Grid platform such as the Globus Toolkit, DGET, etc.

Title:  
USAGE TRACKING LANGUAGE: A META LANGUAGE FOR MODELLING TRACKS IN TEL SYSTEMS
Author(s):  
Christophe Choquet and Sébastien Iksal
Abstract:  
In the context of distance learning and teaching, the re-engineering process needs feedback on the learners' usage of the learning system. This feedback is conveyed by numerous vectors, such as interviews, questionnaires, videos or log files. We consider it important to interpret tracks in order to compare the designer's intentions with the learners' activities during a session. In this paper, we present the Usage Tracking Language (UTL). This language is designed to be generic, and we present an instantiation of a part of it with IMS Learning Design, the representation model we chose for our three years of experiments.

Title:  
ON CONTEXT AWARE PREDICATE SEQUENCE QUERIES
Author(s):  
Hagen Höpfner
Abstract:  
Due to the limited input capabilities of small mobile information system clients such as mobile phones, supporting a descriptive query language like SQL is not a must. Furthermore, information systems with mobile clients have to address characteristics resulting from client mobility as well as from wireless communication. These additional functions can be supported by a reasonable, well-defined query notation. Moreover, such systems should be context aware. In this paper we present a query notation named "context aware predicate sequence queries" which respects these issues.

Title:  
CLICKSTREAM DATA MINING ASSISTANCE - A Case-Based Reasoning Task Model
Author(s):  
Cristina Wanzeller and Orlando Belo
Abstract:  
This paper presents a case-based reasoning system to assist users in knowledge discovery from clickstream data. The system is especially oriented towards storing and making use of the knowledge acquired from experience in solving specific clickstream data mining problems inside a corporate environment. We describe the main design, implementation and characteristics of this system. The system was implemented as a prototype Web-based application, centralizing past mining processes in a corporate memory. Its main goal is the decentralized recommendation of the mining strategies best suited to the problem at hand, accepting as inputs the characteristics of the available clickstream data and the analysis requirements. The system also takes advantage of and integrates related corporate information resources, supporting a semi-automated data gathering approach across the organization.

Title:  
A DATA MINING APPROACH TO LEARNING PROBABILISTIC USER BEHAVIOR MODELS FROM DATABASE ACCESS LOG
Author(s):  
Mikhail Petrovskiy
Abstract:  
The problem of user behavior modeling arises in many fields of computer science and software development. User models play an important role in recommendation and collaborative filtering systems, in intrusion detection systems and in some tasks of software engineering. In this paper we investigate a data mining approach for learning probabilistic user behavior models from database usage logs. We propose a simple but effective procedure for translating database traces into a representation suitable for the application of user behavior modeling techniques based on sequential, associative or classification data mining models. However, most existing methods have a serious drawback: they rely on the order of actions and ignore the time intervals between actions. To avoid this problem we propose a novel method based on the combination of a decision tree classification algorithm and an empirical time-dependent feature map motivated by potential function theory. As a result, the designed method generates understandable probabilistic user behavior models that take time dependencies into account. The performance of the proposed method was experimentally evaluated on real-world data. The comparison with the results of state-of-the-art data mining methods confirmed the outstanding performance of our method in predictive user behavior modeling and demonstrated competitive results in anomaly detection.
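
As a rough illustration of the combination described above, the sketch below encodes each database trace as a vector of time-decayed action counts, so that the intervals between actions matter rather than only their order, and trains a standard decision tree on the result. The exponential decay is our assumed stand-in for the paper's potential-function-based feature map, and all data are invented:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    ACTIONS = ["SELECT", "INSERT", "UPDATE", "DELETE"]   # example alphabet

    def featurize(trace, tau=60.0):
        """Encode a trace [(timestamp, action), ...] as time-decayed
        counts: recent actions contribute more than old ones."""
        t_now = trace[-1][0]
        v = np.zeros(len(ACTIONS))
        for t, a in trace:
            v[ACTIONS.index(a)] += np.exp(-(t_now - t) / tau)
        return v

    traces = [[(0.0, "SELECT"), (30.0, "SELECT"), (95.0, "UPDATE")],
              [(0.0, "INSERT"), (10.0, "INSERT"), (20.0, "DELETE")]]
    labels = ["analyst", "batch_job"]                    # hypothetical roles
    clf = DecisionTreeClassifier().fit([featurize(t) for t in traces], labels)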

Title:  
VIRTUAL MUSEUM – AN IMPLEMENTATION OF A MULTIMEDIA OBJECT-ORIENTED DATABASE
Author(s):  
Rodrigo Filev Maia and Jorge Rady Almeida Junior
Abstract:  
This paper describes the main characteristics involved in the process of using multimedia content on Internet sites and presents a proposal for an implementation of an object-oriented database to meet the multimedia data requirements of a dynamic website. An implementation of the proposed architecture is described, consisting of a virtual museum built for the Contemporary Art Museum of USP, called Virtual MAC, which was elected the 3rd best virtual museum in the world by INFOLAC Web 2005 (UNESCO). The main objective of Virtual MAC is to create a virtual collection of works of art and make it available on the Internet. Our analysis shows that it is more appropriate to use the Object-Oriented paradigm instead of Relational Modelling due to the nature of the multimedia data and the structure of the dynamic web site used for Virtual MAC.

Title:  
AN ANALYSIS OF THE EFFECTS OF SPATIAL LOCALITY ON THE CACHE PERFORMANCE OF BINARY SEARCH TREES
Author(s):  
Thomas B. Puzak and Chun-Hsi Huang
Abstract:  
The topological structure of binary search trees does not translate well into the linear nature of a computer's memory system, resulting in high cache miss rates on data accesses. This paper analyzes the cache performance of search operations on several varieties of binary trees. Using uniform and nonuniform key distributions, the number of cache misses encountered per search is measured for vanilla, AVL, and two types of cache-aware trees. Additionally, concrete measurements of the degree of spatial locality observed in the trees are provided. This allows the trees to be evaluated for situational merit, and definitive explanations of their performance to be given. Results show that the balancing operations of AVL trees effectively negate any spatial locality gained through naive allocation schemes. Furthermore, for uniform input this paper shows that large cache lines are only beneficial to trees that consider the cache's line size in their allocation strategy. Results in the paper demonstrate that adaptive cache-aware allocation schemes that approximate the key distribution of a tree have universally better performance than static schemes that favor a particular key distribution.
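
The measurement at the heart of this study is easy to emulate: count how many distinct cache lines one root-to-leaf search touches. A minimal sketch, assuming 64-byte lines and nodes identified by hypothetical byte addresses:

    LINE_SIZE = 64   # assumed cache line size in bytes

    def lines_touched(path_addresses, line_size=LINE_SIZE):
        """Distinct cache lines touched by one search, given the byte
        address of every node visited on the root-to-leaf path."""
        return len({addr // line_size for addr in path_addresses})

    scattered = [0, 4096, 8192, 12288]   # pointer allocation: one line per node
    packed = [0, 16, 32, 48]             # cache-aware: subtree packed in a line
    print(lines_touched(scattered), lines_touched(packed))   # 4 vs 1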

Title:  
A UNIFIED APPROACH FOR SOFTWARE PROCESS REPRESENTATION AND ANALYSIS
Author(s):  
Vassilis C. Gerogiannis, George Kakarontzas and Ioannis Stamelos
Abstract:  
This paper presents a unified approach for software process management which combines object-oriented (OO) structures with formal models based on (high-level timed) Petri nets. This pairing may prove beneficial not only for the integrated representation of software development processes, human resources and work products, but also for analysing properties and detecting errors in a software process specification before the process is put to actual use. The use of OO models provides the advantages of graphical abstraction, a high level of understanding and a manageable representation of software process classes and instances. The resulting OO models are mechanically transformed into a high-level timed Petri net representation to derive a model for formally proving process properties as well as for applying managerial analysis. We demonstrate the applicability of our approach by addressing a software process modelling example problem used in the literature to exercise various software process modelling notations.

Title:  
USING LINGUISTIC TECHNIQUES FOR SCHEMA MATCHING
Author(s):  
Ozgul Unal and Hamideh Afsarmanesh
Abstract:  
Organizations from a variety of domains have now clearly realized the need for collaboration to achieve higher goals and/or to be more productive. Among others, the collaboration requirement has become prominent in the Biodiversity domain. Nevertheless, as in any other collaborative network, different Biodiversity nodes represent a variety of heterogeneous structurings/organizations of information, which is very challenging. Automatic resolution of semantic and schematic heterogeneity still remains a bottleneck for providing integrated access to, and data sharing among, heterogeneous, autonomous, and distributed biodiversity databases in a network of biodiversity organizations. In order to deal with this problem, matching components among database schemas need to be identified and the heterogeneity needs to be resolved by creating the corresponding mappings, in a process called schema matching. One important step in this process is the identification of the syntactic and semantic similarity among elements from different schemas, usually referred to as Linguistic Matching. The Linguistic Matching component of a schema matching and integration system called SASMINT is the focus of this paper. Unlike other systems, which typically utilize only a limited number of similarity metrics, SASMINT makes effective use of NLP techniques for Linguistic Matching and proposes a weighted usage of several syntactic and semantic similarity metrics. In order to demonstrate the accuracy of the weighted sum of metrics, a number of tests have been carried out, the results of which are presented in this paper. Since it is not easy for the user to determine the weights, SASMINT provides, as another novelty, a component called Sampler to support the automatic generation of weights.
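
To give a flavour of the weighted combination, the sketch below blends one syntactic metric (edit-distance based, via difflib) with one semantic metric (WordNet path similarity, via NLTK). SASMINT combines several metrics of each kind and derives the weights automatically with its Sampler component; the particular metrics and weights here are our assumptions:

    from difflib import SequenceMatcher
    from nltk.corpus import wordnet as wn   # needs nltk.download('wordnet')

    def syntactic_sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def semantic_sim(a, b):
        """Best WordNet path similarity over the two words' sense pairs."""
        scores = [s1.path_similarity(s2) or 0.0
                  for s1 in wn.synsets(a) for s2 in wn.synsets(b)]
        return max(scores, default=0.0)

    def combined_sim(a, b, w_syn=0.4, w_sem=0.6):   # assumed weights
        return w_syn * syntactic_sim(a, b) + w_sem * semantic_sim(a, b)

    print(combined_sim("employee", "worker"))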

Title:  
FORMAL FRAMEWORK FOR SEMANTIC INTEROPERABILITY
Author(s):  
Nadia Yaacoubi Ayadi, Mohamed Ben Ahmed and Yann Pollet
Abstract:  
We address in this paper the general issue of "a posteriori" semantic interoperability between systems relying on semantically heterogeneous schemas that were designed for independent specific goals and activities. As conceptual schemas, we opt here for UML conceptual hierarchies of classes, integrating a very general notion of specialisation, including exclusive/inclusive and/or complete/incomplete constraints, etc., of which the classical inheritance link is a particular case. In our approach, we formalize all the knowledge embedded in hierarchies in terms of a consistent logical description. Using this set of predicates, we interpret a UML hierarchy as a property lattice by adding new abstractions of classes (which can be empty or non-empty). However, a property lattice is not unique: according to a particular attribute scaling, we may obtain different lattices that are all equivalent to the initial schema. We therefore introduce the notion of conceptual structure, which is the equivalence set of all equivalent lattices. In this context, given a set of assertions stating the existence of various semantic links between properties (attributes, concepts) of two schemas S1 and S2, our problem is to automatically build an "interoperation" structure equivalent to the relevant parts of the S1 and S2 schemas. We thus propose an algorithm to incrementally reorganize property lattices, based on the schemas' logical formulation. For the purposes of this reorganization algorithm, we propose a set of elementary operators.

Title:  
ON THE EVALUATION OF TREE PATTERN QUERIES
Author(s):  
Yangjun Chen
Abstract:  
The evaluation of XPath expressions can be handled as a tree embedding problem. In this paper, we propose two strategies for this problem: one is based on ordered-tree embedding and the other on unordered-tree embedding. For the ordered-tree embedding, our algorithm needs only O(|T|·|P|) time and O(|T|·|P|) space, where |T| and |P| stand for the numbers of nodes in the target tree T and the pattern tree P, respectively. We show that the unordered-tree embedding is NP-complete by a reduction from the constraint satisfaction problem (CSP). Based on this reduction, an algorithm is devised for the unordered problem. In the case that the branching of pattern trees is limited, the algorithm works in polynomial time.
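
For intuition about the ordered case, here is a compact backtracking sketch of ordered tree inclusion, one standard formalization of ordered-tree embedding: the pattern embeds if it can be obtained from the target by deleting nodes while preserving left-to-right order. It takes exponential time in the worst case and is offered only as a reference semantics; the paper's algorithm achieves the O(|T|·|P|) bounds stated above.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        label: str
        children: list = field(default_factory=list)

    def forest_included(pats, forest):
        """Can the ordered pattern forest be embedded in the target forest?"""
        if not pats:
            return True
        if not forest:
            return False
        t0, rest = forest[0], forest[1:]
        # Option 1: delete t0's root; its children join the target forest.
        if forest_included(pats, t0.children + rest):
            return True
        # Option 2: map the first pattern root onto t0's root; the other
        # pattern roots must then embed strictly to the right of t0.
        p0 = pats[0]
        return (p0.label == t0.label
                and forest_included(p0.children, t0.children)
                and forest_included(pats[1:], rest))

    def included(pattern, target):
        return forest_included([pattern], [target])

    p = Node("a", [Node("b"), Node("c")])
    t = Node("a", [Node("x", [Node("b")]), Node("c")])
    print(included(p, t))   # True: b and c occur in order below a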

Title:  
CRYSTALLIZATION OF AGILITY - Back to Basics
Author(s):  
Asif Qumer and Brian Henderson-Sellers
Abstract:  
There are a number of agile and traditional methodologies for software development. Agilists provide agile principles and agile values to characterize agile methods, but there is no clear and inclusive definition of agile methods; consequently, it is not feasible to draw a clear distinction between traditional and agile software development methods in practice. The purpose of this paper is to explain the concept of agility in detail, and then to suggest a definition of agile methods that makes it possible to rank agile methods and differentiate them from other available methods.

Title:  
DATA MINING METHODS FOR GIS ANALYSIS OF SEISMIC VULNERABILITY
Author(s):  
Florin Leon and Gabriela M. Atanasiu
Abstract:  
This paper aims at designing data mining methods for evaluating the seismic vulnerability of regions of the built infrastructure. A supervised clustering methodology is employed, based on k-nearest-neighbour graphs. Unlike other classification algorithms, the method has the advantage of taking into account any distribution of training instances as well as the data topology. For the particular problem of seismic vulnerability analysis using a Geographic Information System, the gradual formation of clusters (for different values of k) allows a decision-making stakeholder to visualize the details of the cluster areas more clearly. The performance of the k-nearest-neighbour graph method is tested on three classification problems, and finally it is applied to a sample from a digital map of Iasi, a large city located in the north-eastern part of Romania.
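
The gradual cluster formation can be sketched with standard tooling: build a k-nearest-neighbour graph and read clusters off its connected components, varying k to merge or split them. The coordinates below are random stand-ins for georeferenced building data, and this unsupervised sketch omits the paper's use of class labels:

    import numpy as np
    from sklearn.neighbors import kneighbors_graph
    from scipy.sparse.csgraph import connected_components

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)),    # one vulnerability zone
                   rng.normal(3, 0.3, (50, 2))])   # another, well separated

    for k in (1, 3, 10):
        g = kneighbors_graph(X, n_neighbors=k)     # directed kNN graph
        n, _ = connected_components(g, directed=False)   # symmetrized
        print(f"k={k}: {n} cluster(s)")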

Title:  
SOFTWARE SYNTHESIS OF THE WEB-BASED QUESTIONNAIRE SYSTEM
Author(s):  
Masahiro Yamamoto
Abstract:  
Web-based questionnaires are increasingly used in business and personal settings. However, business staff and private individuals usually cannot implement such questionnaires by themselves and must ask information technology professionals to build them, which takes considerable time and money. It would therefore be very useful if business staff and individuals could easily create questionnaires themselves. This paper presents a software synthesis system for web-based questionnaire systems developed for this purpose.

Title:  
WEB INFORMATION SYSTEM: A FOUR LEVEL ARCHITECTURE
Author(s):  
Roberto Paiano, Anna Lisa Guido and Leonardo Mangia
Abstract:  
Business processes are playing a very important role in companies, and their explicit introduction into Information System architectures is a must. Given the interest shown in Web Applications, it is important to introduce a new web-oriented class of software which gives managers the possibility of operating directly on processes (we speak of process-oriented WIS - Web Information Systems). It is necessary to replace the three-level logic of traditional application development (Data, Business Logic, Presentation), which hides processes in the Business Logic, with a four-level logic that separates the process level from the application level: the definition and management of processes will no longer be tied solely to business logic. Our research, a work in progress, focuses on an innovative framework (software architecture and methodology) for Information System development that links together the know-how acquired in Web Application design and process definition concepts.

Title:  
INFORMATION SYSTEM DESIGN AND PROTOTYPING USING FORM TYPES
Author(s):  
Jelena Pavićević, Ivan Luković, Pavle Mogin and Miro Govedarica
Abstract:  
The paper presents the form type concept, which generalizes the screen forms that users utilize to communicate with an information system. The concept is semantically rich enough to enable the specification of an initial set of constraints that makes it possible to generate application prototypes together with the related implementation database schema. IIS*Case is a CASE tool based on the form type concept that supports conceptual modelling of an information system and its database schema. The paper outlines how this tool can generate XML specifications of application prototypes of an information system. The aim is to improve IIS*Case through the implementation of a module that can automatically produce an executable prototype of an information system.

Title:  
EFFECTIVENESS OF WEB BASED PBL USING COURSE MANAGEMENT TECHNOLOGIES: A CASE STUDY
Author(s):  
Havva H. Basak and Serdar Ayan
Abstract:  
Maritime education and training has typically focused on delivering practical courses for a practical vocation. In the modern environment, maritime personnel now need to be more professional, more open to change and more business-like in their thinking. This has led to changes in the education system that supports the maritime industries. Teaching thinking skills has become a major agenda for education, and Problem-Based Learning is a part of this thinking. Problem-Based Learning (PBL) within a web-based environment in the delivery of undergraduate courses has been investigated. The effects were evaluated by comparing the performance of students using web-based PBL with the outcomes of traditional PBL. The outcomes of the experiments were positive. By having real-life problems as focal points and students as active problem-solvers, the learning paradigm shifts towards the attainment of higher thinking skills.

Title:  
A BAYESIAN NETWORK TO STRUCTURE A DATA QUALITY MODEL FOR WEB PORTALS
Author(s):  
Angélica Caro, Coral Calero, Houari Sahraoui, Ghazwa Malak and Mario Piattini
Abstract:  
Technological advances and the use of the Internet have favoured the appearance of a great diversity of web applications, among them Web portals. Through them, organizations develop their businesses in a highly competitive environment. One decisive factor for this competitiveness is the assurance of data quality. In previous work, a data quality model for Web portals was developed. The model is represented as a matrix that links user expectations of web data quality to portal functionalities; into this matrix a set of 34 attributes was classified. However, the quality attributes in this model lack the operational structure necessary for actual assessment. In this paper we present how we have structured these attributes by means of a probabilistic approach, using Bayesian Networks. The final objective is to use the resulting Bayesian network to evaluate the quality of a data portal (or a subset of its characteristics).
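
For readers unfamiliar with the mechanics, a toy network of this kind can be assembled and queried as below. The sketch uses the pgmpy library (class names per recent pgmpy releases); the two-attribute structure and all probabilities are invented for illustration and bear no relation to the paper's 34-attribute model:

    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Two leaf attributes feed one quality characteristic (toy structure).
    model = BayesianNetwork([("Accuracy", "DataQuality"),
                             ("Currency", "DataQuality")])
    model.add_cpds(
        TabularCPD("Accuracy", 2, [[0.3], [0.7]]),
        TabularCPD("Currency", 2, [[0.4], [0.6]]),
        TabularCPD("DataQuality", 2,
                   [[0.9, 0.6, 0.7, 0.1],    # P(DQ = low  | Acc, Cur)
                    [0.1, 0.4, 0.3, 0.9]],   # P(DQ = high | Acc, Cur)
                   evidence=["Accuracy", "Currency"], evidence_card=[2, 2]))
    print(VariableElimination(model).query(["DataQuality"],
                                           evidence={"Accuracy": 1}))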

Title:  
A DETECTION METHOD OF STAGNATION SYMPTOMS BY USING PROJECT PROGRESS MODELS GENERATED FROM PROJECT REPORTS
Author(s):  
Satoshi Tsuji, Yoshitmo Ikkai and Masanori Akiyoshi
Abstract:  
The purpose of this research is to extract "stagnation symptoms" from progress reports about research projects. A stagnation symptom is defined as a portion where remarkable stagnation is seen in a project's progress. Concretely, according to project managers, stagnation symptoms can be classified into three kinds: the first is a bottleneck of the project grasped from a single document, the second is clarified by comparison with the most recent document, and the third is clarified from changes of the working object across a series of documents. We propose a method to extract stagnation symptoms through structural analysis of project progress. A progress model, a structural chart expressing the progress of a project, is generated from documents annotated beforehand with label tags indicating contexts or attributes; it combines a multilevel layer model organized by degree of detail, situation analysis by color, and relation analysis of details and basis by propagation of color. Stagnation symptoms are automatically extracted by applying stagnation symptom extraction rules to the progress model. The proposed method has been applied to a set of real progress reports, where it extracted stagnation symptoms consistent with those identified manually.

Area 5 - Knowledge Engineering
Title:  
MINING OF COMPLEX OBJECTS VIA DESCRIPTION CLUSTERING
Author(s):  
Alejandro García López, Rafael Berlanga and Roxana Danger
Abstract:  
In this work we present a formal framework for mining complex objects, i.e. objects characterised by a set of attributes and their corresponding values. First, we give an introduction to the various Data Mining techniques available in the literature for extracting association rules, show some of the drawbacks of these techniques, and explain how our proposed solution tackles them. We then show how applying a clustering algorithm as a pre-processing step on the data allows us to find groups of attributes and objects that provide a richer starting point for the Data Mining process. Next, we define the formal framework, its decision functions and its interestingness measures, as well as newly designed Data Mining algorithms specifically tuned for our objective. We also show the type of knowledge to be extracted, in the form of a set of association rules. Finally, we state our conclusions and propose future work.

Title:  
APPROXIMATE REASONING TO LEARN CLASSIFICATION RULES
Author(s):  
Amel Borgi
Abstract:  
In this paper, we propose an original use of approximate reasoning, not only as a mode of inference but also as a means to refine a learning process. This work is done within the framework of the supervised learning method SUCRAGE, which is based on the automatic generation of classification rules. Production rules, whose conclusions are accompanied by belief degrees, are obtained by supervised learning from a training set. These rules are then exploited by a basic inference engine: it fires only the rules that the new observation to classify matches exactly. To introduce more flexibility, this engine was extended to an approximate inference which allows rules not too far from the new observation to fire. In this paper, we propose to use approximate reasoning to generate new rules with widened premises: thus the imprecision of the observations is taken into account and problems due to the discretization of continuous attributes are eased. The objective is then to exploit the new rule base with a basic inference engine, which is easier to interpret. The proposed method was implemented and experimental tests were carried out.

Title:  
A PATTERN SELECTION ALGORITHM IN KERNEL PCA APPLICATIONS
Author(s):  
Ruixin Yang, John Tan and Menas Kafatos
Abstract:  
Principal Component Analysis (PCA) has been extensively used in different fields, including earth science, for spatial pattern identification. However, the intrinsically linear nature of standard PCA prevents scientists from detecting nonlinear structures. Kernel-based principal component analysis (KPCA), a recently emerging technique, provides a new approach for exploring and identifying nonlinear patterns in scientific data. In this paper, we recast KPCA in the PCA notation commonly used in the earth science community and demonstrate how to apply the KPCA technique to the analysis of earth science data sets. In such applications, a large number of principal components should be retained for studying the spatial patterns, while the variance cannot be quantitatively transferred from the feature space back into the input space. Therefore, we propose a KPCA pattern selection algorithm based on correlations with a given geophysical phenomenon. We demonstrate the algorithm with two data sets widely used in the geophysical community, namely the Normalized Difference Vegetation Index (NDVI) and the Southern Oscillation Index (SOI). The results indicate that the new KPCA algorithm can reveal more significant details in spatial patterns than standard PCA.
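
The selection step can be paraphrased in a few lines: project the data with KPCA, then rank components by the absolute correlation of their time series with the index of the phenomenon of interest. The random arrays below are stand-ins for gridded NDVI fields and the SOI series, and the RBF kernel is our assumption:

    import numpy as np
    from sklearn.decomposition import KernelPCA

    def select_components(X, index, n_components=20, top=3):
        """Rank kernel principal components by |correlation| with a given
        geophysical index and return the indices of the strongest ones."""
        Z = KernelPCA(n_components=n_components, kernel="rbf").fit_transform(X)
        corr = [abs(np.corrcoef(Z[:, j], index)[0, 1]) for j in range(Z.shape[1])]
        return np.argsort(corr)[::-1][:top]

    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 500))   # 120 months x 500 grid cells (stand-in)
    soi = rng.normal(size=120)        # stand-in for the SOI time series
    print(select_components(X, soi))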

Title:  
A RETRIEVAL METHOD OF SIMILAR QUESTION ARTICLES FROM WEB BULLETIN BOARD
Author(s):  
Yohei Sakurai, Soichiro Miyazaki and Masanori Akiyoshi
Abstract:  
This paper proposes a retrieval method for similar question articles on Web bulletin boards, which basically uses a cosine similarity index computed between a user's query sentence and the articles' question sentences. Since these sentences are mostly short, it is difficult to distinguish whether question sentences are similar to a user's query simply by applying the conventional cosine similarity index. Therefore, our method modifies the elements of the word vector used in the cosine similarity index, deriving them from the sentence structure in terms of the common and non-common words between the user's query sentence and the articles' question sentences. Experiments indicate that our proposed method is effective.
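
A minimal sketch of the modification: words shared by the query and an article are up-weighted before computing the cosine. The specific weights are our assumptions; the paper derives the modified vector elements from sentence structure:

    import math

    def weighted_cosine(query, article, w_common=2.0, w_other=1.0):
        """Cosine similarity over word vectors in which words common to
        both sentences weigh more than non-common words."""
        q, a = query.lower().split(), article.lower().split()
        vocab, common = set(q) | set(a), set(q) & set(a)
        def vec(words):
            return {w: (w_common if w in common else w_other) * words.count(w)
                    for w in vocab}
        vq, va = vec(q), vec(a)
        dot = sum(vq[w] * va[w] for w in vocab)
        norm = (math.sqrt(sum(v * v for v in vq.values()))
                * math.sqrt(sum(v * v for v in va.values())))
        return dot / norm if norm else 0.0

    print(weighted_cosine("how to reset my password",
                          "password reset does not work for my account"))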

Title:  
AN EXTRACTION METHOD OF TIME-SERIES NUMERICAL DATA FROM ENTERPRISE PRESS RELEASES
Author(s):  
Masanori Akiyoshi, Mayu Gen, Masaki Samejima and Norihisa Komoda
Abstract:  
This paper addresses a method for extracting time-series numerical data from enterprise press releases for business strategy design. Business strategy consists of logical actions for continuously producing enterprise outcomes. The business strategy design process, which is partially based on competitive environment analysis, has so far relied heavily on professional skills. To enhance and accelerate competitive environment analysis, we focus on the press releases of competitors in order to extract numerical data related to products or services. Sentences in press releases are well organized and grammatically correct; therefore, such extraction can be done simply by identifying co-occurrences of product or service keywords with unit indicators. In addition to this simple approach, we clarify specific rules for applying our method to practical press releases.
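
The keyword/unit co-occurrence extraction can be sketched with a single regular expression. The keyword and unit lists below are assumed examples, and the paper's additional rules for practical press releases are omitted:

    import re

    KEYWORDS = r"(subscribers|shipments|revenue)"      # assumed keyword list
    PATTERN = re.compile(
        rf"{KEYWORDS}\D{{0,40}}?([\d,.]+)\s*(million|billion|units|yen|dollars)",
        re.IGNORECASE)

    text = ("The company announced that subscribers reached 1,200,000 units "
            "in March, while revenue grew to 34.5 billion yen.")
    for keyword, number, unit in PATTERN.findall(text):
        print(keyword, number, unit)   # e.g. revenue 34.5 billion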

Title:  
PARTNER ASSESSMENT USING MADM AND ONTOLOGY FOR TELECOM OPERATORS
Author(s):  
Long Zhang and Xiaoyan Chen
Abstract:  
Nowadays, the revenue of telecom operators generated by traditional services has declined dramatically, while value-added services involving third-party value-added service providers (partners) are becoming the most prominent source of revenue growth. To regulate the behaviour of partners and enable operators to select the best service for end users among the services from different providers, a flexible partner assessment framework is required. This paper 1) presents a flexible partner assessment framework based on the Multiple Attribute Decision Making (MADM) method, allowing telecom operators to adapt to the changing requirements of value-added services; and 2) proposes an ontology to model the complicated relationships among assessment factors, achieving high extensibility for the continually growing decision knowledge used in partner assessment. Our study shows that the adopted method and the proposed system can handle the partner assessment problem and reasonably support service selection in the telecom industry.
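
As an illustration of the MADM side, the sketch below applies simple additive weighting, one common MADM scheme (the paper does not commit to this exact method): normalize each assessment attribute, invert cost-type attributes such as complaint rate, and rank partners by weighted sum. All attributes, scores and weights are invented:

    import numpy as np

    # Rows: candidate partners; columns: assumed assessment attributes
    # (service quality, complaint rate, revenue share).
    scores = np.array([[0.8, 0.2, 0.6],
                       [0.6, 0.1, 0.9],
                       [0.9, 0.5, 0.4]])
    benefit = np.array([True, False, True])   # complaint rate: lower is better
    weights = np.array([0.5, 0.2, 0.3])       # assumed attribute weights

    norm = scores / scores.max(axis=0)                        # benefit criteria
    norm[:, ~benefit] = scores[:, ~benefit].min(axis=0) / scores[:, ~benefit]
    ranking = (norm * weights).sum(axis=1)
    print(ranking.argsort()[::-1])            # partner indices, best first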

Title:  
GEOSPATIAL PUBLISHING - Creating and Managing Geo-Tagged Knowledge Repositories
Author(s):  
Arno Scharl
Abstract:  
International media have recognized the potential of geo-browsers such as NASA World Wind and Google Earth, for example when Web and television coverage of hurricane “Katrina” used interactive geospatial projections to illustrate its path and the scale of destruction. Yet these early applications only hint at the true potential of geo-browsing technology to build and maintain virtual communities, and to revolutionize the production, distribution and consumption of media products. Investigating this potential, this paper discusses geospatial publishing with a special focus on extracting geospatial context from unstructured textual resources. A content analysis of online coverage based on a suite of text mining tools sheds light on the popularity and adoption of geo-browsing platforms. While such platforms might help enrich a company’s portfolio of media products, they also pose a threat to existing players by attracting new competitors, e.g. independent providers of geospatial metadata or location-based services.

Title:  
MODELLING AND MANAGING KNOWLEDGE THROUGH DIALOGUE: A MODEL OF COMMUNICATION-BASED KNOWLEDGE MANAGEMENT
Author(s):  
Violaine Prince
Abstract:  
In this paper, we describe a model that relies on the following assumption: ontology negotiation and creation is necessary to make knowledge sharing and KM successful through communication. We mostly focus on the modifying process, i.e. dialogue, and we show that a dynamic modification of agents' knowledge bases can occur through message exchanges, messages being knowledge chunks to be mapped onto the agents' knowledge bases. Dialogue takes into account both success and failure in this mapping. We show that the same process helps repair its own anomalies. We describe an architecture for agent knowledge exchange through dialogue, an instantiation of which was previously presented in the ICEIS 2005 proceedings. Finally, we conclude on the benefits of introducing dialogue features in knowledge management.

Title:  
A VIEW ON THE WEB ENGINEERING NATURE OF WEB BASED EXPERT SYSTEMS
Author(s):  
Ioannis M. Dokas and Alexandre Alapetite
Abstract:  
The Web has become the ubiquitous platform for distributing information and computer services. Tough Web competition, the way people and organizations rely on Web applications, and increasing user requirements for better services have raised their complexity. Expert systems can be accessed via the Web, forming a set of Web applications known as Web based expert systems. This paper argues that Web engineering and expert system principles should be combined when developing Web based expert systems. A development process model is presented that illustrates, in brief, how these principles can be combined. Based on this model, a publicly available Web based expert system called Landfill Operation Management Advisor (LOMA) was developed. In addition, the results of an accessibility evaluation of LOMA – the first ever reported for Web based expert systems – are presented. Based on this evaluation, some thoughts on accessibility guidelines specific to Web based expert systems are reported.

Title:  
SPECIFICATION OF DEPENDENCIES FOR IT SERVICE FAULT MANAGEMENT
Author(s):  
Andreas Hanemann, David Schmitz, Patricia Marcu and Martin Sailer
Abstract:  
The provisioning of IT services is often based on a variety of resources and underlying services. To deal with this complexity, the dependencies between these elements have to be well known. In particular, dependencies are needed for tracking a failure in a higher-level service offered to customers down to the provisioning infrastructure. Another use of dependencies is estimating the impact of an assumed or actual resource failure on the services, to allow for decision making about appropriate measures. Starting from a real-world service provisioning scenario, a set of requirements is derived in this paper which has to be addressed by the modeling of dependencies within a service configuration management solution. A subsequent analysis of the state of the art shows the contributions and limitations of existing research approaches and industry tools. To cope with the requirements found earlier, a methodology is proposed to model dependencies for given service provisioning scenarios. Afterwards, some examples are provided for the real-world scenario. The proposed dependency modeling is part of a larger solution for an overall service management information repository.

Title:  
A MULTI-AGENT MODEL FOR SHARING AND EXCHANGING KNOWLEDGE IN COMMUNITIES OF PRACTICE
Author(s):  
Kenfack Clauvice
Abstract:  
This paper attempts to show how communities of practice support knowledge transfer. We focus on the elaboration of a framework to analyse and explain the logic behind the ways communities of practice function, qualifying this concept as an abstract grouping for knowledge creation. By adopting a processual perspective, we present the mechanisms of knowledge sharing within a community of practice (CoP). A community of practice is a collection of agents (human beings) who have rather strong common points, such as their level of social capacity, their competences, and their cognitive capacities. The development of exchanges rests on abstract boundaries; the knowledge/community pairing implies that exchanges of information take place through mechanisms of co-operation and negotiation and through a communication language specific to community members. However, the legitimacy of exchanged knowledge is recognized only through the interpersonal confidence that interactions progressively create. Moreover, these exchanges take place only through rules and standards established by the members as a whole. After recalling the theoretical bases of the community of practice concept and of knowledge sharing mechanisms, we present a simulation approach to knowledge sharing within communities of practice using the multi-agent systems paradigm.

Title:  
SOME SPECIFIC HEURISTICS FOR SITUATION CLUSTERING PROBLEMS
Author(s):  
Boris Melnikov, Alexey Radionov, Andrey Moseev and Elena Melnikova
Abstract:  
The present work is a continuation of several of the authors' preceding works dedicated to a specific multi-heuristic approach to discrete optimization problems. This paper considers those aspects of the multi-heuristic approach which relate to problems of clustering situations; in particular, it describes the authors' approach to such problems and the specific heuristics designed for them. We describe a particular example from the group of "Hierarchical Clustering Algorithms" which we use for clustering situations, and we also describe some common methods and algorithms related to such clustering. Two examples of metrics on sets of situations are given for two different problems: one is a classical discrete optimization problem and the other is a game-playing programming problem.

Title:  
ROA MODULAR LDAP-BASED APPROACH TO INDUSTRIAL SOFTWARE REVISION CONTROL
Author(s):  
Cristina De Castro and Paolo Toppan
Abstract:  
A software revision control system stores and manages successive, revised versions of applications, so that every design stage can be easily backtracked. In an industrial context, revision control concerns the evolution of software installed on complex systems and plants, where the need for revision is likely to arise from many different and correlated factors. The concept of revision control must thus be merged into this broader framework, in order to represent all the aspects defining a plant installation's lifecycle. In this paper, some schemes are discussed for the definition of a software revision information system representing such factors, and an LDAP-based architecture is presented for modelling and storing their evolution.

Title:  
A KNOWLEDGE ACQUISITION PROCESS FOR SOFTWARE DEVELOPMENT
Author(s):  
Sandro Ronaldo Bezerra Oliveira, Alexandre Marcos Lins de Vasconcelos, Albérico Lima de Pena Júnior and Lúcio Câmara e Silva
Abstract:  
Knowledge must be managed efficiently through its capture, maintenance and dissemination in an organization. However, knowledge related to the execution of business processes is distributed across documents, corporate systems and key members' minds, making the access, preservation and distribution of this knowledge to other members difficult. In this context, systematic knowledge acquisition processes are necessary to acquire and preserve organizational knowledge. This work presents a process for acquiring organization members' tacit and explicit knowledge related to business processes, and the functionalities of a tool developed to support the execution of this process in a software development context. This tool is part of a software process implementation environment, called ImPProS, developed at CIn/UFPE – Center of Informatics/Federal University of Pernambuco.

Workshop on Metamodelling – Utilization in Software Engineering (MUSE 2006)
Title:  
A PROCESS META-MODEL IN A GRADUAL SOFTWARE PROCESS IMPLEMENTATION ENVIRONMENT - Process Meta-Model for a Software Process Definition and Improvement
Author(s):  
Sandro Ronaldo Bezerra Oliveira, Alexandre Marcos Lins de Vasconcelos, José Francisco Pereira and Igor Cavalcanti Ramos
Abstract:  
A PSEE (Process-centered Software Engineering Environment) aims, among other things, to automate the phases of the software process life cycle (definition, simulation, enactment and evaluation). This work presents the structure and automation of a software process meta-model capable of grouping process terminologies based on quality models/norms and of helping in the implementation and refinement of these types of processes. This implementation must be carried out from the characteristics and properties that define an organization or a specific software project domain. The meta-model services were automated through a tool, and their description can be found in this paper.

Title:  
A META-MODELLING APPROACH TO EXPRESS CHANGE REQUIREMENTS
Author(s):  
Anne Etien, Colette Rolland, Camille Salinesi
Abstract:  
Organisations have to evolve frequently in order to remain competitive and to take into account changes in their environment. We develop a co-evolution approach to jointly evolve the information system and the business processes. This approach relies on an explicit specification of change requirements, defined with operators expressing gaps between the As-Is and the To-Be situations. However, such a gap-based approach can also be used in other evolution contexts, for example when a database or a workflow model evolves. Thus, instead of specifying new operators associated with the Map meta-model used in this co-evolution approach, we propose to define a generic typology of gaps to facilitate a precise definition of change requirements in the form of gaps. The paper presents the approach used to generate a gap typology and illustrates it with the Map meta-model.

Title:  
MODEL-DRIVEN HMI DEVELOPMENT – CAN META-CASE TOOLS RELIEVE THE PAIN?
Author(s):  
Detlef Zuehlke and Carsten Bock
Abstract:  
Today, metamodeling and domain-specific languages represent promising ways to create non-generic tool support for individual modeling tasks. Due to the inherent complexity and numerous variants of human-machine interfaces (HMIs), model-driven development is becoming increasingly interesting for manufacturers and suppliers in the automobile industry. In particular, the development of powerful user interfaces requires appropriate development processes as well as easy-to-use software tools. Since suitable tool kits are missing in the field of HMI development, this paper describes the utilization of visual domain-specific languages for model-driven useware engineering in general and for the model-based specification of automotive HMIs in particular. Moreover, results from a survey among developers are presented, revealing the requirements for HMI-specific tool support. Additionally, experiences with using current meta-CASE tools as well as standard office applications for creating a visual domain-specific language are presented. Based on these experiences, requirements for future meta-CASE tools are derived.

Title:  
A GENERIC MODEL FOR CONNECTING MODELS IN A MULTILEVEL MODELLING FRAMEWORK
Author(s):  
Jan Pettersen Nytun
Abstract:  
In science and elsewhere, models are woven together to form complex knowledge structures. This article presents a generic way of connecting models both vertically and horizontally in a multilevel modelling framework. One model can be connected vertically to several models, allowing a model element to be an instance of several metaclasses, so that different views can be managed in an integrated way. Models at the same level can also be connected by defining correspondences between model elements. We consider a model to be a graph composed of structure (form) and symbols (names); symbols identify structure inside and/or outside the model. A set of symbols belonging to a model can form a border; two models are connected when one border from each model is connected, and two borders are connected when a correspondence is established between their symbols. A model is to a large extent defined by the role it plays in relation to what it models, and such a role can in turn be described with models. When two models are connected, each of the two borders involved can have a model connected to it that in some sense describes the border; likewise, the correspondence between two symbols can have a model describing the correspondence, and these models can again be connected in some way.
