CONFERENCE
Area 1 - Programming Languages  
Area 2 - Software Engineering  
Area 3 - Distributed and Parallel Systems  
Area 4 - Information Systems and Data Management  
Area 5 - Knowledge Engineering  

 
WORKSHOPS
Workshop on e-Health Services and Technologies (EHST 2008)  
Workshop on Architectures, Concepts and Technologies for Service Oriented Computing (ACT4SOC)  
SPECIAL SESSIONS  
Metamodelling Utilization in Software Engineering (MUSE 2008)
Special Session on Global Software Development: Challenges and Advances
Special Session on Applications in Banking and Finances
DOCTORAL CONSORTIUM

 
Area 1 - Programming Languages  
Title:  
LANGUAGE-NEUTRAL SUPPORT OF DYNAMIC INHERITANCE
Author(s):  
Jose Manuel Redondo, Francisco Ortin and J. Baltasar Garcia Perez-Schofield
Abstract:  
Virtual machines have been successfully applied in diverse scenarios to obtain several benefits. Application interoperability and distribution, code portability, and improved runtime performance of programs are examples of these benefits. Techniques like JIT compilation have improved virtual machine runtime performance, making virtual machines an adequate alternative for developing different types of software products. We have extended a production JIT-based virtual machine so that it offers low-level support for structural reflection, in order to obtain the aforementioned advantages in the implementation of dynamic languages.
As various dynamic languages offer support for dynamic inheritance, the next step in our research work is to enable this support in the aforementioned JIT-based virtual machine. Our approach enables dynamic inheritance in a language-neutral way, supporting both static and dynamic languages, so no language specification has to be modified to enable these features. It also enables static and dynamic languages to interoperate, since both kinds of languages are now supported at a low level by our machine.
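The core idea of dynamic inheritance can be illustrated with a minimal delegation sketch, in which an object's parent can be replaced at runtime, changing member lookup. This is an illustrative model only, not the virtual machine's actual low-level design:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of dynamic inheritance: member lookup walks a parent
// chain, and the parent link itself can be reassigned at runtime.
class DynObject {
    private final Map<String, Object> slots = new HashMap<>();
    private DynObject parent;

    void set(String name, Object value) { slots.put(name, value); }
    void setParent(DynObject p) { parent = p; }

    // Lookup: own slots first, then delegate to the current parent.
    Object get(String name) {
        if (slots.containsKey(name)) return slots.get(name);
        return parent != null ? parent.get(name) : null;
    }
}

public class DynDemo {
    public static void main(String[] args) {
        DynObject animal = new DynObject();
        animal.set("legs", 4);
        DynObject bird = new DynObject();
        bird.set("legs", 2);

        DynObject pet = new DynObject();
        pet.setParent(animal);
        System.out.println(pet.get("legs")); // 4 (inherited from animal)
        pet.setParent(bird);                 // inheritance changed at runtime
        System.out.println(pet.get("legs")); // 2 (now inherited from bird)
    }
}
```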

 
Title:  
SCALA ROLES - A Lightweight Approach towards Reusable Collaborations
Author(s):  
Michael Pradel and Martin Odersky
Abstract:  
Purely class-based implementations of object-oriented software are often inappropriate for reuse. In contrast, the notion of objects playing roles in a collaboration has proven to be a valuable reuse abstraction. However, existing solutions for role-based programming tend to require vast extensions of the underlying programming language and are thus difficult to use in everyday work. We present a programming technique based on dynamic proxies that makes it possible to augment an object's type at runtime while preserving strong static type safety. It enables role-based implementations that lead to more reuse and better separation of concerns.
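The general dynamic-proxy mechanism underlying this technique can be approximated in plain Java with java.lang.reflect.Proxy. This is a simplified sketch of the idea of attaching a role to a core object at runtime; the paper's Scala implementation differs, in particular in its static typing guarantees:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical example interfaces: a Person playing a Customer role.
interface Person { String name(); }
interface Customer { String discountCode(); }

public class RoleDemo {
    // Build a proxy combining a core object and a role object: calls
    // declared by the role's interface are dispatched to the role,
    // everything else goes to the core object.
    static Object withRole(Object core, Object role, Class<?>... ifaces) {
        InvocationHandler h = (proxy, method, args) -> {
            Object target =
                method.getDeclaringClass().isInstance(role) ? role : core;
            return method.invoke(target, args);
        };
        return Proxy.newProxyInstance(
            RoleDemo.class.getClassLoader(), ifaces, h);
    }

    public static void main(String[] args) {
        Person alice = () -> "Alice";
        Customer vipRole = () -> "VIP-7";
        Object both = withRole(alice, vipRole, Person.class, Customer.class);
        System.out.println(((Person) both).name());           // Alice
        System.out.println(((Customer) both).discountCode()); // VIP-7
    }
}
```

The proxy object is an instance of both interfaces at once, so it can be passed wherever either type is expected while the role remains a separate, reusable object.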

 
Title:  
A SOURCE CODE BASED MODEL TO GENERATE GUI - GUI Generation based on Source Code with Declarative Language Extensions
Author(s):  
Marco Monteiro, Paula Oliveira and Ramiro Gonçalves
Abstract:  

Due to the nature of data-driven applications and their increasing complexity, developing the user interface can be a repetitive and time-consuming activity. Consequently, developers tend to focus more on user interface aspects and less on business-related code. In this paper, we present an alternative approach to graphical user interface development for data-driven applications, where the key concept is the generation of a concrete graphical user interface from a source-code-based model. The model includes the original source code metadata and non-intrusive declarative language extensions that describe the user interface structure. Some Object Relational Mapping tools already use a similar concept to handle interoperability between the data layer and the business layer. Our approach applies the same concept to handle business and presentation layer interoperability.
Also, the concrete user interface implementation is delegated to specialized software packages, developed by external entities, that provide complete graphical user interface services to the application. When applying our approach, we expect faster graphical user interface development, allowing developers to refocus on the source code and concentrate their efforts on the application's core logic.


 
Title:  
ENSURING SAFE USAGE OF BUFFERS IN PROGRAMMING LANGUAGE C
Author(s):  
Milena Vujosevic-Janicic
Abstract:  

We consider the problem of buffer overflows in C programs. This problem is very important because buffer overflows are suitable targets for security attacks and sources of serious program misbehavior. Buffer overflow bugs can be detected at run-time by dynamic analysis, and before run-time by static analysis. In this paper we present a new static, modular approach for automated detection of buffer overflows. Our approach is flow-sensitive and inter-procedural, and it deals with both statically and dynamically allocated buffers. Its architecture is flexible and pluggable: for instance, for checking the generated correctness and incorrectness conditions, it can use any external automated theorem prover that follows the SMT-LIB standard. The system uses an external and easily extendable knowledge database that stores all the reasoning rules, so they are not hard-coded within the system. We also report on our prototype implementation, the FADO tool, and on its experimental results.
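The kind of correctness condition such a checker hands to an external prover can be sketched as follows. This is a hypothetical, simplified SMT-LIB-style emitter for a single array access, not the FADO tool's actual output format:

```java
// Toy verification-condition generator: for an access buf[i] where buf
// has size n, the access is safe iff 0 <= i < n. The condition is
// emitted as an SMT-LIB-style s-expression so an external solver could
// discharge it (names "i" and "n" are illustrative placeholders).
public class VcGen {
    static String accessCondition(String index, String size) {
        return "(and (<= 0 " + index + ") (< " + index + " " + size + "))";
    }

    public static void main(String[] args) {
        // Correctness condition for buf[i] with declared size n:
        System.out.println(accessCondition("i", "n"));
        // prints: (and (<= 0 i) (< i n))
    }
}
```

A real analysis would generate one such condition per buffer access, conjoin it with the path condition leading to the access, and report an overflow when the negation is satisfiable.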


 
Title:  
GENERIC TRAITS IN STATICALLY TYPED LANGUAGES - How to do It?
Author(s):  
Andreas Svendsen and Birger Møller-Pedersen
Abstract:  

Traits have been proposed as a code reuse mechanism for dynamically typed languages such as Squeak, an open source dialect of Smalltalk. The paper addresses the issues that come up when introducing traits in statically typed languages such as Java. These issues can be resolved by introducing generic traits, but at some cost. The paper studies three different generic mechanisms, Java Generics, Templates and Virtual Types, by implementing each of them in a preprocessor for Java. The three approaches are tested on a number of examples. Based on this, the paper gives a first answer to whether generic traits can be combined with statically typed languages, and which of the three generic mechanisms is most adequate.


 
Title:  
MODULAR CONCURRENCY - A New Approach to Manageable Software
Author(s):  
Peter Grogono and Brian Shearing
Abstract:  

Software systems bridge the gap between the information processing needs of the world and computer hardware. As system requirements grow in complexity and hardware evolves, the gap does not necessarily widen, but it undoubtedly changes. Although today's applications require concurrency and today's hardware provides concurrency, programming languages remain predominantly sequential. Concurrent programming is considered too difficult and too risky to be practiced by "ordinary programmers". Software development is moving towards a paradigm shift, following which concurrency will play a fundamental role in programming.
In this paper, we introduce an approach that we believe will reduce the difficulties of developing and maintaining certain kinds of concurrent software. Building on earlier work but applying modern insights, we propose a programming paradigm based on processes that exchange messages. Innovative features include scale-free program structure, extensible modular components with multiple interfaces, protocols that specify the form of messages, and separation of semantics and deployment. We suggest that it may be possible to provide the flexibility and expressiveness of programming with processes while bounding the complexity caused by nondeterminism.


 
Title:  
DSAW - A Dynamic and Static Aspect Weaving Platform
Author(s):  
Luis Vinuesa, Francisco Ortin, José M. Félix and Fernando Álvarez
Abstract:  

Aspect Oriented Software Development is an effective realization of the Separation of Concerns principle. A key issue in this paradigm is the moment when components and aspects are woven together to compose the final application. Static weaving tools perform application composition prior to execution. This approach precludes dynamic aspectation of running applications. In response to this limitation, dynamic weaving tools perform application composition at runtime. The main benefit of dynamic weaving is runtime adaptability; its main drawback is runtime performance.
Existing research has identified the suitability of hybrid approaches, obtaining the benefits of both methods in the same platform. Applying static weaving where possible and dynamic weaving when needed provides a balance between runtime performance and dynamic adaptability. This paper presents DSAW, an aspect-oriented system that supports both dynamic and static weaving homogeneously over the .Net platform. An aspect can be used to adapt an application both statically and dynamically, without needing to modify its source code. Moreover, DSAW is language and platform neutral, and source code of neither components nor aspects is required.


 
Title:  
SEPARATING PROGRAM SEMANTICS FROM DEPLOYMENT
Author(s):  
Nurudeen Lameed and Peter Grogono
Abstract:  

Designing software to adapt to changes in requirements and environment is a key step in preserving software investment. As time passes, applications often require enhancements because requirements or the hardware environment change. Mainstream programming languages lack suitable abstractions capable of providing the flexibility needed for the effective implementation, maintenance and refactoring of parallel and distributed systems. Software must be modified to match today's needs, but must not place even greater strain on software developers. Hence, software must be specially designed to accommodate future changes. This paper proposes an approach that facilitates software development and maintenance. In particular, it explains how the semantics of a program can be separated from its deployment onto multiprocessor or distributed systems. Through this approach, software investment may be preserved when new features are added or when functionality does not change but the environment does.


 
Title:  
senseGUI - A DECLARATIVE WAY OF GENERATING GRAPHICAL USER INTERFACES
Author(s):  
Mariusz Trzaska
Abstract:  
A declarative way of creating GUIs is also known as model-based generation. Most existing solutions require dedicated tools and quite complicated knowledge on the programmer's part, and they utilize special languages. In contrast, we propose a method which utilizes annotations existing in present programming languages. The method greatly improves the generation of common GUIs for popular languages. Annotations allow the programmer to mark particular parts of the source code defining class structures. Using such simple annotations, the programmer can describe basic properties of the desired GUI. In the simplest form it is enough just to mark the attributes (or methods) for which widgets should be created. There is also a way to define a more detailed description including labels, the order of items, different widgets for particular data items, etc. Using a generated form (GUI), the application user can create, edit and see instances of data objects. Our research is supported by a working prototype library called senseGUI (Java). The library is part of a bigger framework for managing objects (links, extents). Together with the senseGUI library (which can be used independently), the framework greatly reduces the programmer's effort to create efficient, usable and user-friendly applications.
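The marking idea can be sketched with a hypothetical Java annotation. The names GuiField, label and order below are illustrative assumptions, not senseGUI's actual API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical marker in the spirit of senseGUI: annotated fields get
// widgets, with an optional label and ordering.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface GuiField {
    String label() default "";
    int order() default 0;
}

class Contact {
    @GuiField(label = "Full name", order = 1) String name;
    @GuiField(label = "Age", order = 2) int age;
    String internalId; // unannotated: no widget is generated
}

public class FormSketch {
    // Collect the labels of annotated fields, sorted by declared order;
    // a real generator would build widgets instead of label strings.
    static List<String> widgetLabels(Class<?> cls) {
        List<Field> marked = new ArrayList<>();
        for (Field f : cls.getDeclaredFields())
            if (f.isAnnotationPresent(GuiField.class)) marked.add(f);
        marked.sort(Comparator.comparingInt(
            (Field f) -> f.getAnnotation(GuiField.class).order()));
        List<String> labels = new ArrayList<>();
        for (Field f : marked) labels.add(f.getAnnotation(GuiField.class).label());
        return labels;
    }

    public static void main(String[] args) {
        System.out.println(widgetLabels(Contact.class)); // [Full name, Age]
    }
}
```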

 
Title:  
ARIA LANGUAGE - Towards Agent Orientation Paradigm
Author(s):  
Mohsen Lesani and Niloufar Montazeri
Abstract:  

As building large-scale software systems is complex, several software engineering paradigms have been devised. The agent-oriented paradigm is one of the most predominant contributions to the field of software engineering and has the potential to significantly improve its current practice. The paradigm should be elaborated both practically and conceptually. Most existing agent-oriented frameworks do not offer agent definition languages but propose to define agents with the help of agent libraries in existing object-oriented languages. The few frameworks that do propose languages lack conceptual principles for agent orientation. The contribution of this paper is twofold. Firstly, an agent-oriented language called Aria and its compiler are proposed. Aria is a superset of Java, and the compiler compiles a program in Aria to an equivalent program in Java. This enables Aria to fully integrate with and preserve all the existing knowledge and code in Java. Secondly, the three well-known object-oriented principles of abstraction, inheritance and polymorphism are redefined for agent orientation. As a chat room is a distributed application, it was selected as a sample case and designed and developed successfully in Aria. In addition, an agent MVC architecture is offered as a second case.


 
Title:  
AN APPROACH TO THE DEVELOPMENT OF PROGRAMMING SOFTWARE FOR DISTRIBUTED COMPUTING AND INFORMATION PROCESSING SYSTEMS
Author(s):  
V. P. Kutepov, V. N. Malanin and N. A. Pankov
Abstract:  
The paper describes an original approach to the development of programming software for distributed and parallel computer systems. The approach includes a parallel programming language based on a dataflow model, as well as operational tools, including a load management solution, for its effective realization on large-scale parallel distributed systems.

 
Area 2 - Software Engineering  
Title:  
VERIFICATION OF SCENARIOS USING THE COMMON CRITERIA
Author(s):  
Atsushi Ohnishi
Abstract:  

Software is required to comply with the laws and standards of software security. However, stakeholders with less concern regarding security can neither describe the behaviour of the system with regard to security nor validate the system's behaviour when a security function conflicts with usability. Scenarios or use-case specifications are common in requirements elicitation and are useful for analyzing the usability of a system from a behavioural point of view. In this paper, the authors propose both (1) a scenario language based on a simple case grammar and (2) a method to verify a scenario against rules based on security evaluation criteria.


 
Title:  
FROM UML TO ANSI-C - An Eclipse-based Code Generation Framework
Author(s):  
Mathias Funk, Alexander Nyßen and Horst Lichter
Abstract:  

Model-driven engineering has recently gained broad acceptance in the field of embedded and real-time software systems. While larger embedded and real-time systems, developed e.g. in the aerospace, telecommunication, or automotive industries, are quite well supported by model-driven engineering approaches based on the UML, small embedded and real-time systems, as found for example in the industrial automation industry, are still rather neglected. A major reason for this is that the code generation facilities offered by most UML modeling tools on the market do indeed support C/C++ code generation in all its particulars, but neglect the generation of plain ANSI-C code. This, however, is what would be needed for small embedded and real-time systems, which have special characteristics in terms of hard time and space constraints.
We therefore developed a framework that generates ANSI-conformant C code from UML models. It is built on top of Eclipse technology, so that it can be integrated easily with available UML modeling tools. With our work we hope to increase the applicability of model-driven engineering (MDE) approaches in the field of small embedded and real-time systems, and to make this promising approach a valid option in this rather special domain.


 
Title:  
EMPIRICAL ASSESSMENT OF EXECUTION TRACE SEGMENTATION IN REVERSE-ENGINEERING
Author(s):  
Philippe Dugerdil and Sebastien Jossi
Abstract:  

Reverse-engineering methods using dynamic techniques rest on the post-mortem analysis of the execution trace of the programs. However, one key problem is coping with the amount of data to process: such a trace file can contain hundreds of thousands of events. To cope with this data volume, we recently developed a trace segmentation technique. This lets us compute the correlation between classes and identify clusters of closely correlated classes. However, no systematic study of the quality of the clusters has been conducted so far. In this paper we present a quantitative study of the performance of our technique with respect to the chosen parameters of the method. We then highlight the need for a benchmark and present the framework for the study. Then we discuss the matching metrics and present the results we obtained on the analysis of two very large execution traces. Finally, we define a clustering quality metric to identify the parameters providing the best results.


 
Title:  
RESOURCE SUBSTITUTION WITH COMPONENTS - Optimizing Energy Consumption
Author(s):  
Christian Bunse and Hagen Höpfner
Abstract:  

Software development for mobile systems is becoming increasingly complex. Besides enhanced functionality, the resource scarcity of devices is a major reason. The relatively high energy requirements of such systems are a limiting factor due to reduced operating times. Reducing the energy consumption of mobile devices in order to prolong their operation time has thus been an interesting research topic in past years. Interestingly, the focus has mostly been on hardware optimization, energy profiles, or techniques such as "Micro-Energy Harvesting". Only recently has the impact of software on energy consumption, through optimized use of resources, moved into the center of attention. Extensive wireless data transmissions, which are expensive, slow, and energy intensive, can, for example, be reduced if mobile clients locally cache received data. Unfortunately, optimization at compile time is often inefficient, since the optimal use of existing resources cannot really be foreseen. This paper discusses and applies novel strategies that allow systems to adapt dynamically at runtime. The focus is on resource substitution strategies that allow achieving a certain Quality-of-Service while sticking to a given energy limit.


 
Title:  
USER GUIDANCE OF RESOURCE-ADAPTIVE SYSTEMS
Author(s):  
João Pedro Sousa, Rajesh Krishna Balan, Vahe Poladian, David Garlan and Mahadev Satyanarayanan
Abstract:  

This paper presents a framework for engineering resource-adaptive software systems targeted at small mobile devices. The proposed framework empowers users to control tradeoffs among a rich set of service-specific aspects of quality of service. After motivating the problem, the paper proposes a model for capturing user preferences with respect to quality of service, and illustrates prototype user interfaces to elicit such models. The paper then describes the extensions and integration work made to accommodate the proposed framework on top of an existing software infrastructure for ubiquitous computing.
The research question addressed here is the feasibility of coordinating resource allocation and adaptation policies in a way that end-users can understand and control in real time. The evaluation covered both the systems and the usability perspectives, the latter by means of a user study. The contributions of this work are: first, a set of design guidelines for resource-adaptive systems, including APIs for integrating new applications; second, a concrete infrastructure that implements the guidelines; and third, a way to model quality of service tradeoffs based on utility theory, which our research indicates end-users with diverse backgrounds are able to leverage for guiding adaptive behaviors towards activity-specific quality goals.


 
Title:  
FAULTS ANALYSIS IN DISTRIBUTED SYSTEMS - Quantitative Estimation of Reliability and Resource Requirements
Author(s):  
Christian Dauer Thorenfeldt Sellberg, Michael R. Hansen and Paul Fischer
Abstract:  

We live in a time where we become ever more dependent on distributed computing. Predictable quantitative properties of the reliability and resource requirements of these systems are of utmost importance. But today such quantitative properties can only be established after the systems are implemented and released for test, at which point problems can be costly and time-consuming to solve. We present a new method, a process algebra and a simulation tool for estimating quantitative properties of the reliability and resource requirements of a distributed system with complex behaviour, including complex fault-tolerance behaviour. The simulation tool allows tailored fault injection, e.g. random failures and attacks. The method is based upon the pi-calculus (Milner, 1999), to which it adds a stochastic failable process group construct. Performance is quantitatively estimated using reaction rates (Priami, 1995). We show how to model and estimate quantitative properties of a CPU-scavenging grid with fault tolerance. To emphasize the expressiveness of our language, called Gpi, we provide design patterns for encoding higher-order functions, object-oriented classes, process translocation, conditional loops and conditional control flow. The design patterns are used to implement linked lists, higher-order list functions and binary algebra. The focus of the paper is on practical application.


 
Title:  
DESIGN ACTIVITIES FOR SUPPORTING THE EVOLUTION OF SERVICE-ORIENTED ARCHITECTURE
Author(s):  
Dionisis X. Adamopoulos
Abstract:  

The advent of deregulation combined with new opportunities opened by advances in telecommunications technologies has significantly changed the paradigm of telecommunications services, leading to a dramatic increase in the number and type of services that telecommunication companies can offer. Building new advanced multimedia telecommunications services in a distributed and heterogeneous environment is very difficult, unless there is a methodology to support the entire service development process in a structured and systematic manner, and assist and constrain service designers and developers by setting out goals and providing specific means to achieve these goals. Therefore, in this paper, after a brief presentation of a proposed service creation methodology, its service design phase is examined in detail focusing on the essential activities and artifacts. In this process, the exploitation of important service engineering techniques and UML modelling principles is especially considered. Finally, alternative and complementary approaches for service design are highlighted and a validation attempt is briefly outlined.


 
Title:  
ORACLE SECUREFILES - A Filesystem Architecture in Oracle Database Server
Author(s):  
Niloy Mukherjee, Amit Ganesh, Krishna Kuchithapadam and Sujatha Muthulingam
Abstract:  

Over the last decade, the nature of content stored on computer storage systems has evolved from being relational to being semi-structured, i.e., unstructured data accompanied by relational metadata. Average data volumes have increased from a few hundred megabytes to hundreds of terabytes. Simultaneously, data feed rates have increased along with processor, storage and network bandwidths. Data growth trends seem to be following Moore's law, implying an exponential explosion in content volumes and rates in the years to come. We introduce Oracle SecureFiles, a storage architecture designed to provide highly scalable storage and access execution of unstructured and structured content as first-class objects within the Oracle relational database management system. Oracle SecureFiles breaks the performance barrier that has been keeping unstructured content out of databases. The architecture maximizes storage utilization through compression and deduplication, and preserves data management robustness through Oracle database server features such as transactional atomicity, durability, availability, read-consistent queryability and security.


 
Title:  
QUALITY AND VALUE ANALYSIS OF SOFTWARE PRODUCT LINE ARCHITECTURES
Author(s):  
Liliana Dobrica and Eila Niemela
Abstract:  
The concern of a systematic analysis of a software product line architecture is how to take better advantage of views and analyze value and quality attributes in an organized and repeatable way. In this approach, architecture descriptions evolve from the conceptual level to a more concrete level. Architecture analysis at the conceptual level provides a knowledge base of the domain architecture, enabling a more comprehensive analysis of quality attributes at the concrete level of description. Concrete architecture descriptions permit more relevant and accurate scenario-based analysis results for the development of quality attributes such as portability and adaptability.

 
Title:  
ON THE CLARIFICATION OF THE SEMANTICS OF THE EXTEND RELATIONSHIP IN USE CASE MODELS
Author(s):  
Miguel A. Laguna and José M. Marqués
Abstract:  

Use cases are a useful and simple technique to express the expected behavior of an information system in successful scenarios or in exceptional circumstances. The weakness of use cases has always been the vague semantics of the relationships, in particular the extend relationship. The main contribution of this article is an attempt to clarify the different interpretations that can be adopted. A major revision of the UML standard would be impractical, but the extension point concept could be completed by including minimum and maximum multiplicity attributes. With these minor changes, the legal combinations of base/extending use cases in requirements models would be unequivocally defined, and the ambiguity of the original UML models would be removed.


 
Title:  
ANALYZING IMPACT OF INTERFACE IMPLEMENTATION EFFORTS ON THE STRUCTURE OF A SOFTWARE MARKET - OSS/BSS Market Polarization Scenario
Author(s):  
Oleksiy Mazhelis, Pasi Tyrväinen and Jarmo Matilainen
Abstract:  
A vertical software market usually undergoes a process of vertical disintegration, resulting in several software layers provided by independent software vendors. However, as argued in this paper, the process of vertical disintegration may be affected by the high effort of software interface implementation and maintenance. Should the required effort be large, a threshold for entering the market emerges, thereby hampering the vertical disintegration process. In the paper, the impact of interface implementation efforts on vertical market evolution is studied in the case of so-called operations support systems and business support systems (OSS/BSS) software, which is employed by telecom operators to support their daily operations. The efforts are compared for two prototypical software vendors, serving incumbent operators and new operators respectively. Total efforts are an order of magnitude larger in the former case. Furthermore, even if only the latest protocols are taken into account, the efforts remain significantly larger in the former case, requiring a several times greater number of employees to implement them. Therefore, the conclusion is drawn that the OSS/BSS market is likely to polarize into a vertical submarket of large software vendors serving incumbent operators and a submarket of small vendors serving young operators. The latter submarket, due to its lower entry threshold for new vendors, is more likely to be vertically disintegrated.

 
Title:  
LOCALIZING BUGS IN PROGRAMS - Or How to Use a Program’s Constraint Representation for Software Debugging?
Author(s):  
Franz Wotawa
Abstract:  

The use of a program's constraint representation for various purposes, like testing and verification, is not new. In this paper, we extend the applicability of constraint representations to fault localization. Given the source code of a program and a test case, which specifies the input parameters and the expected output, we are interested in localizing the root cause of the revealed misbehavior. We first show how programs can be compiled into their constraint representations. Based on the constraint representation, we show how to compute root causes using a constraint solver. Moreover, we discuss how the approach can be integrated with program assertions and unit tests.
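A toy version of the idea can be sketched as follows, with a brute-force search standing in for the constraint solver. The two-statement program and its encoding are illustrative only, not the paper's actual compilation scheme:

```java
// The program  d = a * b; r = d + c;  is compiled into two constraints
// in SSA style: (1) d == a * b, and (2) r == d + c. Given a failing
// test (inputs plus expected output), a statement is a root-cause
// candidate if dropping its constraint makes the rest satisfiable.
public class ConstraintLoc {
    static boolean c1(int a, int b, int d) { return d == a * b; } // stmt 1
    static boolean c2(int d, int c, int r) { return r == d + c; } // stmt 2

    // suspect = 0 drops nothing; suspect = 1 or 2 retracts that
    // statement's constraint. A tiny bounded search replaces the solver.
    static boolean consistentWithout(int suspect, int a, int b, int c, int expected) {
        for (int d = -1000; d <= 1000; d++) {
            boolean ok1 = suspect == 1 || c1(a, b, d);
            boolean ok2 = suspect == 2 || c2(d, c, expected);
            if (ok1 && ok2) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Failing test: a=2, b=3, c=4, expected r=11 (the program yields 10).
        System.out.println(consistentWithout(0, 2, 3, 4, 11)); // false: fault confirmed
        System.out.println(consistentWithout(1, 2, 3, 4, 11)); // true: stmt 1 is a candidate
        System.out.println(consistentWithout(2, 2, 3, 4, 11)); // true: stmt 2 is a candidate
    }
}
```

With no statement retracted the constraints contradict the expected output, confirming the failure; each statement whose retraction restores consistency is reported as a possible root cause.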


 
Title:  
IMPROVING THE SECURITY OF MOBILE-PHONE ACCESS TO REMOTE PERSONAL COMPUTERS
Author(s):  
Alireza P. Sabzevar and João Pedro Sousa
Abstract:  

Cell phones are assuming an increasing role in personal computing tasks, but cell phone security has not evolved on par with this new role. In a class of systems that leverage cell phones to facilitate access to remote services, compromising a phone may provide the means to compromise or abuse the remote services. This paper presents the background to this class of systems, examines the threats they are exposed to, and discusses possible countermeasures. A concrete solution is presented, which is based on multi-factor authentication and an on-demand strategy for minimizing exposure. This solution is built on top of a representative off-the-shelf commercial product called SoonR. Rather than proposing a one-size-fits-all solution, this work enables end-users to manage the tradeoff between security assurances and the overhead of using the corresponding features. The contributions of this paper are a discussion of the problem and a set of guidelines for improving the design of security solutions for remote access systems.


 
Title:  
VISUAL ABSTRACT NOTATION FOR GUI MODELLING AND TESTING - VAN4GUIM
Author(s):  
Rodrigo M. L. M. Moreira and Ana C. R. Paiva
Abstract:  

This paper presents a new Visual Notation for GUI Modelling and testing (VAN4GUIM), which aims to hide, as much as possible, the formalism details inherent to the models used in model-based testing (MBT) approaches, and to promote the use of MBT in industrial environments by providing a visual front-end for modelling that is more attractive to testers than textual notation. The visual notation is developed as five different UML profiles and is based on three notations/concepts: the Canonical Abstract Prototyping notation; the ConcurTaskTrees (CTT) notation; and the Window Manager concept. A set of translation rules was defined in order to automatically convert VAN4GUIM to Spec#. GUI models are developed in the VAN4GUIM notation and then translated automatically to Spec#, which can then be completed manually with additional behaviour not included in the visual model. As soon as a Spec# model is complete, it can be used as input to Spec Explorer (a model-based testing tool), which generates test cases and executes those tests automatically.


 
Title:  
ELUSIVE BUGS, BOUNDED EXHAUSTIVE TESTING AND INCOMPLETE ORACLES
Author(s):  
W. E. Howden
Abstract:  

Elusive bugs involve combinations of conditions that may not fit into any informal or intuitive testing scheme. One way to attack them is with Bounded Exhaustive Testing (BET), in which all combinations of inputs for a bounded version of an application are tested. Studies of the effectiveness of BET for known bugs indicate that it is a promising approach. Because of the large numbers of tests involved, BET normally depends on automated test generation and test execution. This in turn requires the use of an automated oracle. In some cases the construction of a complete automated oracle would require the development of a second version of the application. This may be avoidable if incomplete oracles are used. Two classes of incomplete oracles are identified: necessity and sufficiency oracles. Basic rules are given for combining simple oracles in an incremental fashion, resulting in oracles that are more widely effective. Examples are given of experiments using a necessity and a sufficiency oracle.


 
Title:  
TOWARDS A CLASSIFICATION SCHEME IN ORTHOGONAL DIMENSIONS OF REUSABILITY
Author(s):  
Markus Aulkemeier, Jürgen Heine, Emilio G. Roselló, Jacinto G. Dacosta and J. Baltasar García Perez-Scholfield
Abstract:  

The reuse of existing pieces of code has become a common practice in software engineering. Despite the lively interest directed towards this field, most of the existing literature is based on specific aspects and models of reuse, which provides a fragmented and compartmentalized view of the domain. No holistic, unifying proposal exists that organizes the reuse domain as a conceptual software characteristic in a comprehensive way. In this context, the present work contributes a three-dimensional classification model for reusable software artefacts. The three dimensions, independence, contract specification and composition, are identified as fundamental dimensions of reusable software artefacts.


 
Title:  
ADJUSTING ANALOGY SOFTWARE EFFORT ESTIMATION BASED ON FUZZY LOGIC
Author(s):  
Mohammad Azzeh, Daniel Neagu and Peter Cowling
Abstract:  

Analogy estimation is a well-known approach to software effort estimation. Its underlying assumption is that the more similar the software project description attributes are, the more similar the software project efforts are. One of the difficult activities in analogy estimation is deriving a new estimate from the retrieved solutions. Using retrieved solutions without adjusting them to the problem environment under consideration is often not sufficient; they need some adjustment to minimize the variation between the current case and the retrieved cases. The main objective of this paper is to investigate the applicability of a fuzzy-logic-based software project similarity measure to adjust analogy estimation and derive a new estimate. We propose adaptation techniques which take into account the similarity between two software projects in terms of each feature. In earlier work, a similarity measure between software projects based on fuzzy C-means was proposed and validated theoretically against well-known axioms such as normality, symmetry and transitivity. This similarity measure is used to guide the derivation of a new estimate.
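The general adjustment idea can be sketched as a similarity-weighted combination of retrieved efforts. The per-feature similarity function and project data below are invented for illustration and are not the fuzzy C-means measure proposed in the paper:

```python
# Illustrative analogy adjustment: weight each retrieved project's effort by
# its similarity to the target project. All numbers here are made up.

def adjusted_estimate(target, analogies, similarity):
    """Derive an estimate from retrieved analogies, weighting each
    retrieved effort by its similarity to the target project."""
    weights = [similarity(target, a["features"]) for a in analogies]
    return (sum(w * a["effort"] for w, a in zip(weights, analogies))
            / sum(weights))

# Toy per-feature similarity: closer feature values give similarity near 1.
def similarity(f1, f2):
    return sum(1.0 / (1.0 + abs(a - b)) for a, b in zip(f1, f2)) / len(f1)

analogies = [
    {"features": [10, 3], "effort": 100.0},
    {"features": [20, 5], "effort": 220.0},
]
print(round(adjusted_estimate([12, 3], analogies, similarity), 1))  # -> 130.0
```

The closer analogy dominates the estimate, which is the intended effect of adjusting retrieved solutions instead of averaging them uniformly.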


 
Title:  
RAPID APPLICATION DEVELOPMENT IN SYNERGY WITH PERSISTENCE FRAMEWORK
Author(s):  
Choon How Choo and Sai Peck Lee
Abstract:  

This paper proposes the concept, architecture, design and development of a rapid application development toolkit that leverages a persistence framework named PersistF to provide an easy-to-use and customizable front-end web application development environment in which software developers can perform rapid web application development. The proposed toolkit consists of two main parts, RADEWeb and the PersistF Configuration Wizard, which enable software developers not only to deliver their target web application within a shorter timeframe through an easy-to-use front-end environment, but also to encapsulate database access away from the business objects of the web application.


 
Title:  
FUNCTION POINT SIZE ESTIMATION FOR OBJECT ORIENTED SOFTWARE BASED ON USE CASE MODEL
Author(s):  
A. Chamundeswari and Chitra Babu
Abstract:  

Precise size estimation early in the Software Development Life Cycle (SDLC) has always been a challenge for the software industry. In the context of Object Oriented (OO) software, the Use Case Model (UCM) is widely used to capture the functionality addressed in the software. Existing size estimation techniques such as Use Case Points (UCP) and Use Case Size Points (USP) do not adhere to any standard. Consequently, many variations are possible, leading to inaccurate size estimation. On the other hand, Function Point Analysis (FPA) has been standardized. However, current estimation approaches based on FPA employ object modeling, which happens later in the SDLC, rather than the UCM. In order to gain the advantages of both FPA and the UCM, this paper proposes a new approach for size estimation of OO software, based on the UCM and adapting it to FPA. Mapping rules are proposed for the proper identification and classification of the various components from the UCM to FPA. Estimation results obtained using the proposed approach are compared with those obtained using a finer-grained object model which adapts FPA at the design phase. The close agreement between the two results indicates that the proposed approach is suitable for accurate software size estimation early in the SDLC.


 
Title:  
ENGINEERING PROCESS BASED ON GRID USE CASES FOR MOBILE GRID SYSTEMS
Author(s):  
David G. Rosado, Eduardo Fernández-Medina, Mario Piattini and Javier López
Abstract:  
Interest in incorporating mobile devices into Grid systems has arisen with two main purposes: the first is to enrich the experience of the users of these devices, while the second is to enrich the Grid infrastructure itself. The security of these systems, due to their distributed and open nature, is a topic of great interest. A formal approach to security in the software life cycle is essential to protect corporate resources; however, little attention has been paid to this aspect of software development. Due to its criticality, security should be integrated into the software life cycle as a formal approach. We are developing a methodology for building secure mobile Grid computing systems that helps to design and build secure Grid systems with support for mobile devices, directed by use cases and security use cases and focused on a service-oriented security architecture. In this paper, we present one of the first steps of our methodology, which consists of analyzing the security requirements of mobile Grid systems. This analysis yields the set of security requirements that our methodology must cover and implement.

 
Title:  
RESOLVING INCOMPATIBILITY DURING THE EVOLUTION OF WEB SERVICES WITH MESSAGE CONVERSION
Author(s):  
Vadym Borovskiy, Alexander Zeier, Jan Karstens and Heinz Ulrich Roggenkemper
Abstract:  

One of the challenges that Web service providers face is service evolution management. In general, the challenge is to ensure the substitutability of service versions, i.e. the correct functioning of all ongoing client applications relying on the old version of a service after that version has been substituted with a new one. To achieve the desired substitutability, the architecture of a service implementation must be extensible. Unfortunately, no currently available design approach can guarantee a perfectly extensible architecture that preserves full backward compatibility during its evolution. Hence, incompatibilities are very likely to occur when the old version of a service is replaced with a new one. This paper addresses the incompatibility problem and describes a solution based on the known message-translation design pattern and the ASP.NET 2.0 Web service platform. Using the platform's API, the standard ASP.NET pipeline has been augmented with an additional step that applies XSL transformations to the XML payload of the messages. The solution is verified against the Electronic Commerce Service from the Amazon.com web services suite. Thus, the contribution of this work is a new .NET implementation of the translator pattern.
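The translator-pattern idea can be sketched compactly. The paper applies XSL transformations inside the ASP.NET pipeline; in this sketch a hand-written element-renaming pass over the XML payload stands in for the XSLT step, and the message and element names are invented:

```python
# Message translation between service versions: rewrite an old-version
# request so the new service version accepts it. Element names are invented.
import xml.etree.ElementTree as ET

# An old-version request whose child element was renamed in the new version.
OLD_REQUEST = "<ItemSearch><SearchIndex>Books</SearchIndex></ItemSearch>"

# Mapping from old-version element names to new-version names.
RENAMES = {"SearchIndex": "Category"}

def translate(xml_text, renames):
    """Apply the rename mapping to every element in the message payload."""
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        if elem.tag in renames:
            elem.tag = renames[elem.tag]
    return ET.tostring(root, encoding="unicode")

print(translate(OLD_REQUEST, RENAMES))
# -> <ItemSearch><Category>Books</Category></ItemSearch>
```

Placing such a translation step in the message pipeline lets old clients keep working unchanged after the service is upgraded, which is exactly the substitutability goal described above.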


 
Title:  
FINE-GRAINED INTEGRATED MANAGEMENT OF SOFTWARE CONFIGURATIONS AND TRACEABILITY RELATIONS
Author(s):  
Pietro Colombo, Vieri del Bianco and Luigi Lavazza
Abstract:  
Software Configuration Management (SCM) is essential to manage the evolution of non-trivial software systems. Requirements for SCM support are continuously growing, demanding seamless integration with traditional development tools and support for management activities such as change management and change impact analysis. This paper presents SCCS-XP, an SCM platform supporting change management and traceability among fine-grained software artifacts. SCCS-XP exploits an XML-based language to represent versioned software artifacts. It provides the basic capabilities to build full-fledged SCM environments featuring traceability management and change management, and it integrates well with development tools.

 
Title:  
AN INCREMENTAL APPROACH TO SOFTWARE REENGINEERING BASED ON OBJECT-DATA MAPPING
Author(s):  
Giacomo Bucci, Valeriano Sandrucci and Enrico Vicario
Abstract:  

We address the problem of reengineering legacy systems towards the adoption of currently predominant technologies, i.e. object-oriented (OO) programming and relational databases (RDB). To smooth the reengineering process we follow an evolutionary approach based on the construction of a mapping layer that decouples application logic from persistent data, so that application reengineering and data reengineering are made independent and can be carried out incrementally. The mapping layer does not impose any particular environment, container or the like; therefore, program development can be carried out based on well-established OO design principles.
In reimplementing applications, rather than trying to identify application classes exclusively from the legacy code, we follow the guidelines of iterative development processes such as UP, giving due consideration to actual user requirements.


 
Title:  
A COMPONENT-BASED SOFTWARE ARCHITECTURE - Reconfigurable Software for Ambient Intelligent Networked Services Environments
Author(s):  
Michael Berger, Lars Dittmann, Michael Caragiozidis, Nikos Mouratidis, Christoforos Kavadias and Michael Loupis
Abstract:  

This paper describes a component-based software architecture and design methodology that enables efficient engineering, deployment and run-time management of reconfigurable ambient intelligent services. The specific application of a media player is taken as an example to show the development of software bundles according to the proposed methodology. Furthermore, a software tool has been developed to facilitate the composition and graphical representation of component-based services. The tool provides a model of a generic reusable component, and the user of the tool is able to instantiate reusable components using this model implementation. The work has been carried out within the European project COMANCHE, which will utilize component models to support Software Configuration Management.


 
Title:  
STRUCTURING DESIGN KNOWLEDGE IN SERVICE-ORIENTED ARCHITECTURE
Author(s):  
Dionisis X. Adamopoulos
Abstract:  

Web services are emerging technologies that can be considered the result of the continuous improvement of Internet services driven by the tremendous increase in the demand placed on them. They are rapidly evolving and are expected to change the paradigms of both software development and use, by promoting software reusability over the Internet, by facilitating the wrapping of underlying computing models with XML, and by providing diverse and sophisticated functionality fast and flexibly in the form of composite service offerings. In this paper, the different facets of Web services are identified, and a flexible approach to engineering complex Web services is adopted in the form of a proposed framework for the development of Web services. After examining its main constituent parts, it is argued that its full potential, and that of Web service engineering in general, is realized through the gradual formation of a rich service grid offering value-added supporting functionality; the main desirable properties of such a service grid are therefore highlighted. Finally, the paper outlines a validation approach for the proposed framework and presents pointers for future work together with concluding remarks.


 
Title:  
LASER SIMULATION - Methods of Pulse Detection in Laser Simulation
Author(s):  
Jana Hájková
Abstract:  

This paper deals with the problem of laser simulation. It begins with a broad overview of the laser simulation project under way at the University of West Bohemia. The simulation is described in several fundamental steps to convey the basic concept of the process. The particular techniques for obtaining data sets, processing them and using them in the simulation, as well as the importance of verifying the simulation system, are highlighted so that the whole approach can be well understood. As the main topic of the paper, several methods for automatic pulse detection are described in detail. Pulse detection is the main part of pulse extraction, which is one of the most important data processing steps. The main idea of each described method is explained, and its problems and possible ways of eliminating them are discussed. All methods are tested on several selected samples, chosen for their typical or, conversely, specific features. For each method, the problematic samples are mentioned and the reasons for inaccuracies are discussed. The paper closes with future plans for the project, focusing on alternatives for automating the system.


 
Title:  
HANDLING DEVELOPMENT TIME UNCERTAINTY IN AGILE RELEASE PLANNING
Author(s):  
Kevin Logue and Kevin McDaid
Abstract:  

When determining the functionality to complete in upcoming software releases, decisions are typically based upon uncertain information. Both the business value and the cost to develop chosen functionality are highly susceptible to uncertainty. This paper proposes a relatively simple statistical methodology that allows for uncertainty in both business value and cost. In so doing, it provides key stakeholders with the ability to determine the probability of completing a release on time and within budget. The technique is lightweight in nature and consistent with existing agile planning practices. A case study demonstrates how the method may be used.
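One common way to turn uncertain per-story costs into a probability of on-time completion is Monte Carlo simulation. This is only an illustration with invented story estimates and capacity; the paper proposes its own statistical method:

```python
# Probability of completing a release on time under uncertain development
# times, sketched as a Monte Carlo simulation. Numbers are invented.
import random

random.seed(1)

# Each story's development time as a (low, high) estimate in days.
stories = [(2, 5), (1, 3), (4, 8), (2, 4)]
capacity = 16  # days available in the release

def prob_on_time(stories, capacity, trials=20000):
    """Fraction of simulated releases whose total effort fits the capacity."""
    hits = 0
    for _ in range(trials):
        total = sum(random.uniform(lo, hi) for lo, hi in stories)
        if total <= capacity:
            hits += 1
    return hits / trials

p = prob_on_time(stories, capacity)
print(round(p, 2) <= 1.0)  # a probability between 0 and 1
```

Stakeholders can then trade scope against risk, e.g. by dropping a story until the probability reaches an acceptable threshold.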


 
Title:  
GOOAL AUTOMATIC DESIGN TOOL - A Role Posets based Tool to Produce Object Models from Problem Descriptions
Author(s):  
Hector G. Perez-Gonzalez, Sandra Nava-Muñoz, Alberto Nuñez-Varela and Jugal Kalita
Abstract:  

Different software analysts may produce different, perhaps all correct, solutions from one specific software requirements document. This is because natural language understanding is complex and each analyst has distinct design experience. A methodology that can be automated, using a proposed semi-natural language called 4WL, is used to accelerate the production of reliable agreements between different stakeholders. The supporting software tool, GOOAL (Graphic Object Oriented Analysis Laboratory), automatically produces simple object models (UML diagrams) from English or Spanish statements with minimal user participation. These statements faithfully describe the original problem description sentences. The models are generated by analyzing each sentence of the intermediate 4W-language version of the original sentence set. With this methodology and supporting tool, students of Object Oriented technology can visualize the design decisions being made by the system. The methodology and tool have been used to support the learning process in Object Oriented analysis and design courses. The original tool was developed to “understand” English and was validated against design artefacts produced by several experts from the University of Colorado. The main results reported by the students relate to the use of good design practices, a better understanding of the UML language and a greater interest in the pre-programming process. The tool's technical contribution is the role posets technique.


 
Title:  
AUTOMATIC GENERATION OF INTERACTIVE PROTOTYPES FOR DOMAIN MODEL VALIDATION
Author(s):  
António Miguel Rosado da Cruz and João Pascoal Faria
Abstract:  

This paper presents an approach to validating domain models with customers, end users and other stakeholders. From an early system model that represents the main domain (or business) entities in a UML class diagram, with classes, relationships, attributes and constraints, an interactive form-based application prototype supporting the basic CRUD operations (create, retrieve, update and delete) is automatically generated. The generated form-based user interface provides features that are derived from the model's constraints and increase the prototype's usability. The prototype allows the early validation of core system models and can also be used as a basis for subsequent development. The prototype generation process follows a model-driven development approach: the domain model, conforming to a defined domain meta-model, is first transformed into an application model, conforming to a defined application meta-model, based on a set of transformation rules; then a generator for a specific platform produces the executable files (currently, XUL and RDF files).
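The domain-model-to-application-model step can be sketched as a small model-to-model transformation. The toy metamodel fields and widget names below are invented for illustration and are not the paper's meta-models:

```python
# Illustrative model-driven transformation: map a domain class (toy
# metamodel) to a form-based application model with the CRUD operations.

def to_application_model(domain_class):
    """One transformation rule: a domain class becomes a form whose fields
    bind to the class attributes, plus the four basic CRUD operations."""
    return {
        "form": domain_class["name"] + "Form",
        "fields": [{"widget": "textbox", "binds": a}
                   for a in domain_class["attributes"]],
        "operations": ["create", "retrieve", "update", "delete"],
    }

customer = {"name": "Customer", "attributes": ["id", "name", "email"]}
app = to_application_model(customer)
print(app["form"], len(app["fields"]))  # -> CustomerForm 3
```

A platform-specific generator would then turn such an application model into concrete UI artifacts, analogous to the XUL/RDF generation step described above.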


 
Title:  
DYNAMISM IN REFACTORING CONSTRUCTION AND EVOLUTION - A Solution based on XML and Reflection
Author(s):  
Raúl Marticorena and Yania Crespo
Abstract:  

Currently available refactoring tools, whether stand-alone or integrated into development environments, offer a static set of refactoring operations. Users (developers) can run these refactorings on their source code, but they cannot adjust, enhance or evolve them, or even extend the refactoring set, in a smooth way. Refactoring operations are hand-coded using support libraries. The problems of maintaining or enriching refactoring tools and their libraries are the same as for any kind of software, with added complexity from dealing with refactorings and from managing and transforming software elements. On the other hand, available refactoring tools are mainly language-dependent, so the effort needed to reuse refactoring implementations when the source code programming language changes is enormous. This paper describes our work on aided refactoring construction and evolution based on the declarative definition of refactoring operations. Our proposal allows building refactorings from scratch as well as composing refactoring operations declaratively. The solution is based on frameworks, XML and reflective programming. A certain degree of language independence is also achieved, easing migration from one programming language to another and even providing rational support for multi-language development environments.


 
Title:  
MODELS FOR INTERACTION, INTEGRATION AND EVOLUTION OF PRE-EXISTENT SYSTEMS AT ARCHITECTURAL LEVEL
Author(s):  
Juan Muñoz López, Jaime Muñoz Arteaga, Francisco Javier Álvarez Ramírez, Manuel Mora Tavarez and Ma. Lourdes Y. Margain Fernández
Abstract:  
This paper describes a set of models that may serve as the basis for the creation of architectural patterns for the interaction, evolution and integration of pre-existing systems. The proposed models are based on an identification of a system's specialized components for operation, control and direction, making it easier to find connecting points between systems. This set of models captures some of the rationale needed to adapt pre-existing systems to an evolving environment where organizations and technologies are under continuous change.

 
Title:  
AN ESTIMATIVE MODEL OF THE POINTED DEFECTS RATE IN SOFTWARE PRE-REVIEW FOR NOVICE ENGINEERS IN CHINESE OFFSHORE COMPANY
Author(s):  
Zuoqi Wang, Yixiao Qu, Masanori Akiyoshi and Norihisa Komoda
Abstract:  

This paper quantitatively discusses the effectiveness of pre-review of software developed by novice software engineers in a Chinese offshore company. The pre-review process is applied to software products developed by novice engineers before the normal test process, in order to maintain product quality. We extract the factors that influence the number of defects. The collected data on the pointed-defects rate and these factors in 27 pre-reviews are then analysed using “quantification theory type I” to create a mathematical model for estimating the pointed-defects rate. The coefficient of determination R of the obtained model is 0.86, so the model provides sufficient accuracy. In the model, “difficulty of task” is the most influential factor.


 
Title:  
A UML-BASED VARIABILITY SPECIFICATION FOR PRODUCT LINE ARCHITECTURE VIEWS
Author(s):  
Liliana Dobrica and Eila Niemela
Abstract:  
In this paper we present a rigorous and practical notation for specifying variability in product line architecture views expressed in the Unified Modeling Language (UML). The specification notation paves the way for the development of tools. It has been used for the explicit representation of variations and their locations in software product line architectures, based on an already established design method. A more familiar and widely used notation facilitates a broader understanding of the architecture and enables more extensive tool support for manipulating it.

 
Title:  
A HW/SW CO-REUSE METHODOLOGY BASED ON DESIGN REFINEMENT TEMPLATES IN UML DIAGRAMS
Author(s):  
Masahiro Fujita, Takeshi Matsumoto and Hiroaki Yoshida
Abstract:  

In general, the design refinement process of an electronic system comprising both hardware and software components follows a process similar to that of other systems in the requirements analysis and system-level design phases. This is especially true when the systems belong to the same product domain. Therefore, we can easily reuse various documents from analysis and design refinement processes by making templates of those processes for both hardware and software implementation. In this paper, we propose a methodology that generates such templates, and we illustrate that the template made from the design refinement process of a Compact-Flash (CF) memory interface can actually be used in that of an ATM switch, both of which are typical HW/SW co-designs where most of the control is performed by software. The generated templates can be applied to various designs with an internal structure such as ``IO + intelligent buffers''.
We use UML (Unified Modeling Language) to describe the design templates and demonstrate the efficiency of using templates by showing the similarity of the UML diagrams.


 
Title:  
RIGOROUS COMMUNICATION MODELLING AT TRANSACTION LEVEL WITH SYSTEMC
Author(s):  
Tomi Metsälä, Tomi Westerlund, Seppo Virtanen and Juha Plosila
Abstract:  

We introduce a communication model for ActionC, a framework for the rigorous development of embedded computer systems. The concept of ActionC is the integration of SystemC, an informal design language, and Action Systems, a formal modelling language supporting verification and stepwise correctness-preserving refinement of system models. The ActionC approach combines the possibility of using a formal correct-by-construction method with an industry-standard design language that also includes a simulation environment. Translation of an Action Systems model to the corresponding ActionC model is carried out with the means provided by SystemC, in a way that preserves the semantics of the underlying formal model. Hence, the ActionC framework allows us to reliably simulate Action Systems descriptions using standard SystemC tools, which is especially important for validating the initial formal specification of a system. Our initial experiments with ActionC have successfully produced proven-correct, simulatable SystemC descriptions of Action Systems.


 
Title:  
PATTERN-BASED BUSINESS-DRIVEN ANALYSIS AND DESIGN OF SERVICE ARCHITECTURES
Author(s):  
Veronica Gacitua-Decar and Claus Pahl
Abstract:  

Service architectures are an increasingly adopted architectural approach to the Enterprise Application Integration (EAI) problem that arises from business process automation requirements. In previous work, we developed a methodological framework for the design of service architectures for EAI. The framework is structured as a layered architecture called LABAS and is distinguished by its use of architectural abstractions in the different layers. This paper describes the pattern-based techniques used in LABAS for service identification, for the transformation from business models to service architectures, and for architecture modifications.


 
Title:  
LEARNABILITY AND ROBUSTNESS OF USER INTERFACES - Towards a Formal Analysis of Usability Design Principles
Author(s):  
Steinar Kristoffersen
Abstract:  

Models are often seen as context-free abstractions that make the translation into the next step of refinement more efficient, while formal reasoning about their properties can detect and correct errors early in the process. Assessing this strategy for usability design, this paper proposes a broad set of novel concepts and explications in a framework for logical reasoning about the properties of an interactive system, as seen from the user's perspective. The discussion is based on well-known principles of usability. The objective is to lay the groundwork, albeit still rather informally, for a program of assessing the usability of an interactive system using formal methods. Further research can then extend this into a strong algebra of interactive systems.


 
Title:  
SOFTWARE RE-STRUCTURING - An Architecture-Based Tool
Author(s):  
Violeta Bozhikova, Mariana Stoeva, Anatoly Antonov and Vladimir Nikolov
Abstract:  

Practice shows that many software systems are large and complex and have been evolving for many years. Because the structure of these systems is usually not well documented, considerable research effort is needed to find appropriate abstractions of their structure that simplify their maintenance, evolution and adaptation. A variety of techniques and tools have been developed in an attempt to solve this problem effectively. This paper discusses an architecture-based framework for software re-structuring and how this framework is implemented in an evolving, user-driven and flexible tool that can effectively support the software re-structuring process.


 
Title:  
SOFTWARE EFFORT ESTIMATION AS A CLASSIFICATION PROBLEM
Author(s):  
Ayşe Bakır, Burak Turhan and Ayşe Bener
Abstract:  

Software cost estimation is still an open challenge. Many researchers have proposed various methods that usually focus on point estimates. Software cost estimation has, up to now, been treated as a regression problem. However, in order to prevent over- and under-estimates, it is more practical to predict the interval of estimations rather than exact values. In this paper, we propose an approach that converts cost estimation into a classification problem and classifies new software projects into one of several effort classes, each corresponding to an effort interval. Our approach integrates cluster analysis with classification methods: cluster analysis is used to determine effort intervals, while different classification algorithms are used to find the corresponding effort classes. The proposed approach is applied to seven public data sets. Our experimental results show that the hit rates obtained for effort estimation are around 90%-100%. For point estimation, the results are also comparable to those in the literature.
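The two-step scheme (cluster efforts into intervals, then classify new projects into an interval) can be sketched as follows. The crude 1-D split and nearest-neighbour classifier stand in for the paper's cluster analysis and classification algorithms, and all numbers are invented:

```python
# Effort estimation as classification: derive effort intervals, then assign
# a new project to an effort class. Toy data and toy algorithms throughout.

def effort_intervals(efforts, k=2):
    """Split sorted effort values into k contiguous intervals (a crude
    stand-in for proper cluster analysis)."""
    xs = sorted(efforts)
    size = len(xs) // k
    return [(xs[i * size], xs[min((i + 1) * size, len(xs)) - 1])
            for i in range(k)]

def classify(project, training):
    """Assign a new project the effort class of its nearest neighbour."""
    nearest = min(training,
                  key=lambda t: sum((a - b) ** 2 for a, b in zip(project, t[0])))
    return nearest[1]

efforts = [10, 12, 15, 80, 90, 95]
intervals = effort_intervals(efforts, k=2)

# training set: (feature vector, effort-class index)
training = [([1, 2], 0), ([1, 3], 0), ([9, 8], 1), ([8, 9], 1)]
print(intervals, classify([8, 8], training))  # -> [(10, 15), (80, 95)] 1
```

A predicted class then maps back to an effort interval, giving a range rather than a point estimate.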


 
Title:  
AN INTERMEDIATION SYSTEM BASED ON AGENTS MODELLING TO SHARE KNOWLEDGE IN A COMMUNITY OF PRACTICES
Author(s):  
Clauvice Kenfack and Danielle Boulanger
Abstract:  

This paper presents an intermediation multi-agent system to manage distributed collaborative design environments such as Communities of Practice (CoPs). The JADE-based intermediation system (JAIS) uses a community enactment mechanism and an agent integration mechanism. The community enactment mechanism is the system kernel and follows the specifications of the community-of-practice reference model. The system kernel supports four agents (moderator, user, expert and newcomer agents) to manage the community, while the integration mechanism supports an intermediation agent to interact with, coordinate and monitor the activities between agents. JAIS facilitates team interaction in a collaborative and distributed environment.


 
Title:  
PREDICTING DEFECTS IN A LARGE TELECOMMUNICATION SYSTEM
Author(s):  
Gözde Koçak, Burak Turhan and Ayşe Bener
Abstract:  

In a large software system, knowing which files are most likely to be fault-prone is valuable information for project managers, who can use it to prioritize code inspection and allocate resources accordingly. However, our experience shows that it is difficult to collect and analyze fine-grained test defects in a large and complex software system. On the other hand, previous research has shown that companies can safely use cross-company data with nearest-neighbor sampling to predict their defects when they are unable to collect local data. In this study we analyzed 25 projects of a large telecommunication system. To predict the defect proneness of modules, we learned from NASA MDP data. In our experiments we used static call graph based ranking (CGBR) as well as nearest-neighbor sampling to construct method-level defect predictors. Our results suggest that, for the analyzed projects, at least 70% of the defects can be detected by inspecting only i) 6% of the code using a Naïve Bayes model, or ii) 3% of the code using the CGBR framework.
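The Naïve Bayes defect predictor mentioned above can be sketched in a few lines. The metric values, module labels and Gaussian feature model below are invented for illustration; this is not the paper's trained model or the CGBR framework:

```python
# Toy Gaussian Naive Bayes defect predictor over static code metrics.
import math

def fit(rows, labels):
    """Per-class mean and variance for each metric column."""
    model = {}
    for c in set(labels):
        cols = list(zip(*[r for r, l in zip(rows, labels) if l == c]))
        stats = []
        for col in cols:
            m = sum(col) / len(col)
            v = sum((x - m) ** 2 for x in col) / len(col) or 1e-6
            stats.append((m, v))
        model[c] = stats
    return model

def predict(model, row):
    """Pick the class with the highest Gaussian log-likelihood."""
    def loglik(stats):
        return sum(-((x - m) ** 2) / (2 * v)
                   - 0.5 * math.log(2 * math.pi * v)
                   for x, (m, v) in zip(row, stats))
    return max(model, key=lambda c: loglik(model[c]))

# metrics: (lines of code, cyclomatic complexity); label 1 = defective
rows = [(50, 3), (60, 4), (400, 30), (500, 35)]
labels = [0, 0, 1, 1]
model = fit(rows, labels)
print(predict(model, (450, 32)))  # -> 1
```

Ranking modules by their predicted defect probability is what lets inspectors focus on a small fraction of the code, as in the 6%/3% figures quoted above.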


 
Title:  
REFACTORING PREDICTION USING CLASS COMPLEXITY METRICS
Author(s):  
Yasemin Köşker, Burak Turhan and Ayşe Bener
Abstract:  
In the lifetime of a software product, development costs are only the tip of the iceberg: nearly 90% of the cost is maintenance due to error correction, adaptation and, mainly, enhancements. As Lehman and Belady (1985) state, software becomes increasingly unstructured as it is changed. One way to overcome this problem is refactoring, an approach which reduces software complexity by incrementally improving internal software quality. Our motivation in this research is to detect the classes that need to be refactored by analyzing code complexity. We propose a machine-learning-based model to predict which classes should be refactored. We use Weighted Naïve Bayes with the InfoGain heuristic as the learner, and we conducted experiments with metric data collected from the largest GSM operator in Turkey. Our results show that we can predict 82% of the classes that need refactoring with, on average, 13% of the manual inspection effort.
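The InfoGain heuristic named in the abstract measures how much a metric reduces uncertainty about the "needs refactoring" label. A minimal sketch with invented, already-discretized data:

```python
# Information gain of a discretized complexity metric with respect to the
# refactoring label. Data values are invented for illustration.
import math

def entropy(labels):
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def info_gain(feature, labels):
    """Entropy reduction obtained by splitting on each feature value."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# complexity bucket ("low"/"high") vs. refactoring label (0/1)
complexity = ["low", "low", "high", "high"]
needs_refactor = [0, 0, 1, 1]
print(info_gain(complexity, needs_refactor))  # -> 1.0
```

A Weighted Naïve Bayes learner can use such gains to weight (or select) the most informative complexity metrics before classification.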

 
Title:  
MODELS, FEATURES AND ALGEBRAS - An Exploratory Study of Model Composition and Software Product Lines
Author(s):  
Roberto E. Lopez-Herrejon
Abstract:  

Software Product Lines (SPL) are families of related programs distinguished by the features they provide. Feature Oriented Software Development (FOSD) is a paradigm that raises features to first-class entities in the definition and modularization of SPL. The relevance of model composition has been addressed in UML 2 with the new Package Merge construct. In this paper we show the convergence that exists between FOSD and Package Merge. We believe exploring their synergies could be mutually beneficial: SPL compositional approaches could leverage experience with the composition of non-code artifacts, while model composition could find in SPL new problem domains on which to evaluate and apply its theories, tools and techniques.


 
Area 3 - Distributed and Parallel Systems  
Title:  
BEHAVIOR CHARACTERIZATION AND PERFORMANCE EVALUATION OF A HYBRID P2P FILE SHARING SYSTEM
Author(s):  
Juan Pedro Muñoz-Gea, Josemaria Malgosa-Sanahuja, Pilar Manzanares-Lopez, Juan Carlos Sanchez-Aarnoutse and Joan Garcia-Haro
Abstract:  
Peer-to-Peer (P2P) networks show a set of distinctive features which increase the need to simulate new proposals beforehand. In order to perform an adequate evaluation of a P2P application, a realistic characterization of these applications, considering query behavior and node dynamism, is necessary. After validating a new P2P algorithm by means of simulation, developers can then test a real P2P application instance in an emulated environment. At each step, the detected errors or misbehaviors are appropriately corrected. The main contribution of this paper is twofold: first, to adequately characterize the real behavior of P2P overlay networks (including dynamic and static aspects), and second, to evaluate a real P2P overlay network under the above constraints using one of the most popular (high-level and user-friendly) simulation frameworks.
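The node-dynamism (churn) side of such a characterization is often modelled with random session durations. The exponential session times and parameters below are a common modelling choice, invented here for illustration, not the paper's measured distributions:

```python
# Sampling alternating online/offline session durations for peers, a simple
# churn model for P2P simulation. Parameters are invented.
import random

random.seed(7)

def sample_sessions(n, mean_online=60.0, mean_offline=30.0):
    """(online, offline) durations in minutes for n peers, exponentially
    distributed with the given means."""
    return [(random.expovariate(1 / mean_online),
             random.expovariate(1 / mean_offline))
            for _ in range(n)]

sessions = sample_sessions(1000)
avg_online = sum(on for on, _ in sessions) / len(sessions)
print(len(sessions))  # -> 1000
```

A simulator draws from such a model to decide when each peer joins and leaves, so that query-routing results reflect realistic overlay dynamism.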

 
Title:  
REPLICATION IN SERVICE ORIENTED ARCHITECTURES
Author(s):  
Michael Ameling, Marcus Roy and Bettina Kemme
Abstract:  
Multi-tier architectures have become the main building block in service-oriented architecture solutions with stringent requirements on performance and reliability. Replicating the reusable software components of the business logic and the application dependent state of business data is a promising means to provide fast local access and high availability.
However, while replication of databases is a well explored area and the implications of replica maintenance are well understood, this is not the case for data replication in application servers, where entire business objects are replicated, Web Service interfaces are provided, main memory access is much more prevalent, and a database server serves as a backend tier.
In this paper, we introduce possible replication architectures for multi-tier architectures, and identify the parameters influencing the performance. We present a simulation prototype that is suitable to integrate and compare several replication solutions. We describe in detail one solution that seems to be the most promising in a wide-area setting.

 
Title:  
STRATEGIES FOR OPTIMIZING QUERYING THIRD PARTY RESOURCES IN SEMANTIC WEB APPLICATIONS
Author(s):  
Albert Weichselbraun
Abstract:  
One key property of the Semantic Web is its support for interoperability. Combining knowledge sources from different authors and locations yields refined and better results.
Current Semantic Web applications use only a limited number of particularly useful and popular information providers, such as Swoogle and geonames, for their queries.
As more and more applications employing Semantic Web technologies emerge, the load caused by these applications is expected to grow, requiring more efficient ways of querying external resources.
This research suggests an approach for query optimization based on ideas originally proposed by McQueen for optimal stopping in business economics.
Applications querying external resources are modeled as decision makers looking for optimal action/answer sets, facing search costs for acquiring information, test costs for checking the acquired information, and receiving a reward depending on the usefulness of the proposed solution.
Applying these concepts to the information system domain yields strategies for optimizing queries to external services. An extensive evaluation compares these strategies to a conventional coverage based approach, based on real world response times taken from three different popular Web services.
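The stopping rule sketched in the abstract can be illustrated generically. The code below is our illustration of optimal stopping applied to source querying, not the paper's algorithm; the function names, the `expected_gain` estimator and the cost model are all assumptions made for the sketch.

```python
# Illustrative optimal-stopping query strategy (not the paper's algorithm).
# Query external services one by one; stop as soon as the expected improvement
# from one more query no longer covers its search + test cost.

def query_with_stopping(sources, expected_gain, search_cost, test_cost):
    """sources: callables returning (answer_set, score); expected_gain(i, best)
    estimates the improvement from querying source i given the best score so
    far. All names here are assumptions for the sketch."""
    best_set, best_score = None, 0.0
    total_cost = 0.0
    for i, query in enumerate(sources):
        # Stopping rule: continue only while the expected gain exceeds the
        # marginal cost of one more search + test round.
        if expected_gain(i, best_score) <= search_cost + test_cost:
            break
        answers, score = query()               # search: acquire the information
        total_cost += search_cost + test_cost  # test: check what was acquired
        if score > best_score:
            best_set, best_score = answers, score
    return best_set, best_score - total_cost   # net reward
```

With a decreasing gain estimate, the loop queries a few sources and then stops once further querying is not expected to pay for itself.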

 
Title:  
PERFORMANCE AND COMPLEXITY EVALUATION OF MULTI-PATH ROUTING ALGORITHMS FOR MPLS-TE
Author(s):  
K. Abboud, A. Toguyeni and A. Rahmani
Abstract:  
This paper discusses and evaluates the behaviour of a DS-TE (DiffServ-aware MPLS Traffic Engineering) algorithm called PEMS, and a dynamic multipath routing algorithm for load balancing (LBWDP), applied to a large topology that corresponds to a real network.
To clarify which network topologies and routing algorithms are suitable for MPLS Traffic Engineering, we evaluate them from the viewpoint of network scalability and end-to-end quality. We characterize typical network topologies and practical routing algorithms. Using a network topology generated by BRITE, with many alternative paths, provides a realistic simulation of the Internet and a good evaluation of end-to-end quality and network use. In this paper, we first review MPLS-TE, DiffServ and load balancing. We then discuss the general issues in designing DS-TE and load balancing algorithms. Based on our work, a generic procedure for deploying and simulating these algorithms is proposed. We also discuss the results and compare the algorithms. Putting these together, we present practical issues of Traffic Engineering and load balancing, and a working solution for DS-TE in the Internet.

 
Title:  
PIPELINED PARALLELISM IN MULTI-JOIN QUERIES ON HETEROGENEOUS SHARED NOTHING ARCHITECTURES
Author(s):  
Mohamad Al Hajj Hassan and Mostafa Bamha
Abstract:  
Pipelined parallelism has been widely studied and successfully implemented, on shared-nothing machines, in several join algorithms under ideal conditions of load balancing between processors and in the absence of data skew. The aim of pipelining is to allow flexible resource allocation while avoiding unnecessary disk input/output for intermediate join results in the treatment of multi-join queries.
The main drawback of pipelining in existing algorithms is that communication and load balancing remain limited to static approaches (generated during the query optimization phase) based on hashing to redistribute data over the network; they therefore cannot solve the data skew problem and load imbalance between processors on heterogeneous multi-processor architectures, where the load of each processor may vary in a dynamic and unpredictable way.
In this paper, we present a new parallel join algorithm that solves the problem of data skew while guaranteeing perfect balancing properties on heterogeneous multi-processor Shared Nothing architectures. The performance of this algorithm is analyzed using the scalable, portable BSP (Bulk Synchronous Parallel) cost model.

 
Title:  
ADAPTING GRID SERVICES FOR URGENT COMPUTING ENVIRONMENTS
Author(s):  
Jason Cope and Henry Tufo
Abstract:  
Emerging urgent computing tools can quickly allocate computational resources for the execution of time-critical jobs. Grid applications and workflows often use Grid services and service-oriented architectures. Currently, urgent computing tools cannot allocate or manage Grid services. In this paper, we evaluate a service-oriented approach to Grid service access and provisioning for urgent computing environments. Our approach allows resource providers to define urgent computing resources and Grid services at a much finer granularity than was previously possible. It accommodates new urgent computing resource types, requires minimal reconfiguration of existing services, and provides adaptive Grid service management tools. We evaluate our service-oriented, urgent computing approach by applying our tools to resource and data management Grid services commonly used in urgent computing workflows.

 
Title:  
USING MESSAGE PASSING FOR DEVELOPING COARSE-GRAINED APPLICATIONS IN OPENMP
Author(s):  
Bielecki Wlodzimierz and Palkowski Marek
Abstract:  
A technique for extracting coarse-grained parallelism in loops is presented. It is based on splitting a set of dependence relations into two sets. The first one is used for generating code scanning slices, while the second one permits us to insert send and receive functions to synchronize the execution of the slices. Implementations of the send and receive functions based on both OpenMP and POSIX lock functions are presented. A way of properly inserting and executing send and receive functions is demonstrated. The use of agglomeration and free scheduling to improve program performance is discussed. Results of experiments are presented.
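The send/receive pairing described above can be approximated outside OpenMP. The following is our Python analogue using a condition variable, not the paper's OpenMP/POSIX-lock code: `send` marks an iteration's data as produced, and `receive` blocks the consuming slice until it has been.

```python
import threading

# Python analogue (illustrative only) of send/receive synchronization between
# slices: the producing slice announces that an iteration's data is ready,
# and the dependent slice waits for that announcement before proceeding.

class Channel:
    def __init__(self):
        self._cond = threading.Condition()
        self._sent = set()

    def send(self, iteration):
        # called by the producing slice after computing `iteration`
        with self._cond:
            self._sent.add(iteration)
            self._cond.notify_all()

    def receive(self, iteration):
        # called by the consuming slice; blocks until the data was sent
        with self._cond:
            self._cond.wait_for(lambda: iteration in self._sent)
```

A producing slice calls `ch.send(i)` once iteration `i` is computed; the dependent slice calls `ch.receive(i)` before using it, which enforces the dependence regardless of thread scheduling.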

 
Title:  
TECHNICAL CLASSIFICATION OF RUNTIME ENVIRONMENTS FOR MOBILE APPLICATIONS
Author(s):  
Sören Blom, Matthias Book, Volker Gruhn, Ruslan Hrushchak and André Köhler
Abstract:  
The hype surrounding Web 2.0 and technologies such as AJAX shows that the future of distributed application development lies in Rich Internet Applications (RIAs), which are based on highly distributed components and characterized by the intensive use of communication networks, complex interaction patterns and advanced GUI capabilities. As service providers begin to tap into the mobile market by extending the reach of their established e-commerce systems to mobile devices, a core challenge is the choice of a runtime environment and middleware that adapts well to the existing architecture, yet is a safe investment for the years to come. This paper surveys the current state and the future of runtime environments suitable for developing RIAs for mobile clients. We compare the main characteristics of established technologies and promising new developments, and assess their future potential. Furthermore, we consider middleware products that provide an additional support layer solving typical mobility problems such as distribution, connectivity, performance and reliability.

 
Title:  
A HYBRID DIAGNOSTIC-RECOMMENDATION SYSTEM FOR AGENT EXECUTION IN MULTI-AGENT SYSTEMS
Author(s):    
Andrew Diniz da Costa, Carlos J. P. de Lucena, Viviane T. da Silva and Paulo Alencar
Abstract:  
Open multi-agent systems are societies with autonomous and heterogeneous agents that can work together to achieve similar or different goals. Agents executing in such systems may not be able to achieve their goals due to failures during system execution. This paper’s main goals are to understand why such failures occurred and what can be done to remediate the problem. The distributed, dynamic and open nature of multi-agent systems calls for a new form of failure handling approach to address its unique requirements, which involves both diagnosing specific failures and recommending alternative plans for successful agent execution and goal attainment. In this paper, we discuss solutions to the main challenges of creating a system that can perform diagnoses and provide recommendations about agent executions to support goal attainment, and propose a hybrid diagnostic-recommendation framework that provides support for methods to address such challenges.

 
Title:    
AN EXTENDED MASTER WORKER MODEL FOR A DESKTOP GRID COMPUTING PLATFORM (QADPZ)
Author(s):  
Monica Vlădoiu and Zoran Constantinescu
Abstract:  
In this paper we first briefly present QADPZ, an open source platform for heterogeneous desktop grid computing, which enables users from a local network (organization-wide) or the Internet (volunteer computing) to share their resources. Users of the system can submit compute-intensive applications, which are then automatically scheduled for execution. The scheduling is based on the hardware and software requirements of the application. Users can later monitor and control the execution of the applications. Each application consists of one or more tasks. Applications can be independent, when the composing tasks do not require any interaction, or parallel, when the tasks communicate with each other during the computation. QADPZ uses a master-worker model that is improved with several refined capabilities: push of work units, pipelining, sending several work units at a time, an adaptive number of workers, an adaptive timeout interval for work units, and the use of multithreading, all presented further in this paper. These improvements are meant to increase the performance and efficiency of such applications.
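The push-of-work-units and batching refinements mentioned above can be illustrated with a generic master-worker sketch. This is a simplified Python illustration, not QADPZ code; the worker count, batch size and the squaring "computation" are placeholders.

```python
import queue
import threading

# Simplified master-worker sketch (not QADPZ code). The master pushes work
# units in batches rather than handing them out one request at a time.

def run_master_worker(tasks, n_workers=2, batch_size=3):
    work, results = queue.Queue(), queue.Queue()

    # master: push several work units at a time
    batch = []
    for task in tasks:
        batch.append(task)
        if len(batch) == batch_size:
            work.put(batch)
            batch = []
    if batch:
        work.put(batch)
    for _ in range(n_workers):
        work.put(None)  # one shutdown marker per worker

    def worker():
        while True:
            b = work.get()
            if b is None:
                break
            for task in b:
                results.put(task * task)  # stand-in for the real computation

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    out = []
    while not results.empty():
        out.append(results.get())
    return sorted(out)
```

Batching amortizes the per-request overhead between master and workers, which is one of the reasons such refinements improve efficiency.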

 
Title:  
THREATS TO THE TRUST MODEL OF MOBILE AGENT PLATFORMS
Author(s):  
Michail Fragkakis and Nikolaos Alexandris
Abstract:  
Mobile agent systems employ a number of security features to address various threats. Despite these mechanisms, they still have to make certain assumptions about the trustworthiness of other entities within the agent system. This paper presents the ways in which mobile agent architectures address important threats concerning their trust model, by comparing the behaviour of four major mobile agent platforms. The conclusions drawn are then used to point out deficiencies of current technology and highlight issues that need to be addressed by future research.

 
Title:  
OPTIMIZING SKELETAL STREAM PROCESSING FOR DIVIDE AND CONQUER
Author(s):  
Michael Poldner and Herbert Kuchen
Abstract:  
Algorithmic skeletons aim to simplify parallel programming by providing recurring forms of program structure as predefined components. We present a new distributed task-parallel skeleton for a very general class of divide and conquer algorithms for MIMD machines with distributed memory. Our approach combines the skeleton's internal task parallelism with stream parallelism. It is compared to alternative topologies for a task-parallel divide and conquer skeleton with respect to their aptitude for solving streams of divide and conquer problems. Based on experimental results for matrix chain multiplication problems, we show that our new approach enables better processor load and memory utilization of the engaged solvers, and reduces communication costs.

 
Title:  
AN EXTENSION OF PUBLISH/SUBSCRIBE FOR MOBILE SENSOR NETWORKS
Author(s):  
Hiroki Saito
Abstract:  
The miniaturization of computing, sensing and wireless communication devices enables the development of wireless sensor networks (WSNs). One interesting research direction in sensor networks is the use of mobile nodes. The benefit of mobile sensor nodes is the ability to cover a wide-ranging area with a small number of nodes. Despite the rapid development of network protocols for mobile sensor nodes, application platforms for mobile sensor nodes have not been much discussed. In this context, the publish/subscribe model is a reasonable fit for sensor networks. Publish/subscribe has become a prevalent paradigm for delivering data/events from publishers (data/event producers) to subscribers (data/event consumers) across large-scale distributed networks. In sensor networks, a user who is interested in a specific location and attributes can send a subscription to the system to receive all desired events. This paper proposes a novel scheme that allows us to control sensor nodes in a location-based publish/subscribe system. In our scheme, sensor nodes can be deployed to the most effective location for event delivery.
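A location-based subscription of the kind the abstract describes can be sketched as follows. The data model here is an assumption made for illustration, not the paper's schema: a subscription is a circular area of interest plus a set of required attributes, and an event carries a location and attribute values.

```python
import math

# Illustrative location-based publish/subscribe matching (assumed data model,
# not the paper's schema).

class Subscription:
    def __init__(self, center, radius, attributes):
        self.center, self.radius = center, radius
        self.attributes = set(attributes)

def matches(sub, event):
    """event: dict with 'location' (x, y) and 'attributes' (name -> value)."""
    (x, y), (cx, cy) = event["location"], sub.center
    in_area = math.hypot(x - cx, y - cy) <= sub.radius
    return in_area and sub.attributes <= set(event["attributes"])

def publish(event, subscriptions):
    # deliver the event to every subscriber whose area and attributes match
    return [s for s in subscriptions if matches(s, event)]
```

A broker built on this matching function would forward each sensed event only to subscribers interested in both the event's region and its measured attributes.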

 
Title:  
TOWARDS A MULTI-AGENT ARCHITECTURE FOR WEB APPLICATIONS
Author(s):  
Tiago Garcia and Luís Morgado
Abstract:  
In this paper we propose an approach that integrates multi-agent system architectures and service-oriented architectures to address web application modelling and implementation. An adaptation of the common three-layer architecture is used, with the intervening entities being agents and multi-agent societies. To address the specificity of web application subsystems, three distinct agent types are proposed, each with specific concerns. A model-driven approach is proposed to concretize the mapping between agent-based and service-based layers.

 
Title:  
ALIGNING AGENT COMMUNICATION PROTOCOLS - A Pragmatic Approach
Author(s):  
Maricela Bravo and Martha Coronel
Abstract:  
Nowadays, there is a clear trend towards using common ontologies to support communication interoperability between multiple heterogeneous agents over the Internet. An important task that must be solved before implementing ontology-based solutions is the identification of semantic relations to establish alignments between communication primitives. A frequent methodology for aligning different communication primitives consists of processing definitions provided by human developers, based on syntactic classification algorithms and semantic enhancement of concepts. We think that the information provided by human developers represents an important source for classification. However, to obtain real semantics, we believe that a better approach would analyze the pragmatic usage of communication primitives in the communication protocol. In this paper we present a pragmatic approach for aligning communication primitives, considering their usage in the protocol. To evaluate our solution we compare the resulting relations and show that our approach provides more accuracy in relating communication primitives.

 
Title:  
JAVA NIO FRAMEWORK - Introducing a High-performance I/O Framework for Java
Author(s):  
Ronny Standtke and Ulrich Ultes-Nitsche
Abstract:  
A new input/output (NIO) library that provides block-oriented I/O was introduced with Java v1.4. Because of its complexity, creating network applications with the Java NIO library has been very difficult, and built-in support for high-performance, distributed and parallel systems was missing. Parallel architectures are now becoming the standard in computing, and Java network application programmers need a framework to build upon. In this paper, we introduce the Java NIO Framework, an extensible programming library that hides most of the NIO library details and at the same time provides support for secure and high-performance network applications. The Java NIO Framework is already used by well-known organisations, e.g. the U.S. National Institute of Standards and Technology, and is running successfully in a distributed computing framework with more than 1000 nodes.

 
Title:  
A SURVEY OF SENSOR NETWORK AND RELATED ROUTING PROTOCOLS
Author(s):  
O. P. Vyas, M. K. Tiwari and Chandresh Pratap Singh
Abstract:  
Recent advances in wireless sensor networks have led to many new protocols specifically designed for sensor networks, where energy awareness is an essential consideration. Most of the attention, however, has been given to routing protocols, since they may differ depending on the application and network architecture. A sensor network is formed by tiny sensor nodes that are severely constrained in energy, storage capacity and computing power. The prominent task in such a network is to design efficient routing protocols that prolong node lifetime. In this paper, we first analyze the requirements of sensor networks and their architecture. Then, we survey the existing routing protocols for sensor networks and present a critical analysis of these protocols. At the end of the paper, we compare and contrast these protocols, and we conclude with open research issues.

 
Title:  
BUILDING SCALABLE DATA MINING GRID APPLICATIONS - An Application Description Schema and Associated Grid Services
Author(s):  
Vlado Stankovski and Dennis Wegener
Abstract:  
Grid-enabling existing stand-alone data mining programs and other resources, such as data and computational servers, is motivated by the possibility of sharing them via local and wide area networks. Grid-enabled resource sharing may facilitate novel, powerful, distributed data mining applications. Its benefits may include improved effectiveness, efficiency, wider access and better use of existing resources. In this paper, we investigate the problem of how to grid-enable existing data mining programs. The presented solution is a simple procedure, which was developed by the XYZ project. The actual data mining program (i.e. a batch-style executable) is uploaded to a grid server, and an XML document that describes the program is prepared and registered with the underlying grid information services. The XML document conforms to an Application Description Schema and is used to facilitate discovery and execution of the program in the grid environment. Over 20 stand-alone data mining programs have already been grid-enabled using the XYZ system. Using Triana, a workflow editor and manager that represents the end-user interface to the system, it is possible to combine grid-enabled data mining programs into complex distributed data mining applications.

 
Title:  
THE FUTURE OF MULTIMEDIA DISTRIBUTED SYSTEMS - IPTV Frameworks
Author(s):  
Oscar Martinez Bonastre, Lucas Lopez and Antonio Peñalver
Abstract:  
As distributed systems scale up and are deployed into increasingly sensitive settings, demand is rising for a new generation of communications middleware in support of application-level computing. Thus, knowledge of distributed systems and communications middleware has become essential in today's computing environment. Additionally, multimedia distributed systems are confronting a wide range of challenges associated with limits of the prevailing service-oriented architectures and platforms. In this position paper, the authors argue that the future of multimedia distributed systems will pass through new techniques to stream data at high rates to groups of recipients, e.g., collaboration systems, computer gaming, embedded control systems and other media delivery systems. To be more specific about promising applications for the future of multimedia distributed systems, the authors have selected a hot topic: Internet Protocol Television (IPTV). The authors present Tele-ed IPTV, an authoring tool they have developed to distribute multimedia content to TV sets. This position paper argues that the future of multimedia distributed systems will pass through IPTV frameworks which interconnect systems that have previously been relatively incompatible.

 
Title:  
NORMATIVE VIRTUAL ENVIRONMENTS - Integrating Physical and Virtual under the One Umbrella
Author(s):  
Anton Bogdanovych, Simeon Simoff and Marc Esteva
Abstract:  
The paper outlines a normative approach to the design of distributed applications that must simultaneously engage a number of environments (i.e. form-based interfaces, 3D Virtual Worlds, the physical world). The application of the described ideas is illustrated on the example of a fish market, an application that can simultaneously be accessed by people from the physical world, people using form-based interfaces, and people embodied in a 3D Virtual World as avatars.

 
Area 4 - Information Systems and Data Management  
Title:  
A TOOL FOR MANAGING DOMAIN KNOWLEDGE IN INTELLIGENT TUTORING SYSTEMS
Author(s):  
Panayiotis Kyriakou, Ioannis Hatzilygeroudis and John Garofalakis
Abstract:  
Domain knowledge (DM) is a basic part of an intelligent tutoring system (ITS). DM usually includes information about the concepts the ITS is dealing with and the teaching material itself, which can be considered as a set of learning objects (LOs). LOs are described by a data set called learning object metadata. Concepts are usually organized in a network, called a concept network or map. Each concept is associated with a number of LOs. In this paper, we present a tool for managing both types of information in DM: creating and editing (a) a concept network and (b) learning object metadata. Additionally, the tool can produce corresponding XML descriptions for each learning object metadata. Finally, it provides facilities for helping tutors in organizing and composing their lessons. Existing tools do not offer all the above capabilities.

 
Title:  
GENERATION OF ERP SYSTEMS FROM REA SPECIFICATIONS
Author(s):  
Nicholas Poul Schultz-Møller, Christian Hølmer and Michael R. Hansen
Abstract:  
We present an approach to the construction of Enterprise Resource Planning (ERP) systems based on the Resources, Events and Agents (REA) ontology. Though this framework deals with processes involving exchange and flow of resources, the conceptual models have high-level graphical representations describing what the major entities are rather than how they engage in computations. We show how to develop a declarative, domain-specific language on the basis of REA, and for this language we have developed a tool which can automatically generate running web applications. A main contribution is a proof-of-concept result showing that business-domain experts can, using a declarative, REA-based domain-specific language, generate their own applications without worrying about implementation details. In order to have a well-defined domain-specific language, a formal model of REA has been developed using the specification language Object-Z. This formalization led to clarifications as well as the introduction of new concepts. The compiler for our language is written in Objective CAML, and Ruby on Rails was used as the implementation platform. The aim of this paper is to give an overview of the whole construction of a running application on the basis of a REA specification.

 
Title:  
A STRATEGIC ANALYTICS METHODOLOGY
Author(s):  
Marcel van Rooyen and Simeon J. Simoff
Abstract:  
Businesses are experiencing difficulties with integrating data mining analytics with decision-making and action. At present, two data mining methodologies play a central role in enabling data mining as a process. However, the results of reflecting on the application of these methodologies in real-world business cases against specific criteria indicate that both methodologies provide limited integration with business decision-making and action. In this paper we demonstrate the impact of these limitations on a Telco customer retention management project for a global mobile phone company. We also introduce a data mining and analytics project methodology with improved business integration – the Strategic Analytics Methodology (SAM). The advantage of the methodology is demonstrated through its application in the same project, and comparison of the results.

 
Title:  
A PROCESS ENGINEERING METHOD BASED ON ONTOLOGY AND PATTERNS
Author(s):  
Charlotte Hug, Agnès Front and Dominique Rieu
Abstract:  
Many different process meta-models offer different viewpoints of the same process: activity-oriented, product-oriented, decision-oriented, context-oriented and strategy-oriented. However, the complementarity between their concepts is not explicit and there is no consensus about the concepts themselves. This leads to process meta-models that are inadequate for organization needs, so the instantiated models do not correspond to the specific demands and constraints of the organizations or projects. Nevertheless, method engineers should be able to build process meta-models according to specific organization needs. We propose a method to build unified, fitted, multi-viewpoint process meta-models. A process domain ontology and patterns are the basis of the Process Engineering Method, described as a pattern system to standardize the different solutions proposed.

 
Title:  
COMPRESSED DATABASE STRUCTURE TO MANAGE LARGE SCALE DATA IN A DISTRIBUTED ENVIRONMENT
Author(s):  
B. M. Monjurul Alom, Frans Henskens and Michael Hannaford
Abstract:  
Loss-less data compression is attractive in database systems as it may facilitate query performance improvement and storage reduction. Although there are many compression techniques which handle the whole database in main memory, problems arise when the amount of data increases gradually over time, and also when the data has high cardinality. Management of a rapidly evolving large volume of data in a scalable way is very challenging. This paper describes a disk based single vector large data cardinality approach, incorporating data compression in a distributed environment. The approach provides substantial storage performance improvement compared to other high performance database systems. The compressed database structure presented provides direct addressability in a distributed environment, thereby reducing retrieval latency when handling large volumes of data.
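Direct addressability under loss-less compression, as mentioned above, can be illustrated with simple dictionary encoding. This is a generic sketch of the technique, not the paper's structure: each value is replaced by a fixed-width code, so any row can be fetched by index without scanning or decompressing the whole column.

```python
# Generic dictionary-encoding sketch (not the paper's structure): values are
# replaced by fixed-width codes, so the i-th row is reachable by a single
# array lookup -- direct addressability without decompression.

class CompressedColumn:
    def __init__(self, values):
        self.dictionary = sorted(set(values))            # distinct values, stored once
        index = {v: i for i, v in enumerate(self.dictionary)}
        self.codes = [index[v] for v in values]          # one small code per row

    def __getitem__(self, row):
        return self.dictionary[self.codes[row]]          # direct addressing
```

Because every code has the same width, the position of row `i` is pure offset arithmetic; high-cardinality data mainly grows the dictionary, not the per-row cost.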

 
Title:  
RELAXING CORRECTNESS CRITERIA IN DATABASE REPLICATION WITH SI REPLICAS
Author(s):  
J. E. Armendáriz-Íñigo, J. R. González de Mendívil, J. R. Garitagoitia, J. R. Juárez-Rodríguez, F. D. Muñoz-Escoí and L. Irún-Briz
Abstract:  
The concept of Generalized Snapshot Isolation (GSI) has been recently proposed as a suitable extension of conventional Snapshot Isolation (SI) for replicated databases. In GSI, transactions may use older snapshots instead of the latest snapshot required in SI, providing better performance without significantly increasing the abortion rate when write/write conflicts among transactions are low. We study and formally prove a sufficient condition that replication protocols with SI replicas following the deferred update technique must obey to achieve GSI: they must provide global atomicity and commit update transactions in the very same order at all sites. However, as this is a sufficient condition, it is possible to obtain GSI by relaxing certain assumptions about the commit ordering of certain update transactions.
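The sufficient condition (global atomicity plus identical commit order at all sites) can be pictured with a toy model. The code below is our sketch for illustration, not the paper's protocol; it simply applies update transactions at every replica in one agreed order, such as the delivery order of a total-order broadcast.

```python
# Toy model (illustration only, not the paper's protocol) of the sufficient
# condition: update transactions are committed atomically and in the very
# same order at every replica.

class Replica:
    def __init__(self):
        self.state = {}
        self.log = []                 # commit order, for checking the property

    def commit(self, txn_id, writes):
        self.state.update(writes)     # atomic: all writes applied together
        self.log.append(txn_id)

def total_order_deliver(replicas, transactions):
    """transactions: (txn_id, writes) pairs in one globally agreed order."""
    for txn_id, writes in transactions:
        for replica in replicas:
            replica.commit(txn_id, writes)
```

After delivery, every replica's commit log is identical, which is exactly the ordering guarantee the condition demands.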

 
Title:  
ITAIPU DATA STREAM MANAGEMENT SYSTEM - A Stream Processing System with Business Users in Mind
Author(s):  
Azza Abouzied, Jacob Slonim and Michael McAllister
Abstract:  
Business Intelligence (BI) provides enterprise decision makers with reliable and holistic business information. Data Warehousing systems typically provide accurate and summarized reports of the enterprise’s operation. While this information is valuable to decision makers, it remains an after-the-fact analysis. Just-in-time, finer-grained information is necessary to enable decision makers to detect opportunities or problems as they occur. Business Activity Monitoring (BAM) is the technology that provides right-time analysis of business data. The purpose of this paper is to describe the requirements of a BAM system, establish the relation of BAM to Data Stream Management Systems (DSMS) and describe the architecture and design challenges we faced in building the Itaipu system: a DSMS developed for BAM end-users.

 
Title:  
FORMATIVE USER-CENTERED USABILITY EVALUATION OF AN AUGMENTED REALITY EDUCATIONAL SYSTEM
Author(s):  
Costin Pribeanu, Alexandru Balog and Dragoş Daniel Iordache
Abstract:  
The mix of real and virtual requires appropriate interaction techniques that have to be evaluated with users in order to avoid usability problems. Formative usability evaluation aims at finding usability problems as early as possible in the development life cycle and is suitable to support the development of novel interactive systems. This work presents an approach to user-centered evaluation of a Biology scenario developed on an Augmented Reality educational platform. The evaluation was carried out in connection with a summer school held within the ARiSE research project. The basic idea was to perform usability evaluation twice. First, we conducted user testing with a small number of students during the summer school in order to get fast feedback from users with good knowledge of Biology. Then, we repeated the user testing in different conditions and with a relatively larger number of representative users. In this paper we describe both experiments and compare the usability evaluation results.

 
Title:  
EVALTOOL - A Flexible Environment for the Capability Assessment of Software Processes
Author(s):  
Tomás Martínez-Ruiz, Eduardo León-Pavón, Félix García, Mario Piattini and Francisco J. Pino
Abstract:  
Software process improvement is an important aspect of achieving capable processes, so organizations are obviously concerned about it. However, to improve a software process it is necessary to assess it, in order to identify its weaknesses and strengths. The assessment may follow any given assessment process, and the processes of the organization may likewise follow any particular process model. The goal of this work is to provide an environment that allows us to carry out assessments according to various process assessment models, on several process reference models. We have developed an environment composed of two components: one generates the database schema for storing the process reference model and assessment information, and the other assesses the process with reference to this information, generating results in several formats to make it possible to interpret the data. With this environment, assessment of a software process is an easy task, whichever assessment process is used, and regardless of the process model used in the organization.

 
Title:  
DATABASE VERSION CONTROL - A Software Configuration Management Approach to Database Version Control
Author(s):  
Stephen Mc Kearney and Konstantina Lepinioti
Abstract:  
This paper introduces a database configuration management tool, called DBVersion, that provides database developers with many of the benefits of source code control systems and integrates with software configuration tools such as Subversion. While there has been a lot of research in software configuration management and source code control, little of the work has investigated database configuration issues. DBVersion’s main contribution is to allow database developers to use working practices such as version control, branching and concurrent working that have proved successful in traditional software development.

 
Title:  
FUZZY TIME REPRESENTATION AND HANDLING IN A RELATIONAL DB
Author(s):  
N. Marín, J. M. Medina, O. Pons and M. C. Garrido
Abstract:  
The primary aim of Temporal Databases (TDB) is to offer a common framework to those DB applications that need to store or handle temporal data of different natures or sources, since they unify the concept of time with respect to its meaning, its representation and its manipulation. At first sight, incorporating time into a DB may seem a direct and even simple task; on the contrary, it is quite complex, because not only must new structures and specific operators be included, but the semantics of the classical manipulation statements (insert, update, delete) must also change when temporal data are present. That is the case, for example, of the update operation which, in a temporal setting, does not consist of overwriting the contents of a specified tuple, but of closing it by means of a special timestamp and adding a new tuple with the updated data. In this paper we deal with the problem of updating a tuple when time is expressed by means of a fuzzy interval of dates. Throughout the text, we show that the delete and insert operations are particular cases of the update process, and they are therefore also covered in the paper.
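The close-and-insert semantics of a temporal update described in the abstract can be sketched with crisp dates (the paper's fuzzy date intervals are beyond this illustration); the table and column names here are hypothetical:

```python
import sqlite3

# Sketch of the temporal "update" described above: instead of
# overwriting a tuple, close the current one with an end timestamp
# and insert a new tuple carrying the updated data.
# NULL in valid_to marks the currently open (valid "now") version.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE salary (
    emp TEXT, amount INTEGER, valid_from TEXT, valid_to TEXT)""")

def temporal_update(conn, emp, new_amount, when):
    # Close the currently open tuple instead of modifying it in place.
    conn.execute(
        "UPDATE salary SET valid_to = ? WHERE emp = ? AND valid_to IS NULL",
        (when, emp))
    # Insert the new version, open-ended from `when` on.
    conn.execute(
        "INSERT INTO salary VALUES (?, ?, ?, NULL)",
        (emp, new_amount, when))

conn.execute("INSERT INTO salary VALUES ('ann', 100, '2008-01-01', NULL)")
temporal_update(conn, "ann", 120, "2008-06-01")
rows = conn.execute(
    "SELECT amount, valid_from, valid_to FROM salary ORDER BY valid_from"
).fetchall()
# Two versions now coexist: the closed historical tuple and the open current one.
```

A temporal delete is then the first half of this operation (closing the open tuple without inserting a successor), which is why the paper treats delete and insert as special cases of update.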

 
Title:  
NOTES ON THE ARCHITECTURAL DESIGN OF TMINER - Design and Use of a Component-based Data Mining Framework
Author(s):  
Fernando Berzal, Juan Carlos Cubero and Aída Jiménez
Abstract:  
This paper describes the rationale behind some of the key design decisions that guided the development of the TMiner component-based data mining framework. TMiner is a flexible framework that can be used as a stand-alone tool or integrated into larger Business Intelligence (BI) solutions. It is a general-purpose component-based system designed to support the whole KDD process within a single framework and thus facilitate the implementation of complex data mining scenarios.

 
Title:  
ASPY - An Access-Logging Tool for JDBC Applications
Author(s):  
A. Torrentí-Román, L. Pascual-Miret, L. Irún-Briz, S. Beyer and F. D. Muñoz-Escoí
Abstract:  
When different developer teams collaborate in the design and implementation of a large, distributed application, some care should be taken regarding access to persistent data, since different components might use their own transactions and these might collide quite often, generating undesired blocking intervals. Additionally, when third-party libraries are used, they can provide unclear descriptions of their functionality, and programmers might mistakenly use some of their operations. An access logger can be useful in both cases, recording the statements actually sent to the database together with their results. Aspy is a tool of this kind, developed as a JDBC-driver wrapper for Java applications. It is able to save to a file the list of calls received by the JDBC driver, registering their parameters, starting time, completion time and either the results they obtained or the exceptions they raised. With such information, it is easy to identify common errors in database accesses and the set of transactions involved in blocking situations caused by a wrong application design. We discuss three different techniques that were used for implementing Aspy, comparing their pros and cons.
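The wrapper idea can be sketched as follows (Aspy itself wraps a JDBC driver in Java; this Python DB-API analogue is only illustrative): a cursor is wrapped so that every statement is recorded with its SQL text, parameters, timings and outcome.

```python
import sqlite3
import time

class LoggingCursor:
    """Wraps a DB-API cursor and records each execute() call with its
    SQL text, parameters, start/end times, and result status or raised
    exception, mirroring the driver-wrapper idea described above."""

    def __init__(self, cursor, log):
        self._cursor, self._log = cursor, log

    def execute(self, sql, params=()):
        start = time.time()
        try:
            result = self._cursor.execute(sql, params)
            self._log.append((sql, params, start, time.time(), "ok"))
            return result
        except Exception as exc:
            self._log.append((sql, params, start, time.time(), repr(exc)))
            raise

    def __getattr__(self, name):
        # Delegate everything else (fetchall, close, ...) unchanged.
        return getattr(self._cursor, name)

log = []
cur = LoggingCursor(sqlite3.connect(":memory:").cursor(), log)
cur.execute("CREATE TABLE t (x INTEGER)")
cur.execute("INSERT INTO t VALUES (?)", (1,))
# log now holds one entry per statement, with parameters and timings.
```

Because the wrapper exposes the same interface as the wrapped cursor, application code needs no changes, which is the property that makes this kind of logging unintrusive.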

 
Title:  
AUTONOMOUS DATA QUALITY MONITORING AS A PROPERTY OF CODA ENTERPRISE ARCHITECTURE
Author(s):  
Tereska Karran
Abstract:  
Organisations are driven to meet data quality standards by both external pressures and by internal quality processes. However, it is difficult to add these to existing systems as part of enterprise architecture. Moreover, where business intelligence is delivered autonomously, it is difficult to implement efficient data quality processes. We discuss the value of making data quality part of enterprise architecture using CODA and outline how this can be implemented by adding autonomous processing to specific architectural elements.

 
Title:  
STORING SEMISTRUCTURED DATA INTO RELATIONAL DATABASE USING REFERENCE RELATIONSHIP SCHEME
Author(s):  
B. M. Monjurul Alom, Frans Henskens and Michael Hannaford
Abstract:  
The most dominant data format for data processing on the Internet is the semistructured data form termed XML. XML data has no fixed schema; it evolves and is self-describing, which results in management difficulties compared to, for example, relational data. This paper presents a reference relationship scheme that encompasses parent and child reference relations to store XML data in a relational view, and that provides improved storage performance. We present an analytical comparison of the scheme with other standard methods of conversion from XML to relational form. A relational-to-XML conversion algorithm that translates the relational data back into the original XML form is also presented.

 
Title:  
DETERMINING SEVERITY AND RECOMMENDATIONS IN PROCESS NON-CONFORMANCE INSTANCES
Author(s):  
Sean Thompson and Torab Torabi
Abstract:  
A variety of frameworks and methodologies aimed at dealing with non-conformance in processes has been presented in the literature. These methodologies seek to find discrepancies between process reference models and data returned from instances of process enactments. They range from methodologies aimed at preventing deviations and inconsistencies in workflow and process support systems to the mining and comparison of observed and recorded process data. What has not been presented in the literature thus far is a methodology for explicitly discerning the severity of instances of non-conformance once they are detected. Knowing how severe an instance of non-conformance might be, and thus being aware of the possible consequences it may have on the process outcome, can be helpful in maintaining and protecting process quality. Likewise, a mechanism for using this information to provide recommendations or suggested remedial actions for process improvement has not been explored either. In this paper we present a framework that addresses both of these issues. A case study is also presented to evaluate the feasibility of the framework.

 
Title:  
TOWARDS A COMBINED APPROACH TO FEATURE SELECTION
Author(s):  
Camelia Vidrighin Bratu and Rodica Potolea
Abstract:  
Feature selection is an important step in any data mining process, for many reasons. In this paper we consider improving prediction accuracy to be the main goal of a feature selection method. We focus on an existing 3-step formalism comprising a generation procedure, an evaluation function and a validation procedure. The performance evaluations have shown that no individual 3-tuple (generation, evaluation and validation procedure) achieves the best performance on every dataset with every learning algorithm. Moreover, the experimental results suggest the possibility of a combined approach to the feature selection problem. So far we have experimented with combining several generation procedures, but we believe that the evaluation functions can also be successfully combined.

 
Title:  
A COOPERATIVE AND DISTRIBUTED CONTENT MANAGEMENT SYSTEM
Author(s):  
C. Noviello, M. Mango Furnari and P. Acampa
Abstract:  
In this paper the authors address some methodological and technical issues in managing collections of digital documents published on the web by many different stakeholders. To cope with these problems, the notions of document, cooperative knowledge community and content knowledge authority are introduced. The architecture of a distributed and cooperative content management system is then presented. A set of methodologies and tools for organizing the document space around the notion of a contents community was developed. Each content provider publishes a set of data model interpreters to collect, organize and publish content through a set of cooperative content management system nodes glued together by a semantic-web-oriented middleware. These methodologies and software were deployed in a prototype connecting about 100 museums spread across the Campania region of Italy.

 
Title:  
INTERTRASM - A Depth First Search Algorithm for Mining Intertransaction Association Rules
Author(s):  
Dan Ungureanu and Alexandru Boicea
Abstract:  
In this paper we propose an efficient method for mining frequent intertransaction itemsets. Our approach consists of mining maximal frequent itemsets (MFI) by extending the SmartMiner algorithm to the intertransaction case. We have called the new algorithm InterTraSM (Inter Transaction Smart Miner). Because it uses depth-first search, the memory needed by the algorithm is reduced; a strategy for passing tail information to a node, combined with a dynamic reordering heuristic, leads to improved speed. Experiments comparing InterTraSM to other existing algorithms for mining frequent intertransaction itemsets have revealed a significant gain in performance. Further development ideas are also discussed.

 
Title:  
MODELLING KNOWLEDGE FOR DYNAMIC SERVICE DEPLOYMENT - Autonomic Networks Modelling
Author(s):  
Gladys Diaz
Abstract:  
We are interested in the knowledge level of autonomic networks. In this paper we treat knowledge modelling for dynamic deployment purposes. We present an overview of modelling aspects in autonomic networks, in particular the description of the knowledge level, and address the specification and modelling of different notions in the context of the dynamic deployment of services. We propose a new and extensible object-oriented information model to represent the various concepts involved in this context. Our information model represents the notions of service, resource and profile, and also supports the management of network resources. The model is expressed in UML. With this model, we define a first view of the knowledge needed to manage resource allocation for the distribution of services.

 
Title:  
A CASE STUDY ON DOMAIN ANALYSIS OF SEMANTIC WEB MULTI-AGENT RECOMMENDER SYSTEMS
Author(s):  
Roberval Mariano, Rosario Girardi, Adriana Leite, Lucas Drumond and Djefferson Maranhão
Abstract:  
The huge amount of data available on the Web and its dynamic nature are the source of an increasing demand for information filtering applications such as recommender systems. The lack of semantic structure in Web data is a barrier to improving the effectiveness of this kind of application. This paper introduces ONTOSERS-DM, a domain model that specifies the common and variable requirements of recommender systems based on the ontology technology of the Semantic Web, using three information filtering approaches: content-based, collaborative and hybrid filtering. ONTOSERS-DM was modeled under the guidelines of MADEM, a methodology for Multi-Agent Domain Engineering, using the ONTOMADEM tool.

 
Title:  
WORKING TIME USAGE AND TRACKING IN A SMALL SOFTWARE DEVELOPMENT ORGANIZATION
Author(s):  
Lasse Harjumaa, Tytti Pokka, Heidi Moisanen and Jukka Sirviö
Abstract:  
This paper presents a study of working time usage in a small software development organization. The purpose of the study was twofold: first, we wanted to understand how software developers in the organization work, and second, we wanted to explore their attitudes toward different types of time tracking approaches. The aim was to provide practical suggestions of appropriate methods and tools for monitoring the developers' time. According to the results, working with computer tools occupies the overwhelming majority of working time, even though manual tasks and interruptions take some of it. Attitudes toward time tracking vary. Even though the developers in the case company do not feel threatened by time monitoring, neither do they feel that monitoring is necessary, which is interesting and challenging from the project management viewpoint. As a result, we suggest that the case company establish a lightweight, tool-based time tracking process and train the developers to use the system and report their working time accurately.

 
Title:  
TOWARDS COMPACT OPERATOR TREES FOR QUERY INDEXING
Author(s):  
Hagen Höpfner and Erik Buchmann
Abstract:  
Application areas like semantic caches or update relevancy checks require query based indexing: They use an algebra representation of the query operator tree to identify reusable fragments of former query results. This requires compact query representations, where semantically equivalent (sub-)queries are expressed with identical terms. Thus, rewriting queries for indexing is essentially different from well-researched algebraic query optimization. It is challenging to obtain query representations for indexing: Attributes and relations can be renamed, there are numerous ways to formulate equivalent selection predicates, and query languages like SQL allow a wide range of alternatives for joins and nested queries. In this short paper we present our first steps towards optimizing SQL-based query trees for indexing. In particular, we use both existing equivalence rules and new transformations to normalize the sub-tree structure of query trees. We optimize selection and join predicates, and we present an approach to obtain generic names for attributes and table aliases. Finally, we discuss the benefits and limitations of our intermediate results and give directions for future research.

 
Title:  
EFFICIENT SUPPORT COUNTING OF CANDIDATE ITEMSETS FOR ASSOCIATION RULE MINING
Author(s):  
Li-Xuan Lin, Don-Lin Yang, Chia-Han Yang and Jungpin Wu
Abstract:  
Association rule mining has gathered great attention in recent years due to its broad applications. Influential algorithms have been developed in two categories: (1) candidate-generation-and-test approaches such as Apriori, and (2) pattern-growth approaches such as FP-growth. However, they all suffer from the problems of multiple database scans and of having to set a minimum support threshold to prune infrequent candidates for processing efficiency. Reading the database multiple times is a critical problem for distributed data mining. Newer methods have been proposed, such as the FSE algorithm, but they still have the problem of taking too much space. We propose an efficient approach that uses a transformation method to perform support counting of candidate itemsets. We record all the itemsets that appear at least once in the transaction database, so users do not need to determine the minimum support in advance. Our approach reaches the same goal as the FSE algorithm with better space utilization. The experiments show that our approach is effective and efficient on various datasets.
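The thresholdless goal the abstract states, counting every itemset that occurs at least once, can be sketched by naive enumeration (the paper's transformation-based method is more space-efficient than this; the sketch only illustrates what is being counted):

```python
from itertools import combinations
from collections import Counter

def all_itemset_supports(transactions):
    """Count the support of every itemset occurring in at least one
    transaction, so no minimum-support threshold has to be fixed in
    advance. Exponential in transaction width, hence only a sketch of
    the goal, not of the paper's space-efficient method."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        # Enumerate every non-empty subset of the transaction.
        for k in range(1, len(items) + 1):
            for itemset in combinations(items, k):
                counts[itemset] += 1
    return counts

supports = all_itemset_supports([["a", "b", "c"], ["a", "c"], ["b"]])
# e.g. {"a","c"} occurs in two transactions, {"a","b","c"} in one.
```

With all supports available, any minimum-support threshold can be applied afterwards as a simple filter over `supports`, which is exactly why fixing it in advance becomes unnecessary.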

 
Title:  
USING THE STOCHASTIC APPROACH FRAMEWORK TO MODEL LARGE SCALE MANUFACTURING PROCESSES
Author(s):  
Benayadi Nabil, Le Goc Marc and Bouché Philippe
Abstract:  
Modeling the manufacturing process of complex products like electronic chips is crucial to maximizing production quality. The Process Mining methods developed over the last decade aim at modeling such manufacturing processes from the timed messages contained in the database of the supervision system of the manufacturing process. Such models can contain hundreds of manufacturing steps, making it difficult to apply the usual Process Mining algorithms. This paper proposes a method and a set of algorithms to model large-scale manufacturing processes producing complex products. The method applies the Stochastic Approach framework of knowledge discovery to Process Mining. A series of timed messages is considered as a sequence of discrete event class occurrences and is represented with a Markov chain, from which models are deduced by abductive reasoning. Because sequences can be very long, a notion of process phase based on a concept of equivalence class is defined to cut up the sequences so that a model of each phase can be produced locally. The model of the whole manufacturing process is then obtained by concatenating the models of the different phases. The paper presents the application of this method to model the electronic chip manufacturing process of the Rousset (France) plant of the STMicroelectronics Company.

 
Title:  
ONTOLOGY-BASED MEDIATION OF OGC CATALOGUE SERVICE FOR THE WEB - A Virtual Solution for Integrating Coastal Web Atlases
Author(s):  
Yassine Lassoued, Dawn Wright, Luis Bermudez and Omar Boucelma
Abstract:  
In recent years significant momentum has occurred in the development of Internet resources for managers, decision makers, scientists and members of the public interested in the coast. Chief among these has been the development of coastal web atlases (CWAs), based on web-enabled geographic information systems (GIS). While multiple benefits are derived from these tailor-made atlases (e.g., speedy access to multiple sources of coastal data and information; economic use of time by avoiding individual contact with different data holders), the potential exists to derive added value from the integration of disparate CWAs, to optimize decision making at a variety of levels and across themes. This paper describes the development of a semantic mediator prototype to provide a common access point to coastal data, maps and information from distributed CWAs. The prototype showcases how ontologies and ontology mappings can be used to integrate different heterogeneous and autonomous atlases (or information systems), using the Open Geospatial Consortium's Catalogue Services for the Web. Lessons learned from this prototype will help build regional atlases and improve decision support systems as part of a new International Coastal Atlas Network (ICAN).

 
Title:  
USING BITSTREAM SEGMENT GRAPHS FOR COMPLETE DESCRIPTION OF DATA FORMAT INSTANCES
Author(s):  
Michael Hartle, Friedrich-Daniel Möller, Slaven Travar, Benno Kröger and Max Mühlhäuser
Abstract:  
Manual development of format-compliant software components is complex, time-consuming and thus error-prone and expensive, as data formats are defined in semi-formal, textual specifications intended for human engineers. Existing approaches to the formal description of data formats remain at high-level descriptions and fail to describe phenomena such as compression or fragmentation that are especially common in multimedia file formats. As a stepping stone towards the description of data formats as a whole, this paper presents Bitstream Segment Graphs as a complete model of data format instances and uses PNG as an example where such a complete model is required.

 
Title:  
H-INDEX CALCULATION IN ENRON CORPUS
Author(s):  
Anton Timofieiev, Václav Snášel and Jiří Dvorský
Abstract:  
The development of modern technologies has expanded the possibilities for communication. Electronic communication systems make it possible to overcome traditional barriers to communication, such as distance. On this basis, new types of communities have emerged that no longer have geographical restrictions. The increasing popularity of electronic communities, among them LiveJournal and LiveInternet, as well as projects popular in the Russian-speaking part of the Internet such as Mamba, MirTesen, VKontakte and Odnoklassniki, makes the development of techniques for studying such social networks more relevant than ever. However, because the members of such communities are connected only by means of electronic communication, identifying these communities and their properties raises specific questions. Nevertheless, such communities can be represented as graphs, with objects as vertices and the communication channels between them as edges, so a number of problems can be solved using methods from graph theory.
In this paper, the application to such networks of methods originally proposed for measuring the importance of scientists' work, with the aim of determining the popularity of objects in a community, is described and demonstrated. The approach is demonstrated on the Enron corpus.
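The bibliometric measure referenced in the title is the h-index; a minimal computation over a node's per-contact message counts (the input being a plain list of counts is this sketch's assumption) might look like:

```python
def h_index(counts):
    """h-index: the largest h such that at least h items have a count
    of at least h. Originally a measure of a scientist's output; the
    abstract applies it to communication graphs, e.g. message counts
    per correspondent for a node in the Enron corpus."""
    counts = sorted(counts, reverse=True)
    h = 0
    while h < len(counts) and counts[h] >= h + 1:
        h += 1
    return h

# A node exchanging 10, 8, 5, 4 and 3 messages with its five contacts
# has h-index 4: four contacts with at least 4 messages each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```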

 
Title:  
MODEL FOR PEDAGOGICAL INDEXATION OF TEXTS FOR LANGUAGE TEACHING
Author(s):  
Mathieu Loiseau, Georges Antoniadis and Claude Ponton
Abstract:  
In this communication we expose the limitations of the main pedagogical resource description standards for describing raw resources, from the perspective of the pedagogical indexation of texts for language teaching. To do so, we draw on the testimony of language teachers regarding their practices. We then propose a model intended to overcome these limitations. This model is articulated around the notion of a text facet, which we introduce here.

 
Title:  
TOOL OF THE INTELLIGENCE ECONOMIC: RECOGNITION FUNCTION OF REVIEWS CRITICS - Extraction and Linguistic Analysis of Sentiments
Author(s):  
Grzegorz Dziczkowski and Katarzyna Wegrzyn-Wolska
Abstract:  
This paper describes the part of a recommender system designed for the recognition of movie reviews. Such a system allows the automatic collection, evaluation and rating of reviews and opinions of movies. First, the system searches for and retrieves texts presumed to be movie reviews from the Internet. Subsequently, it evaluates and rates these reviews. Finally, it automatically associates a numerical mark with each review. The goal of the system is to score the reviews and associate them with the users who wrote them. All of these data are input to the cognitive engine. Data from our base allow making the correspondences required by cognitive algorithms to improve advanced recommendation functionalities for e-business and e-purchase websites. Our system uses three different methods for classifying opinions from reviews. In this paper we describe the part of the system based on automatically identifying opinions using natural language processing.

 
Title:  
EVALUATING SCHEDULES OF ITERATIVE/INCREMENTAL SOFTWARE PROJECTS FROM A REAL OPTIONS PERSPECTIVE
Author(s):  
Vassilis C. Gerogiannis, Androklis Mavridis, Pandelis G. Ipsilandis and Ioannis Stamelos
Abstract:  
In iterative/incremental software projects, software is built in a sequence of iterations, each providing certain parts of the required functionality. To better manage an incremental delivery plan, iterations are usually performed within pre-specified time boxes. In previous work, we addressed the problem of optimizing the schedule of incremental software projects that follow an iterative, timeboxing process model (TB projects). We approached scheduling as a multi-criteria decision problem formulated as a linear programming model, aimed at overcoming some "rigid" simplifications of conventional timeboxing, where the duration of each time box/stage is fixed a priori. In this paper, we take this decision-making process one step further by applying real options theory to analyze the investment risks associated with each alternative scheduling decision. We identify two options in a TB project: the first is to stall (abandon) development at a pre-defined iteration, while the second is to continue (expand) development and deliver the full functionality. Thus, we provide the manager with the flexibility to decide on the most profitable (valued) combination of delivered functionalities at a certain iteration, under favourable or unfavourable conditions.

 
Title:  
A COMPLETENESS-AWARE DATA QUALITY PROCESSING APPROACH FOR WEB QUERIES
Author(s):  
Sandra de F. Mendes Sampaio and Pedro R. Falcone Sampaio
Abstract:  
Internet Query Systems (IQS) are information systems used to query the World Wide Web by finding data sources relevant to a given query and retrieving data from the identified data sources. They differ from traditional database management systems in that data to be processed need to be found by a search engine, fetched from remote data sources and processed taking into account issues such as the unpredictability of access and transfer rates, infinite streams of data, and the ability to produce partial results. Despite the powerful query functionality provided by internet query systems when compared to traditional search engines, their uptake has been slow partly due to the difficulty of assessing and filtering low quality data resulting from internet queries. In this paper we investigate how an internet query system can be extended to support data quality aware query processing. In particular, we illustrate the metadata support, XML-based data quality measurement method, algebraic query processing operators, and query plan structures of a query processing framework aimed at helping users to identify, assess, and filter out data regarded as of low completeness data quality for the intended use.

 
Title:  
EXTENSIONS TO THE OLAP FRAMEWORK FOR BUSINESS ANALYSIS
Author(s):  
Emiel Caron and Hennie Daniels
Abstract:  
In this paper, we describe extensions to the OnLine Analytical Processing (OLAP) framework for business analysis. This paper is part of our continued work on extending multi-dimensional databases with novel functionality for diagnostic support and sensitivity analysis. Diagnostic support offers the manager the possibility to automatically generate explanations for exceptional cell values in an OLAP database. This functionality can be built into conventional OLAP databases using a generic explanation formalism, which supports the work of managers in diagnostic processes. The objective is the identification of the specific knowledge structures and reasoning methods required to construct computerized explanations from multi-dimensional data and business models. Moreover, we study the consistency and solvability of OLAP systems. These issues are important for sensitivity analysis in OLAP databases. Often the analyst wants to know how some aggregated variable in the cube would change if a certain underlying variable were increased ceteris paribus (c.p.) by one extra unit or one percent in the business model or dimension hierarchy. For such analysis it is important that the system of OLAP aggregations remains consistent after a change is induced in some variable. For instance, missing data, dependency relations, and the presence of non-linear relations in the business model can cause a system to become inconsistent.

 
Title:  
NBU ADVANCED E-LEARNING SYSTEM
Author(s):  
Petar Atanasov
Abstract:  
This paper introduces the design and implementation of an information solution especially designed for the purposes of e-Learning at New Bulgarian University. The described architecture combines modern technologies with sound theoretical foundations. In addition, near-future research plans are examined for the creation of a data repository with a semantic file system and its relation to the system.

 
Title:  
COMPARISON OF K-MEANS AND PAM ALGORITHMS USING CANCER DATASETS
Author(s):  
Parvesh Kumar and Siri Krishan Wasan
Abstract:  
Data mining is a search for relationships and patterns that exist in large databases. Clustering is an important data mining technique. Because of the complexity and high dimensionality of gene expression data, the classification of disease samples remains a challenge. Hierarchical clustering and partitioning clustering are used to identify patterns of gene expression useful for the classification of samples. In this paper, we make a comparative study of two partitioning methods, namely k-means and PAM, for classifying cancer datasets.
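The essential difference between the two compared partitioning methods lies in the cluster representative: k-means uses the mean of the cluster (a centroid, possibly a synthetic point), while PAM uses an actual data point (a medoid), which makes it less sensitive to outliers. A minimal sketch of the two representatives, using squared Euclidean distance for simplicity:

```python
def centroid(points):
    """k-means cluster representative: the coordinate-wise mean,
    which need not coincide with any actual data point."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def medoid(points):
    """PAM cluster representative: the actual data point minimising
    the total (here squared Euclidean) distance to all other points."""
    def total_dist(p):
        return sum(sum((a - b) ** 2 for a, b in zip(p, q)) for q in points)
    return min(points, key=total_dist)

cluster = [(1.0, 1.0), (1.5, 1.0), (1.0, 1.5), (9.0, 9.0)]  # one outlier
c = centroid(cluster)  # the mean is pulled toward the outlier
m = medoid(cluster)    # the medoid stays on a real, central point
```

In high-dimensional gene expression data with noisy samples, this robustness of the medoid to outliers is the property that makes the k-means vs. PAM comparison interesting.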

 
Title:  
FINE-GRAINED PERFORMANCE EVALUATION AND MONITORING USING ASPECTS - A Case Study on the Development of Data Mining Techniques
Author(s):  
Fernando Berzal, Juan-Carlos Cubero and Aída Jiménez
Abstract:  
This paper illustrates how aspect-oriented programming techniques support some tasks whose implementation using conventional object-oriented programming would be extremely time-consuming and error-prone. In particular, we have successfully employed aspects to evaluate and monitor the I/O performance of alternative data mining techniques. Without having to modify the source code of the system under analysis, aspects provide an unintrusive mechanism to perform this kind of performance analysis. In fact, aspects let us probe a system implementation so that we can identify potential bottlenecks, detect redundant computations, and characterize system behavior.

 
Title:  
AUTONOMOUS NEWS PERSONALISATION (ANP)
Author(s):  
Mohammedsharaf Alzebdi and Tereska Karran
Abstract:  
This research explores some of the directions for improving the performance of personalised web usage mining applications. The study uses ANP (Autonomous News Personalisation) to provide personalised news to online newsreaders according to their interests. This is achieved within an intelligent web browser which monitors users' behaviour while browsing. Web usage mining techniques are applied to the site's access log files. These are first pre-processed and then data-mined using specific algorithms to extract the interests of each user. User profiles are created and maintained to store users' interests, ranked according to the reading frequency of news items by category and location. Profiles are refined continuously and adapt to users' behaviour. Besides being adaptive and completely autonomous, the system is expected to improve on existing performance in news retrieval and to provide higher-level personalisation. A system prototype has been implemented and tested using SQL Server 2005 to pre-process logs, data-mine the cleaned data, and maintain user profiles. The main system tasks have been demonstrated; further work will address the remaining issues.

 
Title:  
A MEDICAL INFORMATION SYSTEM TO MANAGE A CANCER DATABASE
Author(s):  
André Cid Ferrizzi, Toni Jardini, Leandro Rincon Costa, Jucimara Colombo, Paula Rahal, Carlos Roberto Valêncio, Edmundo Carvalho Mauad, Lígia Maria Kerr and Geraldo Santiago Hidalgo
Abstract:  
Cancer is the second main cause of death in Brazil and, according to statistics disclosed by INCA (the National Cancer Institute), 472,050 new cases of the disease were forecast for 2006. The storage and analysis of tumour tissues of various types, together with patients' clinical data, genetic profiles, disease characteristics and epidemiological data, may enable more precise diagnoses, providing more effective treatments with higher chances of curing cancer. In this paper we present a Web system with a client-server architecture, which manages a relational database containing all information relating to the tumour tissues and their location in freezers, patients, medical forms, physicians, users, and so on. Furthermore, the software engineering approach used to develop the system is also discussed.

 
Title:  
GLOBAL OBJECT INTEGRATION INFRASTRUCTURE SUPPLEMENTING SOA WITH DATA MANAGEMENT - Perspectives of Creation
Author(s):  
Vladimir V. Ovchinnikov, Yuri V. Vakhromeev and Pavel A. Pyatih
Abstract:  
The paper considers a way of data-centric object integration that supplements SOA with data management. The key aim of the proposed approach is to provide unlimited scalability of object integration into one object space, with a classic typification mechanism and general security and transaction management. Declarative publishing of objects in the space makes it possible to read, reference and modify any of them in a general, secure and transactional manner, regardless of where they are actually stored (RDBMS, XML, etc.).

 
Title:  
COOPERATIVE NEGOTIATION FOR THE PROVISIONS BALANCING IN A MULTI-AGENT SUPPLY CHAIN SYSTEM FOR THE CRISIS MANAGEMENT
Author(s):  
Hayfa Zgaya, David Tang and Slim Hammadi
Abstract:  
In recent years, logistics has become a performance criterion for organizational success, and the study of the Supply Chain (SC) is increasingly adopted to improve the competitiveness of companies. In previous work we proposed an approach that aims to reduce an emerging phenomenon of demand amplification called the Bullwhip Effect. In this paper, we present a model, based on the proposed approach, for cooperative negotiation for provision balancing in an SC system. The studied SC is a hierarchical system dedicated to crisis management. A multi-agent architecture is then proposed to design this distributed chain through interactive software agents. The simulation results presented in this paper demonstrate the importance of the interaction between the SC entities for ammunition balancing.

 
Title:  
VEGETATION INDEX MAPS OF ASIA TEMPORALLY SPLINED FOR CONSISTENCY THROUGH A HIGH PERFORMANCE AND GRID SYSTEM
Author(s):  
Shamim Akhter, Kento Aida and Yann Chemin
Abstract:  
A Vegetation Index Map provides crop density information over a precise region. Remote Sensing (RS) images are the basis for creating such maps, while decision-makers require Vegetation Index Maps at various in-country administrative levels.
However, RS images contain noise due to the influence of haze or cloud, especially in the rainy season. A temporal splining procedure such as Local Maximum Fitting (LMF) can be applied to RS images to ensure data consistency. Running the LMF procedure on a single computer takes an impractical amount of processing time (approx. 150 days) for an Asia regional RS image (46 bands/dates, 3932 rows, 11652 columns). Porting LMF to High Performance Computing (HPC) platforms provides a time optimization mechanism, and LMF has been implemented on cluster computers for this very purpose. Even so, single-cluster LMF processing did not complete within an acceptable time range. In this paper, the LMF processing methodology is improved to an acceptable processing time by combining data and task parallelization on multi-cluster Grids.

 
Title:  
DEVELOPING AND DEPLOYING DYNAMIC APPLICATIONS - An Architectural Prototype
Author(s):  
Georgios Voulalas and Georgios Evangelidis
Abstract:  
In our previous research we presented a framework for the development and deployment of web-based applications. This paper elaborates on the core components (functional and data) that implement the generic, reusable functionality. Code segments of the components are presented, along with a short sample application. In addition, we introduce some changes that are mainly driven by usability and performance improvements and that adhere to the principal rules of the framework's operation. Those changes enable us to extend the framework to other families of applications, apart from web-based business applications.

 
Title:  
A SENSE-MAKING APPROACH TO AGILE METHOD ADOPTION
Author(s):  
Ian Owens, Dave Sammon and John McAvoy
Abstract:  
As is often argued in the diffusion of innovation literature, the adoption of innovations can be hindered by the learning required to successfully deploy the technology or methodology. This paper reports on research in progress to develop a novel approach to Agile method adoption and introduces the use of sense-making workshops to facilitate improved understanding of the issues concerning Agile adoption.

 
Title:  
APPLYING PROBABILISTIC MODELS TO DATA QUALITY CHANGE MANAGEMENT
Author(s):  
Adriana Marotta and Raúl Ruggia
Abstract:  
This work focuses on the problem of managing quality changes in Data Integration Systems, in particular those which are generated due to quality changes at the sources. Our approach is to model the behaviour of sources and system data quality, and use these models as a basic input for DIS quality maintenance. In this paper we present techniques for the construction of quality behaviour models, as well as an overview of the general mechanism for DIS quality maintenance.

 
Title:  
TOWARDS ONLINE COMPOSITION OF PMML PREDICTION MODELS
Author(s):  
Diana Gorea
Abstract:  
The paper presents a general context in which composition of prediction models can be achieved within the boundaries of an online scoring system called DeVisa. The system provides its functionality via web services and stores the prediction models, represented in PMML, in a native XML database. A language called PMQL is defined, whose purpose is to process the PMML models and to express consumers' goals and the answers to those goals. The composition of prediction models can occur either implicitly, within the process of online scoring, or explicitly, in which case the consumer builds or trains a new model based on the existing ones in the DeVisa repository. The main scenarios that involve composition are adapted to the types of composition allowed in the PMML specification, i.e. sequencing and selection.

 
Title:  
HYBRID SYSTEM FOR DATA CLASSIFICATION OF DNA MICROARRAYS WITH GA AND SVM
Author(s):  
Mónica Miguélez, Juan Luis Pérez, Juan R. Rabuñal and Julián Dorado
Abstract:  
This paper proposes a Genetic Algorithm (GA) combined with a Support Vector Machine (SVM) for selecting and classifying data from DNA microarrays, with the aim of differentiating healthy from cancerous tissue samples. The proposed GA, by using an SVM-based fitness function, enables the selection of a group of genes that represent the absence or presence of cancerous tissue. The proposed method is tested on a data set related to a widely known cancer disease, breast cancer. The comparison shows that the results obtained with these combined techniques are better than those obtained with other techniques.
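The gene-selection scheme described above can be sketched as a simple genetic algorithm over bit masks. The sketch below is a hypothetical illustration only: the paper's SVM-based fitness is replaced by a synthetic stand-in that rewards a made-up set of "informative" genes, and all names and parameters are illustrative, not taken from the paper.

```python
import random

random.seed(0)

N_GENES = 50                   # hypothetical number of genes on the microarray
INFORMATIVE = {3, 7, 21, 30}   # synthetic "truly relevant" genes (stand-in for SVM accuracy)

def fitness(mask):
    """Stand-in for the SVM-based fitness: rewards selecting the
    informative genes while lightly penalising subset size."""
    chosen = {i for i, bit in enumerate(mask) if bit}
    return len(chosen & INFORMATIVE) - 0.01 * len(chosen)

def crossover(a, b):
    cut = random.randrange(1, len(a))   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in mask]

def ga(pop_size=40, generations=60):
    pop = [[random.randint(0, 1) for _ in range(N_GENES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]     # truncation selection keeps the best half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print(round(fitness(best), 2))
```

Because the elite half always survives, the best subset found is never lost between generations; the only design choice specific to the paper is plugging SVM classification accuracy in as `fitness`.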

 
Title:  
RBF NETWORK COMBINED WITH WAVELET DENOISING FOR SARDINE CATCHES FORECASTING
Author(s):  
Nibaldo Rodriguez, Broderick Crawford and Eleuterio Yañez
Abstract:  
This paper deals with time series of monthly sardine catches in the northern area of Chile. The proposed method combines a radial basis function neural network (RBFNN) with a wavelet denoising algorithm. Wavelet denoising is based on the stationary wavelet transform with a hard thresholding rule, and the RBFNN architecture is composed of linear and nonlinear weights, which are estimated using the separable nonlinear least squares method. The performance evaluation of the proposed forecasting model shows that 93% of the explained variance was captured by a parsimonious model.
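The hard thresholding rule mentioned above can be illustrated in a few lines. This is a generic sketch using the common "universal threshold" T = sigma * sqrt(2 ln N), not the authors' exact implementation, and it operates on a plain list standing in for one level of stationary-wavelet coefficients.

```python
import math

def hard_threshold(coeffs, sigma):
    """Hard thresholding: coefficients whose magnitude falls below the
    universal threshold T = sigma * sqrt(2 ln N) are zeroed; the rest are
    kept unchanged (soft thresholding would shrink them instead)."""
    t = sigma * math.sqrt(2 * math.log(len(coeffs)))
    return [c if abs(c) > t else 0.0 for c in coeffs]

# small coefficients (noise) vanish, large ones (signal) survive intact
coeffs = [0.1, -0.3, 5.2, 0.05, -4.8, 0.2, 0.0, 3.1]
print(hard_threshold(coeffs, sigma=0.5))
# → [0.0, 0.0, 5.2, 0.0, -4.8, 0.0, 0.0, 3.1]
```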

 
Title:  
A NOVEL METADATA BASED META-SEARCH ENGINE
Author(s):  
Jianhan Zhu, Dawei Song, Marc Eisenstadt and Cristi Barladeanu
Abstract:  
We present a novel meta-search engine called DYNIQX for metadata-based cross search, in order to study the effect of metadata in collection fusion. DYNIQX exploits the availability of metadata in academic search services such as PubMed and Google Scholar for fusing search results from heterogeneous search engines. Furthermore, metadata from these search engines are used for generating dynamic query controls, such as sliders and tick boxes, for users to filter search results.

 
Title:  
A DESCRIPTION METHOD FOR MULTI-AGENT SIMULATION MODEL UTILIZING TYPICAL ACTION PATTERNS OF AGENTS
Author(s):  
Taiki Enomoto, Gou Hatakeyama, Masanori Akiyoshi and Norihisa Komoda
Abstract:  
Recently, various tools for multi-agent simulation have been proposed. However, with every simulation tool, analysts who lack programming skills spend a lot of time developing programs, because the notation of simulation models is not sufficiently defined and the programming language varies between tools. To solve this problem, a programming environment that defines the notation of simulation models based on graph representation has been proposed. However, in this environment, we still need to write programs for the flow of events and the contents of agents' actions and effects. We therefore propose a description method for multi-agent simulation models utilizing typical action patterns of agents. In this method, users describe the contents of events based on typical action patterns, namely “interrogatives (4W1H) and verbs”, together with the flow of events. In this research, we conducted experiments comparing the time examinees needed to generate programs with a conventional method and with our programming environment. The experimental results show that the time to generate programs using our programming environment is less than with the conventional one.

 
Title:  
TURKISH QUESTION ANSWERING - Question Answering for Distance Education Students
Author(s):  
Burcu Yurekli, Ahmet Arslan, Hakan G. Senel and Ozgur Yilmazel
Abstract:  
This paper reports on our progress towards building a Turkish Question Answering System to be used by distance education students of Anadolu University. We have converted 205 electronic books in PDF format to text and extracted metadata from these documents. Currently we provide search capabilities over these documents. Details of the problems we faced and the solutions we came up with are provided. We also outline our plan for continuing and implementing a full Question Answering system for this domain, and present ways to automatically evaluate our system.

 
Area 5 - Knowledge Engineering  
Title:  
IMPROVING QUALITY OF RULE SETS BY INCREASING INCOMPLETENESS OF DATA SETS - A Rough Set Approach
Author(s):  
Jerzy W. Grzymala-Busse and Witold J. Grzymala-Busse
Abstract:  
This paper presents a new methodology to improve the quality of rule sets. We performed a series of data mining experiments on completely specified data sets. In these experiments we removed some specified attribute values, or, in other words, replaced such specified values by symbols of missing attribute values, and used these data for rule induction, while the original, complete data sets were used for testing. In our experiments we used the MLEM2 rule induction algorithm of the LERS data mining system, based on rough sets. Our approach to missing attribute values was based on rough set theory as well. Results of our experiments show that for some data sets and some interpretations of missing attribute values, the error rate was smaller than for the original, complete data sets. Thus, rule sets induced from some data sets may be improved by increasing the incompleteness of those data sets. It appears that by removing some attribute values, the rule induction system, forced to induce rules from the remaining information, may induce better rule sets.

 
Title:  
MODELING PROCESSES FROM TIMED OBSERVATIONS
Author(s):  
Marc Le Goc, Emilie Masse and Corinne Curt
Abstract:  
The purpose of this paper is to present the basis of a multimodeling approach to representing dynamic physical systems. The motivating idea is to use the same level of abstraction that an expert uses when diagnosing a dynamic process. Indeed, we make the hypothesis that experts reason according to implicit models, based on heuristic and theoretical knowledge, to reach an efficient diagnosis. This paper explains how to represent knowledge using four models so as to make them compatible with Reiter's theory: first, a perception model defining the process with a set of relations between variables; second, a structural model describing the components of the process and their connections, which captures relations between components and variables; third, a functional model providing relations between values of variables (i.e. the functions); fourth, a behavioral model describing states of the process and the discrete events arising from state transitions. According to the Stochastic Approach, a discrete event is defined as the assignment of a value to a variable. The variable concept is a common intrinsic characteristic of these models that links them. The proposed methodology is applied to the hydraulic dam of Cublize (France).

 
Title:  
ONTOLOGY FOR SOFTWARE CONFIGURATION MANAGEMENT - A Knowledge Management Framework for Software Configuration Management
Author(s):  
Nikiforos Ploskas, Michael Berger, Jiang Zhang, Lars Dittmann and Gert-Joachim Wintterle
Abstract:  
This paper describes a Knowledge Management Framework for Software Configuration Management, which will enable efficient engineering, deployment, and run-time management of reconfigurable ambient intelligent services. Software Configuration Management (SCM) procedures are commonly initiated by device agents located in the users' gateways. The Knowledge Management Framework makes use of Ontologies to represent the knowledge required to perform SCM and to perform knowledge inference based on Description Logic reasoning. The work has been carried out within the European project COMANCHE, which will utilize ontology models to support SCM. The COMANCHE ontology has been developed to provide a standard data model for the information that relates to SCM and to determine (infer) which SW Services need to be installed on the devices of users.

 
Title:  
ON THE DETECTION OF NOVEL ATTACKS USING BEHAVIORAL APPROACHES
Author(s):  
Benferhat Salem and Tabia Karim
Abstract:  
In recent years, behavioral approaches, representing normal/abnormal activities, have been widely used in intrusion detection. However, they are ineffective for detecting novel attacks involving new behaviors. This paper first analyzes and explains this recurring problem, which is due on the one hand to inadequate handling of anomalous and unusual audit events, and on the other to insufficient decision rules which do not meet behavioral approach objectives. We then propose to enhance the standard classification rules in order to fit behavioral approach requirements and detect novel attacks. Experimental studies carried out on real and simulated http traffic show that these enhanced decision rules allow most novel attacks to be detected without triggering higher false alarm rates.

 
Title:  
PRUNING SEARCH SPACE BY DOMINANCE RULES IN BEST FIRST SEARCH FOR THE JOB SHOP SCHEDULING PROBLEM
Author(s):  
María R. Sierra and Ramiro Varela
Abstract:  
Best-first graph search is a classic problem-solving paradigm capable of obtaining exact solutions to optimization problems. As it usually requires a large amount of memory to store the effective search space, in practice it is only suitable for small instances. In this paper, we propose a pruning method, based on dominance relations among states, for reducing the search space. We apply this method to an A* algorithm that explores the space of active schedules for the Job Shop Scheduling Problem with makespan minimization. The A* algorithm is guided by a consistent heuristic and is combined with a greedy algorithm to obtain upper bounds during the search process. We conducted an experimental study over a conventional benchmark. The results show that the proposed method is able to reduce both the space and the time required to search for optimal schedules, so that it can solve instances with 20 jobs and 5 machines or 9 jobs and 9 machines. Also, A* is exploited with heuristic weighting to obtain sub-optimal solutions for larger instances.
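The idea of pruning by dominance during best-first search can be illustrated on a much smaller scheduling problem than job shop. The sketch below is a hypothetical toy (two identical parallel machines, minimize makespan), not the authors' algorithm: two partial schedules that have assigned the same prefix of jobs are compared machine by machine, and the one with pointwise larger loads is discarded, while an admissible lower bound plays the role of the consistent heuristic.

```python
import heapq
import math

def min_makespan(jobs, machines=2):
    """A*-style best-first search over partial schedules with dominance
    pruning at each depth (depth = number of jobs already assigned)."""
    total = sum(jobs)

    def lb(i, loads):
        # admissible lower bound: current longest machine vs. a perfectly
        # balanced distribution of all the work
        return max(max(loads), math.ceil(total / machines))

    start = (0, (0,) * machines)
    heap = [(lb(*start), start)]
    seen = {}                       # depth -> load vectors already generated
    while heap:
        f, (i, loads) = heapq.heappop(heap)
        if i == len(jobs):
            return max(loads)       # first completed schedule popped is optimal
        for m in range(machines):
            new = list(loads)
            new[m] += jobs[i]
            child = (i + 1, tuple(sorted(new)))
            vecs = seen.setdefault(i + 1, [])
            if any(all(a <= b for a, b in zip(v, child[1])) for v in vecs):
                continue            # dominated by an earlier state: pruned
            vecs.append(child[1])
            heapq.heappush(heap, (lb(*child), child))

print(min_makespan([3, 2, 2, 1, 4]))  # → 6
```

The pruning is safe because a dominating state (same jobs assigned, every machine no more loaded) can always complete at least as well as the state it eliminates, so an optimal schedule is never lost.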

 
Title:  
A COLLABORATIVE FRAMEWORK TO SUPPORT SOFTWARE PROCESS IMPROVEMENT BASED ON THE REUSE OF PROCESS ASSETS
Author(s):  
Fuensanta Medina-Domínguez, Javier Saldaña-Ramos, Arturo Mora-Soto, Ana Sanz-Esteban and Maria-Isabel Sanchez-Segura
Abstract:  
In order to allow software organizations to reuse their know-how, the authors have defined the product pattern artefact. This know-how can be used in combination with software engineering best practices to improve the quality and productivity of software projects, as well as to reduce project costs. This paper describes the structure of the Process Assets Library (PAL) and the framework developed to encapsulate know-how in organizations. The PIBOK-PB (Process improvement based on knowledge - pattern based) tool uses the proposed PAL to access the knowledge encapsulated in the product patterns and to execute software projects more efficiently. This paper also describes PIBOK-PB's features and compares it with similar tools on the market.

 
Title:  
KNOWLEDGE REPRESENTATIONS OF CONSTRAINTS FOR PATIENT SPECIFIC IOL-DESTINATION
Author(s):  
K. P. Scherer
Abstract:  
A knowledge based system is a computer program that simulates the problem-solving process of a human expert in a given discipline. In the medical area there exist many expert system components for the diagnosis of different diseases. In ophthalmology, after a cataract surgical incision, the human lens is removed and an artificial intraocular lens (IOL) is implanted. Because of many preconditions (patient situation, operation technologies and IOL specifics), a knowledge based system has been developed to support the decision process of IOL destination under the related (sometimes contradictory) constraints. Computer aided IOL destination and decision support for a patient-related, individually optimized lens system can help to enhance the quality of life of the presbyopic patient. In comparison to classical software systems, the heuristic knowledge of surgeons (aggregated over many years of research and clinical practice) can be taken into account, user-specific communication with the software system is possible, and an explanation component is available for the decision-making process. Formalised knowledge representation methods provide useful representations of the situation.

 
Title:  
APPLICATION OF GENETIC PROGRAMMING IN SOFTWARE ENGINEERING EMPIRICAL DATA MODELLING
Author(s):  
Athanasios Tsakonas and Georgios Dounias
Abstract:  
Research in software engineering data analysis has only recently incorporated computational intelligence methodologies. Among these approaches, genetic programming retains a remarkable position, facilitating symbolic regression tasks. In this paper, we demonstrate the effectiveness of the genetic programming paradigm in two major software engineering tasks: effort estimation and defect prediction. For each task, we examine data domains from both the commercial and the scientific sector. The proposed model proves superior to previous works in the literature.

 
Title:  
TERM KNOWLEDGE ACQUISITION USING THE STRUCTURE OF HEADLINE SENTENCES FROM INFORMATION EQUIPMENTS OPERATING MANUALS
Author(s):  
Makoto Imamura, Yasuhiro Takayama, Masanori Akiyoshi and Norihisa Komoda
Abstract:  
This paper proposes a method for automatically extracting term knowledge, such as case relations and IS-A relations between words, from the headline sentences of operating manuals for information equipment. The proposed method acquires term knowledge by the following iterative processing: case relation extraction using correspondence relations between surface cases and deep cases; case and IS-A relation extraction using compound word structures; and IS-A relation extraction using correspondence between the case structures in the hierarchical headline sentences. The distinctive feature of our method is to extract new case relations and IS-A relations by comparing and matching the case relations extracted from super- and sub-headline sentences using the headline hierarchy. We have confirmed that the proposed method achieves 92.4% recall and 96.8% precision for extracting case relations, and 93.9% recall and 89.9% precision for extracting IS-A relations from an operating manual of a car navigation system.

 
Title:  
APPLYING THE KoFI METHODOLOGY TO IMPROVE KNOWLEDGE FLOWS IN A MANUFACTURING PROCESS
Author(s):  
Oscar M. Rodríguez-Elias, Alberto L. Morán, Jaqueline I. Lavandera, Aurora Vizcaíno and Juan Pablo Soto
Abstract:  
Integrating Knowledge Management (KM) in organizational processes has become an important concern in the KM community. However, the development of methods to accomplish this is still an open issue. KM should facilitate the flow of knowledge from where it is created or stored to where it needs to be applied. Therefore, an initial step towards the integration of KM in organizational processes should be the analysis of the way in which knowledge actually flows in these processes, taking into account the mechanisms that could be affecting (positively or negatively) such a flow, and then proposing alternatives to improve the knowledge flow in the analyzed processes. This paper presents the use of the Knowledge Flow Identification (KoFI) methodology as a means to improve the knowledge flow of a manufacturing process. Since KoFI was initially developed to analyze software processes, in this paper we illustrate how it can also be used in a manufacturing domain. The results of the application of KoFI are also presented, which include the design of a knowledge portal and an initial evaluation by its potential users.

 
Title:  
MINING ASSOCIATION - Correlations Among Demographic Health Indicators
Author(s):  
Subhagata Chattopadhyay, Pradeep Ray and Lesley Land
Abstract:  
Demographic health indicators such as crude birth rate, crude death rate, maternal mortality rate, infant mortality rate (IMR) and adult literacy rate, among many others, are usually considered to measure a country's health status. These health indicators are often seen in an isolated manner rather than as a group of associated events. The reason could be invisible associations and correlations that are difficult to extract from the available demographic data using conventional statistical techniques. This paper focuses on mining association-correlations among various demographic health indicators covering the child immunization program, skilled obstetric practice, and IMR, using both statistical and Quantitative Association Rule (QAR) mining techniques, and compares the two. Relevant archived data from 10 countries located in the Asia-Pacific region are used for this study. The paper concludes that association mining with QAR is more informative than statistical techniques. The reason may lie in its capability to generate association rules using a 2-D grid-based flexible approach. Moreover, by applying association rules, correlations among the attributes are also engineered. This could be a pioneering demographic study involving multiple indicators.
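To make the contrast concrete: once the indicators are discretized, an association rule reduces to support and confidence over binarized records. The snippet below is a hypothetical miniature with made-up country records; the indicator names and data are illustrative only, and the paper's 2-D grid-based QAR discretization is not reproduced here.

```python
# Each record: binarized demographic indicators for one country (made-up data)
records = [
    {"high_immunization", "skilled_obstetric", "low_IMR"},
    {"high_immunization", "skilled_obstetric", "low_IMR"},
    {"high_immunization", "low_IMR"},
    {"skilled_obstetric", "low_IMR"},
    {"high_immunization", "skilled_obstetric"},
    set(),
]

def support(itemset):
    """Fraction of records containing every item in the itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    """Conditional support: P(consequent | antecedent)."""
    return support(antecedent | consequent) / support(antecedent)

a, c = {"high_immunization", "skilled_obstetric"}, {"low_IMR"}
print(round(support(a | c), 3), round(confidence(a, c), 3))  # → 0.333 0.667
```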

 
Title:  
A PROTOTYPE TO RECOMMEND TRUSTWORTHY KNOWLEDGE IN COMMUNITIES OF PRACTICE
Author(s):  
Juan Pablo Soto, Aurora Vizcaíno, Javier Portillo-Rodríguez, Mario Piattini and Oscar M. Rodríguez-Elias
Abstract:  
Knowledge Management is a key factor in companies, which have therefore started using strategies and systems to take advantage of their intellectual capital. However, employees frequently do not take advantage of the means to manage knowledge that companies offer them. For instance, employees often complain that knowledge management systems overload them with more work, since they have to introduce information into these systems, or that these tools flood them with too much knowledge, which is not always relevant to them. In order to avoid these problems we have implemented a tool to recommend trustworthy knowledge sources in communities of practice. This tool is based on a multi-agent architecture in which agents attempt to help users find the information that is most relevant to them. To do this, the agents use a trust model to evaluate how trustworthy a knowledge source (which may even be another agent) is.

 
Title:  
HOW TO SUPPORT SCENARIOS-BASED INSTRUCTIONAL DESIGN - A Domain-Specific-Modeling Approach
Author(s):  
Pierre Laforcade, Boubekeur Zendagui and Vincent Barré
Abstract:  
Over recent years, Model-Driven Engineering has attracted growing interest, both as a research domain and as an industrial process that can be applied to various educational domains. This article aims to discuss and propose such an application for learning-scenario-centered instructional design processes. Our proposition is based on a 3-domain categorization of learning scenarios. We also discuss and explain why we think Domain-Specific Modeling techniques are the coming trend for supporting the emergence of communities of practice for scenario-based instructional design. The originality resides in the support we propose to help communities of practitioners build specific Visual Instructional Design Languages with dedicated editors, instead of providing them with yet another language or editor.

 
Title:  
INVERSE SIMULATION FOR RECOMMENDATION OF BUSINESS SCENARIO WITH QUALITATIVE AND QUANTITATIVE HYBRID MODEL
Author(s):  
Keisuke Negoro, Takeshi Nakazaki, Susumu Takeuchi and Masanori Akiyoshi
Abstract:  
In order to decide on an effective management plan, managers often draw up and evaluate business scenarios. As a way of evaluating them, a simulation method on a qualitative and quantitative hybrid model, represented as a causal graph, has been proposed. There is a strong need to find optimal input values for the target outputs in the simulation, but exhaustive search cannot realistically be applied given the processing time. Therefore, we propose a quick search method for optimal input values in qualitative and quantitative hybrid simulation. Our approach is to obtain optimal values of input nodes by inverse propagation of effects from the values of target output nodes on the simulation model. However, this generates a contradiction: the value of a shared node in the causal graph decided from one destination node may differ from the value decided from the other destination nodes. We therefore re-execute the inverse propagation repeatedly from the nearest qualitative node connected to a quantitative node to resolve the contradiction. Experimental results show that the proposed method reduces the time required to reach a solution while maintaining a certain level of accuracy.

 
Title:  
TRAINING BELIEVABLE AGENTS IN 3D ELECTRONIC BUSINESS ENVIRONMENTS USING RECURSIVE-ARC GRAPHS
Author(s):  
Anton Bogdanovych, Simeon Simoff and Marc Esteva
Abstract:  
Using 3D Virtual Worlds for commercial activities on the Web and the development of human-like sales assistants operating in such environments are ongoing trends in E-Commerce. The majority of the existing approaches oriented towards the development of such assistants are agent-based and are focused on explicit programming of the agents' decision-making apparatus. While effective in some specific situations, this method is often platform and application dependent and is not generic enough to be used for scenarios that are significantly different from the original application. Furthermore, the agents created following this approach are often unable to adapt to changes in the environment and learn new behaviors. In this paper we propose an implicit training method that can address the aforementioned drawbacks. In this method we formalize the virtual environment using the Electronic Institutions technique and make the agent use these formalizations in observing a human principal and learning believable behaviors from the human. The training of the agent can be conducted implicitly using specific data structures called recursive-arc graphs.

 
Title:  
RELATIONSHIP BETWEEN FRACTAL DIMENSION AND SENSITIVITY INDEX OF PRODUCT PACKAGING
Author(s):  
Mayumi Oyama-Higa and Tiejun Miao
Abstract:  
Until now, the evaluation of product packaging has been performed subjectively since no other way existed. Previous research has also shown that people tend to prefer images with high fractal dimension. If so, then the fractal dimension of product package images should enable a determination of how preferable product packages would be, or function as an index to estimate whether product packages would attract attention. In this study, we calculated the fractal dimension for packages of 45 types of canned beer. We performed a comparative analysis using the standard deviation method to determine the degree to which the product packages influenced the potential customer’s impression of the product. The results showed that the fractal dimension is highly important to an objective evaluation.
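For reference, the box-counting estimate of fractal dimension, one common way to compute the kind of index used in the study, can be sketched as follows; the implementation and the toy image are illustrative, not taken from the paper.

```python
import math

def box_count(points, box):
    """Number of boxes of side `box` containing at least one foreground pixel."""
    return len({(x // box, y // box) for x, y in points})

def box_counting_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Fractal dimension estimated as the least-squares slope of
    log N(box) against log(1/box)."""
    xs = [math.log(1 / b) for b in sizes]
    ys = [math.log(box_count(points, b)) for b in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# a completely filled 64x64 patch behaves like a 2-dimensional object
square = [(x, y) for x in range(64) for y in range(64)]
print(round(box_counting_dimension(square), 2))  # → 2.0
```

A textured package image, binarized into foreground points, would yield a non-integer value between 1 and 2; higher values indicate finer detail.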

 
Title:  
ON THE USE OF SYNTACTIC POSSIBILISTIC FUSION FOR COMPUTING POSSIBILISTIC QUALITATIVE DECISION
Author(s):  
Salem Benferhat, Faiza Haned-Khellaf, Aicha Mokhtari and Ismahane Zeddigha
Abstract:  
This paper describes the use of syntactic data fusion for computing possibilistic qualitative decisions. More precisely, qualitative possibilistic decisions can be viewed as a data fusion problem over two particular possibility distributions (or possibilistic knowledge bases): the first representing the beliefs of an agent and the second representing the qualitative utility.
The proposed algorithm computes pessimistic optimal decisions based on data fusion techniques. We show that the computation of optimal decisions is equivalent to computing an inconsistency degree of possibilistic bases representing the fusion of the agent's beliefs and the agent's preferences. This allows us to give an alternative to the solution proposed by Dubois, Le Berre, Prade and Sabbadin without using an ATMS (Assumption-based Truth Maintenance System).

 
Title:  
A COMPARISON STUDY OF TWO KNOWLEDGE ACQUISITION TECHNIQUES APPLIED TO THYROID MEDICAL DIAGNOSIS DOMAIN
Author(s):  
Abdulhamed Mohamed Abdulkafi and Aiman Subhi Gannous
Abstract:  
This study compares the performance of two well-known methods used in knowledge acquisition and machine learning: the C4.5 algorithm (Quinlan 1986) for building decision trees, and the back-propagation algorithm for training a multi-layer feed-forward neural network. The comparison is based on the task of classifying a thyroid diagnosis dataset. We apply both methods to the same data set and then study and discuss the results obtained from the experiments.

 
Title:  
INTEGRATION OF ARCHITECTURAL DESIGN AND IMPLEMENTATION DECISIONS INTO THE MDA FRAMEWORK
Author(s):  
Alti Adel and Smeda Adel
Abstract:  
Model Driven Development (MDD) is typically based on models, which largely drive the quality of an application's architecture. Architectural decisions, however, are often implicitly embedded in software engineering and therefore lack first-class consideration. Architecture has been established as a key to developing software systems that meet the quality expectations of their stakeholders. The explicit definition of architectural decisions aims to better control quality in the software development process. In this paper, we propose to extend the MDA framework by integrating decision aspects. We also propose an approach that uses architectural decisions as a meta-model for the MDD process. Integrating architectural decisions allows architectural design to be defined explicitly and guides architects in creating systems with desirable qualities; for MDA, it extends the approach by integrating true decisional concerns into the MDD process.

 
Title:  
AN ONTOLOGY IN ORTHOPEDIC MEDICAL FIELD
Author(s):  
Harold Vasquez, Ana Aguilera and Leonid Tineo
Abstract:  
At present, Ontology is a powerful Knowledge Representation tool for Web information retrieval and mining. In particular, many medical applications in the fields of diagnosis and tele-health would take advantage of information sharing and publication on the Web through the use of Ontology. We are especially interested in the field of orthopedics, in particular pathologies of human gait. Several laboratories in the world work on this topic, but they are not exploiting the potential of the Web in their work; they are almost isolated in their information management. We have already performed mining on a database from the Venezuelan Hospital Ortopédico Infantil; nevertheless, it is necessary to build an ontology in order to query and mine information from different laboratories around the world. In the near future we would like to apply fuzzy logic techniques and fuzzy querying over such information. In this paper we present the building of an orthopedic medical ontology; to our knowledge, none exists in this specific field of interest.

 
Title:  
KNOWLEDGE SHARING IN TRADITIONAL AND AGILE SOFTWARE PROCESSES
Author(s):  
Broderick Crawford, Claudio León de la Barra and José Miguel Rubio León
Abstract:  
The software development community has a wide spectrum of methodologies to choose from when it decides to implement a software project. If the extremes of this spectrum are viewed as a dichotomy, one side is represented by the more traditional, deterministic software development derived from Tayloristic management practices, while on the other side are the Agile software development approaches of the Internet age: unpredictable, nonlinear, fast, and with unstable requirements. Agile processes are focused on code rather than on documentation and, unlike traditional processes, they are adaptable rather than rigid. Software development is a knowledge-intensive activity, and knowledge creation and sharing are crucial parts of software development processes. This paper presents a comparative analysis of the knowledge sharing approaches of Agile and Tayloristic software development teams.

 
Title:  
CONSTRAINT PROGRAMMING CAN HELP ANTS SOLVING HIGHLY CONSTRAINED COMBINATORIAL PROBLEMS
Author(s):  
Broderick Crawford, Carlos Castro and Eric Monfroy
Abstract:  
In this paper, we focus on the resolution of the Crew Pairing Optimization problem. This problem is highly visible and economically significant; its objective is to find the best schedule, i.e., a collection of crew rotations such that each airline flight is covered by exactly one rotation and costs are minimized. We solve it with Ant Colony Optimization algorithms and hybridizations of Ant Colony Optimization with Constraint Programming techniques. We recognize the difficulties pure ant algorithms have in solving strongly constrained problems, and therefore explore the addition of Constraint Programming mechanisms to the construction phase of the ants so that they can complete their solutions. Computational results on test instances of Airline Flight Crew Scheduling taken from the NorthWest Airlines database are presented, showing the advantages of using this kind of hybridization.
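The construction-phase hybridization described above can be illustrated with a minimal sketch. Everything below (the rotation set, flight names, costs and parameter values) is invented for illustration and is not taken from the paper: each ant chooses rotations probabilistically from pheromone and cost information, but a constraint-programming-style filtering step first removes every rotation that would re-cover an already covered flight, so that partial schedules stay consistent with the set-partitioning constraints.

```python
import random

# Invented toy crew-pairing instance: each rotation covers a set of
# flights and has a cost; every flight must be covered exactly once.
ROTATIONS = {
    "r1": ({"f1", "f2"}, 3.0),
    "r2": ({"f3"}, 1.0),
    "r3": ({"f2", "f3"}, 2.5),
    "r4": ({"f1"}, 2.0),
    "r5": ({"f4"}, 1.5),
}
FLIGHTS = {"f1", "f2", "f3", "f4"}

def feasible_candidates(uncovered, chosen):
    """CP-style filtering: keep only rotations whose flights are all
    still uncovered (set-partitioning consistency)."""
    return [r for r, (flights, _) in ROTATIONS.items()
            if r not in chosen and flights <= uncovered]

def construct(pheromone, rng, beta=2.0):
    """One ant builds a schedule; the candidate list is pruned by the
    filtering step before each probabilistic choice."""
    uncovered, chosen = set(FLIGHTS), []
    while uncovered:
        cands = feasible_candidates(uncovered, chosen)
        if not cands:          # dead end: partial schedule cannot be completed
            return None
        weights = [pheromone[r] * (1.0 / ROTATIONS[r][1]) ** beta for r in cands]
        chosen.append(rng.choices(cands, weights=weights)[0])
        uncovered -= ROTATIONS[chosen[-1]][0]
    return chosen

def aco(iterations=50, ants=5, rho=0.1, seed=1):
    rng = random.Random(seed)
    pher = {r: 1.0 for r in ROTATIONS}
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        for _ in range(ants):
            sol = construct(pher, rng)
            if sol is None:
                continue
            cost = sum(ROTATIONS[r][1] for r in sol)
            if cost < best_cost:
                best, best_cost = sol, cost
        for r in pher:          # evaporation plus reinforcement of the best schedule
            pher[r] = (1 - rho) * pher[r] + (rho if best and r in best else 0.0)
    return best, best_cost
```

Ants that reach a state where no feasible rotation remains return no solution; this is exactly the difficulty of pure ant algorithms on strongly constrained problems that the filtering step mitigates.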

 
Title:  
ARABIC TEXT CATEGORIZATION SYSTEM - Using Ant Colony Optimization-based Feature Selection
Author(s):  
Abdelwadood Moh’d A. Mesleh and Ghassan Kanaan
Abstract:  
Feature subset selection (FSS) is an important step for effective text classification (TC) systems. This paper describes a novel FSS method based on Ant Colony Optimization (ACO) and the CHI (chi-square) statistic. The proposed method adopts the chi-square statistic as heuristic information and the effectiveness of an SVM classifier as guidance to better select features for each category. Compared to six classical FSS methods, the proposed ACO-based algorithm achieved similar or better TC effectiveness. The evaluation used an in-house Arabic TC corpus consisting of 1,445 documents in nine categories. The experimental results are presented in terms of macro-averaged precision, macro-averaged recall and the macro-averaged F1 measure.
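As a rough illustration of the heuristic component only, a minimal sketch of the chi-square term-category statistic follows. The contingency counts and the tiny ranking demo are invented; the paper's actual corpus, SVM guidance and ACO loop are not reproduced here.

```python
def chi_square(A, B, C, D):
    """CHI (chi-square) statistic for a term t and a category c, from the
    2x2 contingency table of document counts:
      A: docs in c containing t      B: docs outside c containing t
      C: docs in c without t         D: docs outside c without t"""
    N = A + B + C + D
    den = (A + C) * (B + D) * (A + B) * (C + D)
    return N * (A * D - C * B) ** 2 / den if den else 0.0

def rank_terms(table, category):
    """Rank candidate terms for one category by CHI score; an ACO-based
    FSS method can use such scores as heuristic information when an ant
    picks its next feature."""
    scores = {t: chi_square(*tbl[category]) for t, tbl in table.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Invented toy counts: term -> {category: (A, B, C, D)} over 100 documents.
counts = {
    "economy": {"sports": (1, 9, 19, 71), "finance": (9, 1, 1, 89)},
    "goal":    {"sports": (18, 2, 2, 78), "finance": (2, 8, 8, 82)},
}
```

With these invented counts, `rank_terms(counts, "finance")` ranks "economy" above "goal", matching the intuition that chi-square rewards terms concentrated in one category.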

 
Title:  
CONTROLLED EXPERIMENT ON SEARCH ENGINE KNOWLEDGE EXTRACTION CAPABILITIES
Author(s):  
Pasquale Ardimento, Danilo Caivano, Teresa Baldassarre, Marta Cimitile and Giuseppe Visaggio
Abstract:  
Continuous pressure leads enterprises to a constant need for innovation. This involves exchanging knowledge and innovation results between research groups and enterprises in accordance with the Open Innovation paradigm. The technologies that seem most attractive for exchanging knowledge are the Internet and its search engines. The literature provides many discordant opinions on their efficacy and, to the best of our knowledge, no empirical evidence on the topic. This work starts from the definition of a Knowledge Acquisition Process and presents a rigorous empirical investigation that evaluates the efficacy of these technologies for the exploratory search of knowledge, and of relevant knowledge, according to specific requirements. The investigation points out that these technologies are not effective for exploratory search. The paper concludes with a brief analysis of other technologies to develop and analyze in order to overcome the weaknesses within the Knowledge Acquisition Process that this investigation has pointed out.

 
Title:  
HUMAN RANDOM GENERATION AND ITS APPLICATIONS
Author(s):  
Mieko Tanaka-Yamawaki
Abstract:  
Human Random Generation (HURG) is a psychological test meant to detect the degree of mental fatigue, or the level of concentration, of an individual subject by testing the flexibility of thinking, without relying on any equipment. In its early days, HURG was practiced in clinical psychology in order to detect advanced stages of schizophrenia. Later, the development of powerful computers made it possible to detect subtle irregularities hidden in HURG data taken from normal subjects. Over several years we have been studying the possibility of utilizing HURG for self-detection of dementia at an early stage, using various information-theoretic techniques, including pattern classification by means of hidden Markov models (HMM), the correlation dimension frequently used to identify chaotic time series, and the selection of indices suitable for characterizing short sequences. In this paper, we report our recent progress in developing a novel HURG method using pattern recognition and the randomness measured in data taken from the inverse ten-key layout of mobile phone keyboards (MPK).

 
     
EHST - Workshop on e-Health Services and Technologies  
Title:  
A MENTAL HEALTH SELF-CHECK SYSTEM USING NONLINEAR ANALYSIS OF PULSE WAVES
Author(s):  
Mayumi Oyama-Higa, Kazuo Satoh, Kazuyoshi Tanaka and Takahiro Miyagi
Abstract:  
Previously, we demonstrated that simple, low-cost measurement of an individual’s mental health is possible using nonlinear analysis of pulse waves. Here we introduce a trial system that records and assesses the relationship between mental health and lifestyle habits. Our goal was to develop a system that allows individuals to decide which steps to take for recovery when they develop worrying mental health symptoms, by making comparisons to their past lifestyle habits. This system also allows records of multiple individuals to be entered into a database and analyzed. Such analysis should allow for the creation of indicators for general levels of mental health that may require intervention, as well as the creation of more concrete, practical advice to aid recovery when worrying symptoms appear.

 
Title:  
VERIFICATION OF EFFECT OF MUSIC AND ANIMAL THERAPY ON PSYCHIATRIC CARE BY USING A NONLINEAR ANALYSIS OF PULSE WAVES
Author(s):  
Junji Kojima and Mayumi Oyama-Higa
Abstract:  
This study examines the psychiatric effectiveness of music therapy and animal therapy. Unlike previous research into these modalities, the present study relies on scientifically valid measurements of actual somatic reaction rather than on subjective reports. Earlier work by the current authors defined fluctuation in plethysmogram readings in terms of a Lyapunov exponent derived from activity in the sympathetic nervous system related to the preservation of mental health. Drawing on those previous findings, this study measured changes in the Lyapunov exponent as a function of therapy. Results demonstrated that the Lyapunov exponent reflected the therapeutic effect of these treatments. Specifically, an increase in the Lyapunov exponent indicated nerve activation within the sympathetic nervous system. On this basis, the authors recommend that the traditional formulation regarding a “healing effect” (i.e., therapeutic benefit) be reconsidered.
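For readers unfamiliar with the index used here: the largest Lyapunov exponent measures how fast nearby trajectories diverge, and a positive value indicates chaotic dynamics. The sketch below computes it for the logistic map, a textbook system with a known exact value; it is only a toy illustration of the quantity itself, not the authors' procedure for estimating the exponent from measured plethysmogram data, and the parameter values are arbitrary.

```python
import math

def logistic_lyapunov(r=4.0, x0=0.3, n=50000, burn=100):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the time average (1/n) * sum(log|f'(x_i)|).
    A positive exponent signals chaotic dynamics; for r = 4 the exact
    value is log(2), roughly 0.693."""
    x = x0
    for _ in range(burn):               # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)|
        x = r * x * (1.0 - x)
    return acc / n
```

For measured time series such as pulse waves, the exponent is instead estimated from delay-embedded data (e.g., by tracking the divergence of nearest-neighbour trajectories), since no analytic derivative of the dynamics is available.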

 
Title:  
PLAYMANCER: A EUROPEAN SERIOUS GAMING 3D ENVIRONMENT
Author(s):  
Elias Kalapanidas, Hikari Watanabe, Costas Davarakis, Hannes Kaufmann, Fernando Fernandez Aranda, Tony Lam, Todor Ganchev and Dimitri Konstantas
Abstract:  
Serious games are about to enter the medical sector, giving people with behavioural or addictive disorders the ability to use them as part of health promotion and disease prevention. The PlayMancer framework will support physical rehabilitation and psycho-education programs through a modular, multiplayer, networked 3D game based on the Universally Accessible Games (UAG) guidelines. This project is part of the ICT FP7 programme of the European Union (FP7-215839).

 
Title:  
WIRELESS SENSOR NETWORKS WITH QoS FOR e-HEALTH AND e-EMERGENCY APPLICATIONS
Author(s):  
Óscar Gama, Paulo Carvalho, J. A. Afonso and P. M. Mendes
Abstract:  
Most body sensor networks (BSN) only offer best-effort service delivery, which may compromise the successful operation of emergency healthcare (e-emergency) applications. Due to their real-time nature, e-emergency systems must provide quality of service (QoS) support in order to offer pervasive, valuable and fully reliable assistance to patients presenting risk abnormalities. But what is the real meaning of QoS support in the e-emergency context? What benefits can QoS mechanisms bring to e-emergency systems, and how are they being deployed? In order to answer these questions, this paper first discusses the need for QoS in personal wireless healthcare systems, and then presents an overview of such systems with QoS. A case study requiring QoS support, intended to be deployed in a healthcare unit, is presented, along with a new asynchronous TDMA-based medium access model.

 
Title:  
NUMERICAL COMPUTATION FOR MRE (MAGNETIC RESONANCE ELASTICITY) BY APPLYING A NUMERICAL DIFFERENTIATION METHOD
Author(s):  
Kazuaki Nakane
Abstract:  
"Numerical differentiation" is a numerical method for determining the derivatives of an unknown function from given noisy values of that function at scattered points. It provides a method for identifying the transmission boundary of tissue for elastographic measurement. To carry out the numerical computation in the case where the Tikhonov regularization parameter is very small, we introduce the multiple-precision arithmetic library "Exflib". Exflib is simple software for scientific multiple-precision arithmetic in C++ and Fortran 90/95. In this talk, we present numerical results obtained with a numerical differentiation method using Exflib.
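The general idea can be made concrete with a sketch in ordinary double precision; it deliberately does not reproduce the talk's Exflib-based multiple-precision computation, and the test function, grid and parameter values below are invented. The derivative samples u are recovered from noisy data by minimizing ||Au - b||^2 + alpha*||Lu||^2, where A integrates u back to the data and L penalizes rough derivatives (the Tikhonov term).

```python
import math

def solve(M, v):
    """Dense Gaussian elimination with partial pivoting (in place)."""
    n = len(v)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        v[k], v[p] = v[p], v[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
            v[i] -= f * v[k]
    u = [0.0] * n
    for i in reversed(range(n)):
        u[i] = (v[i] - sum(M[i][j] * u[j] for j in range(i + 1, n))) / M[i][i]
    return u

def regularized_derivative(y, h, alpha):
    """Estimate u[i] ~ f'(midpoint of [x_i, x_{i+1}]) from noisy samples
    y[i] = f(x_i) + noise on a uniform grid of spacing h, by minimizing
        ||A u - b||^2 + alpha * ||L u||^2,
    where (A u)_i = h*(u_0 + ... + u_i) approximates b_i = y_{i+1} - y_0
    and L takes first differences of u (the Tikhonov smoothness term)."""
    n = len(y) - 1
    b = [y[i + 1] - y[0] for i in range(n)]
    # Normal matrix M = A^T A + alpha * L^T L; A is lower triangular of h,
    # so (A^T A)[i][j] = h^2 * (n - max(i, j)); L^T L is tridiagonal.
    M = [[h * h * (n - max(i, j)) for j in range(n)] for i in range(n)]
    for i in range(n):
        if i > 0:
            M[i][i] += alpha
            M[i][i - 1] -= alpha
        if i < n - 1:
            M[i][i] += alpha
            M[i][i + 1] -= alpha
    r = [h * sum(b[i] for i in range(j, n)) for j in range(n)]  # A^T b
    return solve(M, r)
```

For very small alpha the normal matrix becomes ill-conditioned, which is exactly the regime where multiple-precision arithmetic such as Exflib becomes necessary; the double-precision sketch above only behaves well for moderate alpha.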

 
ACT4SOC - Workshop on Architectures, Concepts and Technologies for Service Oriented Computing  
Title:  
A SYSTEMIC, ONTOLOGY-DRIVEN APPROACH TO e-SERVICES GOVERNANCE
Author(s):  
Bill Karakostas and Yannis Zorgios
Abstract:  
This paper proposes an ontology-driven, systemic approach to e-services governance. E-service governance refers to frameworks and policies for controlling the development and provision of e-services within an organisation. Given the level of complexity of current e-services, it is necessary to think of them as systems of interconnected elements (people, processes, and IT), i.e., as entities that are more complex than the sum of their parts. Our IDEF0-based, system-theory-inspired modelling approach captures the essence of governing systems of services contained (recursively) within higher-order systems. We use ontologies to explicitly represent systemic properties of services such as context, control/constraints and feedback. Governance rules constrain the syntactic, semantic, and behavioural properties of service ontologies and, due to the hierarchical ordering of service systems, can be applied to e-service portfolio management, architectural design compliance, and runtime SLA enforcement. Ontology mapping capabilities allow governance rules to be described using concepts appropriate for the different levels of service. Additionally, the paper presents an algorithm for the run-time governance of e-services.

 
Title:  
EFFICIENT GRID SERVICE DESIGN TO INTEGRATE PARALLEL APPLICATIONS
Author(s):  
Al. Archip, M. Craus and S. Arustei
Abstract:  
Although grid systems and grid computing have greatly evolved during the past few years, parallel application support remains somewhat limited. A new method for integrating parallel applications as grid services is presented. This method assumes that underlying parallel applications are resources for grid services; also, it implies that service resources may be clients for some predefined helper grid services. The design of the grid service is based on a Factory Service / Instance Service architecture, aiming to offer support for managing multiple resources. The tests were performed on the GRAI Grid (Academic Grid for Complex Applications), using Globus Toolkit 4 – versions 4.0.3 and 4.0.5 – as the base middleware.

 
Title:  
TOWARDS A GENERIC FRAMEWORK FOR DEPLOYING APPLICATIONS AS GRID SERVICES
Author(s):  
Simona Arustei, Mitica Craus and Alexandru Archip
Abstract:  
Exploiting the power of the Grid very often involves transforming existing or new applications into Grid services. In this paper we present a generic framework based on a service oriented architecture developed in order to simplify the task of deploying applications as Grid services. Our work consists of a configurable grid service that provides application developers with a high level programming model, hiding the complexity of dealing with web services and Grid technologies. The architectural design of the framework allows custom functionality to be plugged into an adaptive grid service in a simple manner, thus attracting more non-expert users to the Grid. A prototype implementation of the framework has been built and a case study has been developed to illustrate the concept.

 
Title:  
SIMULATION AND EXECUTION OF SERVICE MODELS USING ISDL
Author(s):  
Dick Quartel
Abstract:  
This paper presents a technique and tool to simulate and execute service models specified in the Interaction System Design Language (ISDL). This language allows one to model the interacting behaviour of a service, at successive abstraction levels, and from the perspective of the different roles a system can play in the service. A distinction is made between basic and composite modelling concepts. Simulation is performed on the basic concepts of ISDL. In this way, any composite concept that is defined as a composition of the basic concepts can be simulated. Composite concepts can be added as shorthands to ISDL. An example is the operation concept. In addition, ISDL allows model elements to be stereotyped, such that they can be handled differently by the simulator. The paper shows how web-service operations can be modelled in this way, and be executed as part of the simulation of a web-service composition.

 
Title:  
CONTEXT HANDLING IN A SOA INFRASTRUCTURE FOR MOBILE APPLICATIONS
Author(s):  
Laura Daniele, Luís Ferreira Pires and Marten van Sinderen
Abstract:  
Context-aware mobile applications can dynamically adapt their behaviour to changes in the user context and provide their users with relevant services anywhere and at anytime. In order to develop such applications, a flexible infrastructure is necessary that supports several tasks, such as context information gathering and services provisioning according to this information. This paper presents a SOA-based infrastructure that divides these tasks among several components. In order to reduce the responsibility of some components and decrease the load of the network, it is in principle possible to introduce additional components within this infrastructure that are dedicated to specific parts of the application logic. However, this entails a further effort for the developer to integrate these additional components in the application infrastructure. In this paper, we present a generic component, the context expression evaluator, which has been defined to facilitate the handling of context conditions, and we illustrate how this component has been integrated with other components of our infrastructure by using context models.

 
Title:  
A GOAL-BASED FRAMEWORK FOR DYNAMIC SERVICE DISCOVERY AND COMPOSITION
Author(s):  
Luiz Olavo Bonino da Silva Santos, Luís Ferreira Pires and Marten van Sinderen
Abstract:  
Service-oriented computing allows new applications to be developed by using and/or combining services offered by different organizations. Service composition can be applied when a client request cannot be satisfied by any individual service. In this case the creation of a composite service from a number of available services is pursued. This composite service should comply with the client’s request in terms of functionality and expected results. In this paper, we present a goal-based framework for dynamic service discovery and composition. Our framework consists of a set of design principles and guidelines for service platforms to realize dynamic service discovery and composition.

 
Title:  
DEFINING AND PROTOTYPING A LIFE-CYCLE FOR DYNAMIC SERVICE COMPOSITION
Author(s):  
Eduardo Silva, Jorge Martínez López, Luís Ferreira Pires and Marten van Sinderen
Abstract:  
Since the Internet has become a commodity in both wired and wireless environments, new applications and paradigms have emerged to explore this highly distributed and widespread system. One such paradigm is service-orientation, which enables the provision of software functionality as services, allowing in this way the construction of distributed systems with loosely coupled parts. The Service-Oriented Architecture (SOA) provides a set of principles to create service-oriented systems, by defining how services can be created, composed, published, discovered and invoked. In accordance with these principles, in this paper we address the challenge of performing dynamic service composition. The composition process and its associated tasks have to be precisely defined so that the different problems of dynamic service composition can be identified and tackled. To achieve this, this paper defines a life-cycle for dynamic service composition, which defines the required phases and stakeholders. Furthermore, we present our prototype in which the different phases of the dynamic service composition life-cycle are being implemented. This prototype is being used to experiment with and validate our initial ideas on dynamic service composition.

 
Special Session on Metamodelling - Utilization in Software Engineering (MUSE 2008)  
Title:  
SUPPORTING SOFTWARE PROCESS MEASUREMENT BY USING METAMODELS - A DSL and a Framework
Author(s):  
Beatriz Mora, Felix Garcia, Francisco Ruiz and Mario Piattini
Abstract:  
At present, the objective of obtaining quality software products has led to the necessity of carrying out good software process management, in which measurement is a fundamental factor. Due to the great diversity of entities involved in software measurement, a consistent framework is necessary to integrate the different entities into the measurement process. In this work a Software Measurement Framework (SMF) is presented for measuring any type of software entity. In this framework, any software entity in any domain can be measured with a common Software Measurement metamodel and QVT transformations. In addition, we present a Software Measurement Modelling Language (SMML) for defining the measurement models which take part in the measurement process. Furthermore, an example which illustrates the framework’s application to a concrete domain is shown.

 
Title:  
MATHS VS (META)MODELLING - ARE WE REINVENTING THE WHEEL?
Author(s):  
Klaus McDonald-Maier, David Akehurst, B. Bordbar and Gareth Howells
Abstract:  
In the past, the specification of languages and data structures has traditionally been achieved formally using mathematical notations. This is very precise and unambiguous; however, it does not map easily to modern programming languages, and many engineers are put off by mathematical notation. Recent developments in the graphical specification of structures, drawing from object-oriented programming languages, have led to the development of Class Diagrams as a widely used means to define data structures. We show in this paper that there are strong parallels between the two techniques, but also some surprising differences!

 
Title:  
JOINING SOFTWARE TECHNOLOGIES - A Model Driven Approach for Interactive Groupware Application Development
Author(s):  
William Joseph Giraldo, Ana Isabel Molina, Manuel Ortega Cantero and Cesar Alberto Collazos
Abstract:  
This paper proposes a methodological approach for the model-based user interface development of collaborative applications, introducing a notation integration proposal. This proposal supports the interface design of groupware applications, enabling integration with software processes through UML notation. We use our methodological approach, called CIAM, to deal with the conceptual design of applications for supporting work groups. In summary, we describe the integration process of two notations: CIAN, which covers collaboration and human-computer interaction aspects, and UML, which specifies groupware system functionality. This integration process is carried out using a software tool called CIAT.

 
Title:  
INCORPORATING SEMANTIC ALGEBRA IN THE MDA FRAMEWORK
Author(s):  
Paulo E. S. Barbosa, Franklin Ramalho, Jorge C. A. de Figueiredo and Antonio D. dos S. Junior
Abstract:  
Denotational semantics is commonly used to precisely define the meaning of a programming language. This meaning is given by functions that map syntactic elements to mathematically well-defined sets called semantic algebras. Models in semantic algebra need to be processed through reductions towards a normal form in order to allow the verification of semantic properties. MDA is a current trend that shifts the focus and effort from implementation to models, metamodels and transformations during the development process. In order to bring denotational semantics into the MDA vision, we turn semantic algebra into a useful domain-specific language. In this context, this paper describes our proposed MOF metamodel and ATL reductions between the generated models. The metamodel serves as an abstract syntax for semantic algebra and is useful for static semantics verification. The reductions enable processing towards a normal form to compare semantics. This process can be guided by a rewrite system.

 
Special Session on Global Software Development: Challenges and Advances  
Title:  
MERLIN COLLABORATION HANDBOOK - Challenges and Solutions in Global Collaborative Product Development
Author(s):  
Päivi Parviainen, Juho Eskeli, Tanja Kynkäänniemi and Maarit Tihinen
Abstract:  
Global, collaborative and distributed development is increasingly common in software development. However, traditional product and software development technologies do not support this way of working well; for example, time and cultural differences impose new requirements on these technologies. In this paper, we introduce a public web-based handbook collecting the challenges encountered by companies in global collaborative development, along with a large number of solutions that help in tackling these challenges. The handbook was implemented using an ontology editor and generated HTML pages. In the final phase of development the handbook was validated by several external testers; the main feedback was that the handbook was found useful, but more practical solutions would be welcome. The handbook was also updated based on this feedback.


 
Title:  
COMPETENCIES DESIRABLE FOR A REQUIREMENTS ELICITATION SPECIALIST IN A GLOBAL SOFTWARE DEVELOPMENT
Author(s):  
Miguel Romero, Aurora Vizcaíno and Mario Piattini
Abstract:  
Global software development poses several challenges in software engineering, particularly in the elicitation stage, owing to the communication and coordination problems caused when teams are geographically distributed. Successful requirements elicitation in a global development environment requires professionals who are capable of confronting the challenges that arise in these environments, such as cultural differences and distributed communication and coordination. In order to develop in software engineers the skills suitable for facing these challenges, it is first necessary to discover which competencies or skills they should have or develop. In this work we describe an analysis carried out with this goal and propose a list of competencies desirable for a requirements elicitation specialist, obtained from a review of the related literature. We also comment on certain useful strategies for teaching these competencies and propose the use of a simulation environment for their development.

 
Title:  
EVALUATING FACTORS THAT CHALLENGE GLOBAL SOFTWARE DEVELOPMENT
Author(s):  
Gabriela N. Aranda, Aurora Vizcaíno, Alejandra Cechich and Mario Piattini
Abstract:  
Some aspects of any Global Software Development (GSD) project strongly impact requirements elicitation activities because of the importance of communication in reaching a common understanding of the system under construction. For example, cultural diversity and the impossibility of holding face-to-face meetings dominate the scenario in which communication must take place. In this paper, we analyze aspects that might be a source of communication problems and suggest strategies to reduce misunderstandings among stakeholders, aiming to help achieve more committed requirements.

 
Special Session on Applications in Banking and Finances  
Title:  
DATA QUALITY IN FINANCES AND ITS IMPACT ON CREDIT RISK MANAGEMENT AND CRM INTEGRATION
Author(s):  
Berislav Nadinic and Damir Kalpic
Abstract:  
The Basel II Capital Accord and increased competition in the financial market are responsible for the creation of repositories of aggregated, customer-centric historical data used for internal credit risk model development and CRM initiatives in banks. This paper discusses the effect of data quality on the development of internal rating models and customer churn models, as well as potential improvements in this regard. A comprehensive framework for data quality improvement and monitoring in financial institutions is proposed, taking into account the Basel II requirements for data quality as well as the requirements of customer-centric retention campaigns.

 
Title:  
"SELLING" REORGANIZATION TO SME’S FINANCIAL MANAGEMENT THROUGH ERP IMPLEMENTATION
Author(s):  
Krešimir Fertalj and Igor Nazor
Abstract:  
Small and medium enterprises (SMEs) have been receiving less attention from software vendors than large enterprises (LEs). Research on ERP implementation in certain European countries shows that implementing an Enterprise Resource Planning (ERP) package is a riskier business for SMEs than for LEs. This paper presents a methodology for the efficient implementation of ERP solutions by delegating the main project leadership roles to an experienced consultant. Finally, a case study with the proposed implementation steps is presented. The steps taken in this project, if proven on additional projects with SMEs, could become the basis for a more formalized set of activities, or even a methodology.

 
Title:  
ROLE OF ERP IN MANAGEMENT OF HIGHER EDUCATION FINANCING
Author(s):  
Ljerka Luić and Damir Kalpić
Abstract:  
Despite all the talk about the new economy, higher education institutions still live by the old rules. Budgets are lean, yet must be agile enough to reflect changing requirements. Given these realities, higher education institutions need proven solutions of the kind that only an ERP solution integrating all financial processes (funds, financial accounting and managerial accounting) can offer. The Ministry of Science, Education and Sports of the Republic of Croatia has been implementing an integrated financial information system for six universities, based on lump-sum principles and supported by the SAP ERP solution. Some experiences from this project are presented in the paper.

 
Title:  
DEPLOYMENT OF E-INVOICE IN CROATIA
Author(s):  
Zvonimir Vanjak, Vedran Mornar and Ivan Magdalenić
Abstract:  
Deployment of an e-Invoice infrastructure promises great savings in business transaction costs. However, due to its slow transition from a socialist economy, Croatia has only just begun planning the deployment of an e-Invoice infrastructure, in accordance with the recently released national strategy for the development of e-Business. Unfortunately, the existence of globally competing standards makes decision making much harder. The technical sophistication of the ebXML standard has not prevailed in the struggle for global dominance with the much more broadly implemented web services technology, so some compromise will be necessary. The paper presents an overview of the different standards considered as candidates for the deployment of an e-Invoice infrastructure in Croatia, as well as details of the particularities of Croatia's legal and business environment.

 
Title:  
INFORMATION SYSTEM QUALITY ASSURANCE IN FINANCES - Building the Quality Assurance into Information System Architecture
Author(s):  
Dragutin Vukovic and Krešimir Fertalj
Abstract:  
The key goals in assuring information system quality are the continual improvement of IT performance, the delivery of optimum business value and the assurance of regulatory compliance. Practices that support these goals are strategic alignment, asset and resource management, investment and portfolio management, risk management and sustained operational excellence; these are all matters of governance. While most organizations select a specific framework and apply it to their existing architecture, this may hinder them from taking a more holistic approach to IT governance. This paper discusses governance reference models and frameworks, and proposes a holistic approach in which the prerequisites for quality assurance are built into the information system architecture.

 
Title:  
OBSERVABILITY OF INFORMATION IN DATABASES - New Spins in Data Warehousing for Credit Risk Management
Author(s):  
Stjepan Pavlek and Damir Kalpić
Abstract:  
In this paper the observability of information in modern databases is investigated. Observable information is information that is explicitly stored in a database; unobservable information is hidden in various coding schemes, transaction streams or free-text description fields. Credit risk management nowadays tends to employ cutting-edge technologies and approaches from the fields of statistics and machine learning, yet it is often forgotten that machine learning schemes can only use completely observable information. The issue is usually addressed, if at all, only when the data are used, whereas it should be addressed during the data warehouse design phase.

 
Title:  
WHAT, HOW AND WHEN - The Story of e-Banking in Croatia
Author(s):  
Zoran Bohacek
Abstract:  
This paper presents the reasons for the success of e-banking in Croatia (including actual results), both among individuals and companies. The main reason is that up to early 2002 all companies' payments were made through a centralized system. Only then did banks start to provide payment services for companies, which were faced with a rather easy choice: continue using paper payment orders and receive a service that is more expensive and slower than before, or switch to e-banking and get a faster and cheaper service.

 
Doctoral Consortium  
Title:  
SOFTWARE DEVELOPMENT METHODOLOGY FOR BUILDING INTELLIGENT DECISION SUPPORT SYSTEMS
Author(s):  
Natheer Gharaibeh and Saleh Abu Soud
Abstract:  
Decision support technologies are changing rapidly, and a great deal of innovation is occurring around them. Recently, many improvements have been witnessed in the DSS field with the inclusion of AI techniques and methods. Due to the importance of these kinds of systems, there is a need to establish a software development methodology for them. This paper outlines the software development methodologies used in building DSS and IDSS, and aims to produce an appropriate software development methodology for IDSS.

 
Title:  
MONITORING OF SERVICES IN DISTRIBUTED WORKFLOWS
Author(s):  
Nicolas Repp
Abstract:  
In order to implement cross-organisational workflows and to realise collaborations between enterprises, the use of Web service technology and the Service-oriented Architecture paradigm has become state of the art. Here, as in every distributed system, several challenges arise in ensuring the quality of service delivery. In particular, monitoring workflows and the related services at runtime is crucial in order to fulfil business requirements, e.g., as described in Service Level Agreements (SLAs).
In my research, I am working towards an integrated monitoring approach for distributed service-based workflows, supporting the detection of SLA violations as well as their correction. The following research questions are subject to my work: "what?" to monitor, "where?" to monitor in a service-based environment as well as "how to react?" in case of SLA violations.

 
Title:  
AN ARCHITECTURE FOR NON FUNCTIONAL PROPERTIES MANAGEMENT IN DISTRIBUTED COMPUTING
Author(s):  
Pierre de Leusse, Panos Periorellis, Theo Dimitrakos and Paul Watson
Abstract:  
One of the primary benefits of Service Oriented Architecture (SOA) [1] is the ability to compose applications, processes or more complex services from other services. As the complexity and sophistication of these structures increases, so does the need for adaptability of each component. In recent years, considerable effort has been put into improving the flexibility of these systems so that loosely coupled services can be integrated dynamically without imposing any architectural restrictions. This proposal presents a project to devise a novel model that aims at increasing the adaptability of the resources exposed through it by dynamically managing their non-functional requirements. To manage these non-functional properties, we aggregate the required services into what we define as a profile.
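The profile idea can be illustrated with a minimal sketch. Nothing here comes from the proposal itself: the class and capability names are assumptions, used only to show how named non-functional handlers could be aggregated and applied to a request at runtime.

```python
# Illustrative only: a "profile" bundles the non-functional
# capabilities (auth, logging, ...) attached to an exposed resource.

class Profile:
    """Aggregates named non-functional handlers applied to each request."""
    def __init__(self):
        self.capabilities = {}

    def add(self, name: str, handler):
        """Register (or replace) a capability at runtime."""
        self.capabilities[name] = handler

    def invoke(self, request: str) -> str:
        # Each capability wraps the request before it reaches the resource.
        for name, handler in self.capabilities.items():
            request = handler(request)
        return request

profile = Profile()
profile.add("auth", lambda r: f"[auth]{r}")
profile.add("log", lambda r: f"[log]{r}")
print(profile.invoke("getQuote"))  # -> [log][auth]getQuote
```

Because capabilities are looked up by name, swapping a handler changes the resource's non-functional behaviour without touching the resource itself, which is the adaptability the abstract aims at.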

 
Title:  
RESOURCE PLANNING FOR DISTRIBUTED SERVICE-ORIENTED WORKFLOWS
Author(s):  
Julian Eckert
Abstract:  
Collaborations between enterprises create the need for cross-organizational workflows, which can be realized by adopting the Service-oriented Architecture paradigm. To cope with the challenge of ensuring several Quality of Service (QoS) demands during workflow composition, performance evaluation and execution management of service-oriented workflows have become important in order to avoid performance degradation. In a distributed workflow scenario with services from external partners, resource planning of services is crucial to ensure that the workflow execution remains feasible and that Service Level Agreement (SLA) violations due to overload are avoided.
My research focuses on the development of a holistic resource planning approach that facilitates optimal compositions of services into workflows depending on various customer demands and request priorities, different pricing models, and several QoS requirements. Besides a worst-case and an average-case performance analysis including optimization models, I am working towards a detailed resource planning approach to ensure that all incoming execution requests to a distributed service-oriented workflow can be served at minimal cost.
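A toy version of cost-minimal service selection gives a feel for the optimization problem. All figures and names below are invented for illustration; the abstract's actual models are not published here. For each workflow step, the sketch picks the cheapest candidate service whose response time still meets a QoS bound.

```python
def select_services(candidates: dict, max_latency_ms: float) -> dict:
    """candidates: step -> [(service, cost, latency_ms), ...].
    Greedy per-step selection: cheapest feasible candidate wins."""
    plan = {}
    for step, options in candidates.items():
        feasible = [o for o in options if o[2] <= max_latency_ms]
        if not feasible:
            raise ValueError(f"no feasible service for step {step!r}")
        plan[step] = min(feasible, key=lambda o: o[1])[0]
    return plan

plan = select_services(
    {"check": [("A", 5.0, 80.0), ("B", 3.0, 300.0)],   # B is cheap but too slow
     "book":  [("C", 9.0, 120.0), ("D", 7.0, 150.0)]}, # D is cheapest feasible
    max_latency_ms=200.0)
print(plan)  # -> {'check': 'A', 'book': 'D'}
```

A real planner would optimize across steps jointly (e.g. with an integer program, since per-step greedy choices can miss a globally cheaper plan under end-to-end constraints), which is exactly where the abstract's optimization models come in.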

 
Title:  
SAFETY AND SECURITY ARCHITECTURES FOR AVIONICS
Author(s):  
Youssef Laarouchi, Yves Deswarte, David Powell and Jean Arlat
Abstract:  
In computer systems, commercial off-the-shelf (COTS) components offer extended functionalities for a reasonable cost, and consequently have an important economic advantage. However, such components are hard to integrate into critical systems because of the integrity requirements placed on such systems. To alleviate this problem, we consider the use of Totel’s integrity model (Totel et al., 1998), a model for managing multiple levels of integrity and allowing the use of fault tolerance techniques to validate information flow from low integrity components to high integrity ones. We propose the use of virtualization as a means to diversify COTS components running on the same physical machine and to control information flow on this machine.
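The core rule of a multilevel integrity model like the one cited can be sketched briefly. This is a simplified caricature, not the authors' design: the function names, the levels, and the use of a majority vote as the fault-tolerance mechanism are all assumptions made for illustration.

```python
# Sketch of an integrity-level flow policy: flows downward (or at the
# same level) are free; flows from low- to high-integrity components
# must first be validated by a fault-tolerance mechanism, here a
# majority vote among diversified COTS replicas.

def may_flow(src_level: int, dst_level: int, validated: bool) -> bool:
    """Allow a flow if it does not raise integrity, or if it was validated."""
    if src_level >= dst_level:
        return True
    return validated

def majority_vote(replica_outputs: list) -> object:
    """Accept a low-integrity result only if a strict majority of
    diversified replicas agree; otherwise reject (return None)."""
    for candidate in set(replica_outputs):
        if replica_outputs.count(candidate) * 2 > len(replica_outputs):
            return candidate
    return None

# A COTS component (level 0) feeds a critical task (level 2):
result = majority_vote(["42", "42", "41"])
print(may_flow(0, 2, validated=result is not None))  # -> True (2 of 3 agree)
```

Virtualization fits in as the mechanism for running those diversified replicas in isolation on one physical machine, so a fault in one COTS instance cannot silently corrupt the others.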

 
Title:  
BUSINESS MODEL DRIVEN DESIGN OF SERVICE ARCHITECTURES FOR ENTERPRISE APPLICATIONS INTEGRATION: A Pattern-based Approach
Author(s):  
Veronica Gacitua-Decar and Claus Pahl
Abstract:  
Service-oriented architecture (SOA) is an increasingly adopted architectural approach for solving the Enterprise Applications Integration (EAI) problem originated by business process automation requirements. Although SOA is ever more widely adopted, systematic and formally based methodologies are still maturing. Even less explored is the incorporation of architectural abstractions, such as patterns, as an integral part of these methodologies. Patterns provide a medium to improve the changeability characteristics of architectures and to reuse expert design knowledge. This PhD project proposes a pattern-based software architecture approach to design service architectures for EAI. The approach provides a framework based on a layered architecture structuring the EAI problem. The framework's activities incrementally transform a business model into a service architecture. The transformation is supported by pattern-based techniques, which are used for service identification, for aiding model-to-model transformations, and for architecture modifications, fundamental for the maintainability of architectures.

 
Title:  
AUTOMATED AND COLLABORATIVE ANNOTATION OF DIGITAL VIDEO TO ENABLE SEMANTIC AND PERSONALIZED SEARCH
Author(s):  
Jörg Waitelonis
Abstract:  
This paper proposes annotation of multimedia data, such as academic lecture recordings, to realize content-based and semantic search within the multimedia data. This can be achieved through automated multimedia retrieval methods as well as manual collaborative annotation. Furthermore, the paper proposes using social networking information from a user community to obtain semantically enriched annotations.

 
Title:  
REPLICATION OF BUSINESS OBJECTS IN SERVICE ORIENTED ARCHITECTURES
Author(s):  
Michael Ameling
Abstract:  
Software-on-demand products require efficient solutions to achieve an acceptable performance for users. Multi-tier architectures are the main building block of these service-oriented architecture solutions. They contain the business logic and manage the persistent data of entire business applications. Replicating reusable software components and the application-dependent business data is promising for providing fast local access and high availability to meet customer needs.
However, with replication the replicated data have to be kept consistent, which results in a large transmission overhead. Here, entire business objects are replicated between application servers using Web Service interfaces; database servers merely represent the backend tier. Well-explored replication technologies from distributed databases need to be adapted.
In this thesis, we introduce possible replication strategies for multi-tier architectures and identify the parameters influencing the performance of the different design alternatives. In the first step, we analyze the properties of business objects: we give a classification of business objects and propose a suitable replication procedure. Finally, we present a simulation prototype that is suitable for integrating and comparing different replication solutions.
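One possible replication strategy at the application tier can be sketched as primary-backup propagation of versioned change sets. This is an illustrative assumption, not the strategy the thesis actually develops; all class and field names are invented.

```python
import copy

class BusinessObject:
    """Toy business object: an id plus a dictionary of attributes."""
    def __init__(self, oid: str, attrs: dict):
        self.oid, self.attrs, self.version = oid, dict(attrs), 0

class Primary:
    """Primary application server: applies updates, emits change sets."""
    def __init__(self, obj: BusinessObject):
        self.obj = obj

    def update(self, changes: dict) -> dict:
        self.obj.attrs.update(changes)
        self.obj.version += 1
        # Shipping only the delta keeps transmission overhead low
        # compared to serializing the entire business object.
        return {"oid": self.obj.oid, "version": self.obj.version,
                "delta": dict(changes)}

class Replica:
    """Replica server: applies deltas strictly in version order."""
    def __init__(self, obj: BusinessObject):
        self.obj = copy.deepcopy(obj)

    def apply(self, msg: dict) -> bool:
        if msg["version"] != self.obj.version + 1:
            return False  # out-of-order update would break consistency
        self.obj.attrs.update(msg["delta"])
        self.obj.version = msg["version"]
        return True

order = BusinessObject("order-7", {"status": "new", "total": 100})
primary, replica = Primary(order), Replica(order)
msg = primary.update({"status": "shipped"})
replica.apply(msg)
print(replica.obj.attrs["status"])  # -> shipped
```

The version check is the kind of parameter the abstract alludes to: whole-object versus delta propagation, and strict versus relaxed ordering, trade consistency guarantees against the transmission overhead it mentions.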

 
Title:  
CONTEXT CENTRALISED METHOD FOR SOFTWARE ARCHITECTURE: A Pattern Evolutionary Approach
Author(s):  
Ziyad Alshaikh and Clive Boughton
Abstract:  
Context plays an important role in various analysis and design methods in software engineering, and is typically exemplified within data flow diagrams and (sometimes) design patterns. However, we believe that context within software engineering has been largely limited to identifying system boundaries for the scoping of work. In this paper we introduce an approach where the notion of context plays a central role during the analysis, architecture and design phases of software development, providing greater meaning and understanding. Accordingly, we provide a definition of context and how it relates to requirements, architecture and design and then propose a method of requirements elicitation/analysis based on context and its inherent properties for reducing ambiguity, increasing understanding and enabling greater communication. We extend the ideas to include the building of architectures and designs based on context-pattern evolution.

 
Copyright © INSTICC

Page updated on 21/12/09