Abstract
The Cloud offers enhanced flexibility in the management of resources for any kind of application, while promising cost reduction as well as virtually infinite scalability. Due to these advantages, there is a recent move towards migrating business processes (BPs) to the Cloud. Such a move is currently performed in a manual manner and only in the context of one Cloud. However, a multi- & cross-Cloud configuration of a BP can be beneficial, as it allows exploiting the best possible offers from multiple Clouds and avoids the lock-in effect, also providing the ability to deploy different instances of the BP in different Clouds close to the locations of BP customers. In this respect, this article presents a novel architecture of an environment which realises the vision of multi-Cloud BP provisioning. This environment involves innovative components which support the cross-level orchestration of cloud services as well as the cross-level monitoring and adaptation of BPs. It also relies on the CAMEL language, which has been extended to support the adaptive provisioning of multi-Cloud BPs.
Abstract
Cloud computing offers a flexible pay-as-you-go model for provisioning application resources, which enables applications to scale on demand based on the current workload. In many cases, though, users face the single-vendor lock-in effect, missing opportunities for optimal and adaptive application deployment across multiple clouds. Several cloud modelling languages have been developed to support multi-cloud resource management, but they still lack holistic coverage of all cloud management aspects and phases. This work defines the Cloud Application Modelling and Execution Language (CAMEL), which (i) allows users to specify the full set of design-time aspects for multi-cloud applications, and (ii) supports the models@runtime paradigm that enables capturing an application’s current state, facilitating its adaptive provisioning. CAMEL has already been used in many projects, domains and use cases due to its wide coverage of cloud management features. Finally, CAMEL has been positively evaluated in this work in terms of its usability and applicability in several domains (e.g., data farming, flight scheduling, financial services) based on the technology acceptance model (TAM).
Abstract
Currently, the data to be explored and exploited by computing systems increases at an exponential rate. The massive amount of data, the so-called “Big Data”, puts pressure on existing technologies to provide scalable, fast and efficient support. Recent applications and the current user support from multi-domain computing have assisted in migrating from data-centric to knowledge-centric computing. However, it remains a challenge to optimally store, place or migrate such huge data sets across data centers (DCs). In particular, due to the frequent change of application and DC behaviour (i.e., resources or latencies), data access and usage patterns need to be analyzed as well. The main objective is to find a better data storage location that reduces the overall data placement cost and improves application performance (such as throughput). In this survey paper, we provide a state-of-the-art overview of Cloud-centric Big Data placement together with data storage methodologies. It is an attempt to highlight the actual correlation between the two in terms of better supporting Big Data management. Our focus is on management aspects, seen under the prism of non-functional properties. In the end, readers can appreciate the deep analysis of the respective technologies related to Big Data management and be guided towards their selection in the context of satisfying their non-functional application requirements. Furthermore, challenges are supplied highlighting the current gaps in Big Data management and marking the way it needs to evolve in the near future.
Abstract
Effective and accurate service discovery and composition rely on complete specifications of service behaviour, containing the inputs and preconditions required before service execution, the outputs, effects and ramifications of a successful execution, and explanations for unsuccessful executions. The previously defined Web Service Specification Language (WSSL) relies on the fluent calculus formalism to produce such rich specifications for atomic and composite services. In this work, we propose further extensions that focus on the specification of QoS profiles, as well as partially observable service states. Additionally, a design framework for service-based applications is implemented based on WSSL, advancing the state of the art by being the first service framework to simultaneously provide several desirable capabilities: supporting ramifications and partial observability, as well as non-determinism in composition schemas using heuristic encodings; providing explanations for unexpected behaviour; and QoS-awareness through goal-based techniques. These capabilities are illustrated through a comparative evaluation against prominent state-of-the-art approaches based on a typical SBA design scenario.
Abstract
Multi-cloud adaptive application provisioning can solve the vendor lock-in problem and allows optimising user requirements by selecting the best from the multitude of services offered by different cloud providers. To this end, such a provisioning type is increasingly supported by new or existing research prototypes and platforms. One major concern, actually preventing users from moving to the cloud, relates to security, which becomes more complex in multi-cloud settings. Such a concern spans two main aspects: (a) suitable access control on user personal data, VMs and platform services and (b) planning and adapting application deployments based on security requirements. As such, this paper addresses both security aspects by proposing a novel model-driven approach and architecture which secures multi-cloud platforms, enables users to have their own private space and guarantees that application deployments are not only constructed based on a certain user-required security level but can also maintain it. Such a solution exploits state-of-the-art security standards, security software and secure model management technology. Moreover, it covers different access control scenarios involving external, web-based and programmatic user authentication.
Abstract
This White Paper reports the outcome of a Workshop on “Research Data Service Discoverability” held on the island of Santorini (GR) on 21–22 April 2016 and organized in the context of the EU-funded Project “RDA-E3”. The Workshop addressed the main technical problems that hamper an efficient and effective discovery of Research Data Services (RDSs) based on appropriate semantic descriptions of their functional and non-functional aspects. In the context of this White Paper, RDSs are understood as those data services that manipulate/transform research datasets for the purpose of gaining insight into complicated issues. In this White Paper, the main concepts involved in the discovery process of RDSs are defined; the RDS discovery process is illustrated; the main technologies that enable the discovery of RDSs are described; and a number of recommendations are formulated to indicate future research directions and make automatic RDS discovery feasible.
Abstract
Cloud computing promises to transform applications and services on the web into elastic and fault-tolerant software. Towards this target, various research prototypes and products have already been proposed. However, especially with respect to the design phase of cloud-based applications, such prototypes do not enable the appropriate composition of cloud services at different levels to realise not only the functionality but also the underlying infrastructure support for such applications. Moreover, most existing prototypes and products lack the appropriate semantics to guarantee that the respective design product is the most suitable and accurate one according to the various types of user requirements posed. To this end, this article proposes a semantic cloud application management framework that addresses the aforementioned issues by relying on ontologies to semantically describe cloud service capabilities and application requirements, on semantic cloud service matchmakers considering both functional and non-functional aspects, as well as on a novel cloud service composition approach which is able to concurrently perform service concretisation and deployment plan reasoning, thus catering for the different levels involved in a cloud environment and their respective dependencies while also satisfying all types of user requirements posed. The service composition approach is experimentally evaluated, yielding quite promising results which indicate an advance over the state of the art.
Abstract
The Web has been evolving into a sink of disparate information sources which are totally isolated from each other. The technology of Linked Data (LD) promises to connect such information sources in order to enable their better exploitation by humans or automated programs. While various LD management systems have been proposed, only a few of them are able to handle geospatial data, which are becoming quite popular nowadays and lead to the creation of large geospatial footprints. However, none of the few systems that support Linked Open Geospatial Data is able to scale well to handle the increasing load from user queries. In addition, the publishing of geospatial LD is also quite challenging due to its complexity. To this end, this article proposes a novel, cloud-based geospatial LD management system which can scale out or scale in according to the incoming load in order to serve the respective user requests with the appropriate service level. On top of this system lies an LD-as-a-service offering which abstracts away any LD publishing complexities from the user and provides all the appropriate functionality for enabling full LD management. We also study and propose architectural solutions for the distributed update problem. The proposed system is evaluated under heavy-load scenarios and the results show that the respective performance improvement is quite satisfactory and that the scaling actions are performed at the appropriate time points.
Abstract
The Service-Oriented Computing (SOC) paradigm is currently being adopted by many developers, as it promises the construction of applications through the reuse of existing Web Services (WSs). However, current SOC tools produce applications that interact with users in a limited way. This limitation is overcome by model-based Human-Computer Interaction (HCI) approaches that support the development of applications whose functionality is realized with WSs and whose User Interface (UI) is adapted to the user's context. Typically, such approaches do not consider various functional issues, such as the applications' semantics, their syntactic robustness in terms of the WSs selected to implement their functionality, and the automation of the service discovery and selection processes. To this end, we propose a model-driven design method for interactive service-based applications that is able to consider these functional issues and their implications for the UI. This method is realized by a semiautomatic environment that can be integrated into current model-based HCI tools to complete the development of interactive service front-ends. The proposed method takes as input an HCI task model, which includes the user's view of the interactive system, and produces a concrete service model that describes how existing services can be combined to realize the application's functionality. To achieve its goal, our method first transforms system tasks into semantic service queries by mapping the task objects onto domain ontology concepts; then it sends each resulting query to a semantic service engine so as to discover the corresponding services. In the end, only one service from those associated with a system task is selected, through the execution of a novel service concretization algorithm that ensures message compatibility between the selected services.
Abstract
Service-orientation paves the way for the Internet of Services (IoS), where millions of services will be available to realize the everyday user applications or tasks. Consequently, as a great number of functionally equivalent services will be available for a specific user task, the service nonfunctional aspect should be considered for filtering and selecting among these services. The state-of-the-art approaches in nonfunctional service discovery exploit constraint solving techniques to optimize the matchmaking time between a service offer and demand pair. However, they do not scale well, as matchmaking time is proportional to the offer number, so they are not yet suitable for the IoS. To this end, this article proposes three novel alternative techniques that intelligently organize the service offer space to improve the overall matchmaking time. These techniques are theoretically and experimentally evaluated. The results show that all techniques optimize the matchmaking time without sacrificing accuracy and that each technique is better in different circumstances.
Abstract
Quality of service (QoS) can be a critical element for achieving the business goals of a service provider, for the acceptance of a service by the user, or for guaranteeing service characteristics in a composition of services, where a service is defined as either a software or a software-support (i.e., infrastructural) service which is available on any type of network or electronic channel. The goal of this article is to compare the approaches to QoS description in the literature, where several models and metamodels are included. We consider a large spectrum of models and metamodels to describe service quality, ranging from ontological approaches to define quality measures, metrics, and dimensions, to metamodels enabling the specification of quality-based service requirements and capabilities as well as of SLAs (Service-Level Agreements) and SLA templates for service provisioning. Our survey is performed by inspecting the characteristics of the available approaches to reveal which are the consolidated ones and which are specific to given aspects, and to analyze where the need for further research and investigation lies. The approaches illustrated here have been selected based on a systematic review of conference proceedings and journals spanning various research areas in computer science and engineering, including: distributed, information, and telecommunication systems; networks and security; and service-oriented and grid computing.
Abstract
As organizations operate in a highly dynamic business world, they can only survive by optimizing their business processes (BPs) and outsourcing functionality complementary to their core business. To this end, they adopt service-orientation as the underlying mechanism enabling BP optimization and evolution. BPs are now seen as business services (BSs) that span organization boundaries and ought to satisfy cross-organizational objectives. As such, various BS design approaches have been proposed. However, these approaches cannot reuse existing business and software services (SSs) to realize the required BS functionality. Moreover, non-functional requirements and their impact on BS design are not considered. This research gap is covered by a novel, goal-oriented method able to discover those BS and SS compositions fulfilling the required BS functional and non-functional goals at both the business and IT level. This method coherently integrates the design steps involved and properly handles the lack of required BS components. It also advances the state of the art in service composition by being able to select both the best composition plan and the best services realizing the plan tasks, based on novel plan and service selection criteria.
Abstract
The goal of service-oriented architectures (SOAs) is to enable the creation of business applications through the automatic discovery and composition of independently developed and deployed (Web) services. Automatic discovery of Web services (WSs) can be achieved by incorporating semantics into a richer WS description model (WSDM) and by the use of semantic Web (SW) technologies in the WS matchmaking and selection (i.e., discovery) process. A sufficiently rich WSDM should encompass not only functional but also nonfunctional aspects like quality of service (QoS). QoS is a set of performance and domain-dependent attributes that has a substantial impact on WS requesters' expectations. Thus, it can be used for distinguishing between the many functionally equivalent WSs that are available nowadays. This paper starts by defining QoS in the context of WSs. Its main contribution is the analysis of the requirements for a semantically rich QoS-based WSDM and an accurate, effective QoS-based WS Discovery (WSDi) process. In addition, a road map for extending current WS standard technologies to realize semantic, functional, and QoS-based WSDi, respecting the above requirements, is presented.
Abstract
QoS-based Web service (WS) discovery has been recognized as the main solution for filtering and selecting between functionally equivalent WSs stored in registries or other types of repositories. There are two main techniques for QoS-based WS matchmaking (filtering): ontology-based and constraint programming (CP)-based. Unfortunately, the first technique is not efficient, as it is based on the rather immature technology of ontology reasoning, while the second one is not accurate, as it is based on syntactic QoS-based descriptions and faulty matchmaking metrics. In our previous work, we have developed an extensible and rich ontology language for QoS-based WS description. Moreover, we have devised a semantic alignment algorithm for aligning QoS-based WS descriptions so as to increase the accuracy of QoS-based WS matchmaking algorithms. Finally, we have developed two alternative CP-based QoS matchmaking algorithms: a unary-constrained and an n-ary-constrained one. In this paper, we claim that mixed-integer programming (MIP) should be used as the matchmaking technique instead of CP, and we provide experimental results proving it. In addition, we analyze and experimentally evaluate our matchmaking algorithms against a competing technique in order to demonstrate their efficiency and accuracy.
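To make the constraint-based matchmaking idea concrete, the following sketch (a minimal illustration under assumed interval-form QoS constraints, not the paper's exact formulation) checks offer-demand conformance as a containment test: the offer matches only if its constraint region lies entirely within the demand's, verified by showing that the offer conjoined with the negation of each demand bound is infeasible. It uses the PuLP library; all metric names and bounds are hypothetical.

```python
# Illustrative constraint-based QoS matchmaking (not the paper's exact model).
# Match iff (offer AND NOT demand-bound) is infeasible for every negated
# demand bound, i.e. the offer's QoS region is contained in the demand's.
from pulp import LpProblem, LpVariable, LpMinimize, LpStatus, PULP_CBC_CMD

EPS = 1e-6  # approximates the strict inequality of a negated bound

def matches(offer, demand):
    """offer/demand: dicts mapping a metric name to a (lo, hi) interval."""
    for metric, (d_lo, d_hi) in demand.items():
        if metric not in offer:
            return False  # offer gives no guarantee for a demanded metric
        # NOT(d_lo <= x <= d_hi) splits into x <= d_lo - EPS OR x >= d_hi + EPS.
        for negated in ("below", "above"):
            prob = LpProblem("qos_match", LpMinimize)
            x = {m: LpVariable(m) for m in offer}
            for m, (o_lo, o_hi) in offer.items():  # the offer's guarantees
                prob += x[m] >= o_lo
                prob += x[m] <= o_hi
            if negated == "below":
                prob += x[metric] <= d_lo - EPS
            else:
                prob += x[metric] >= d_hi + EPS
            prob.solve(PULP_CBC_CMD(msg=False))
            if LpStatus[prob.status] != "Infeasible":
                return False  # some offered QoS value escapes the demand
    return True

# Offer: response time within [50, 80] ms; demand tolerates up to 100 ms.
print(matches({"rt_ms": (50.0, 80.0)}, {"rt_ms": (0.0, 100.0)}))  # True
```

Adding integer variables (e.g., for discrete QoS levels) would turn each feasibility check into a MIP, which is the direction the paper argues for.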
Abstract
The ARION system provides basic e-services for the search and retrieval of objects in scientific collections, such as datasets, simulation models and tools necessary for statistical and/or visualization processing. These collections may represent application software of scientific areas; they reside in geographically dispersed organizations and constitute the system content. The user may invoke on-line computations of scientific datasets when the latter are not found in the system. The underlying grid consists of hardware and software resources in the various participating organizations. ARION manages these resources, providing a computing framework that produces the datasets required by the user. In addition, the system offers semantic descriptions of its content in terms of scientific ontologies and metadata information. Thus, ARION provides the basic infrastructure for accessing and deriving scientific information in an open, distributed and federated system.
Abstract
Multi-cloud computing promises to deliver certain benefits, like performance optimisation and cost reduction. However, most cloud application modelling languages are tied to one cloud platform and do not have the right expressiveness to cover all application lifecycle phases. This survey attempts to review the most important of these languages, which facilitate application provisioning in commercial cloud platforms. The main review goals are to: (a) highlight those languages that are already or nearly multi-cloud enabled; (b) determine those parts of the remaining languages that must be extended to support multi-cloud application modelling. The review results also lead to drawing some future work directions towards producing an ideal multi-cloud application specification language.
Abstract
Most organisations follow an open-source strategy when developing software products like cloud platforms. However, integrating various software pieces in the wrong way can lead to platforms that neither feature suitable non-functional properties nor easily scale to handle an increase in the customer base. Such platforms would thus not properly face the fierce competition in the cloud world. This paper proposes a novel methodology for selecting the best possible integration method and strategy for cloud platforms, which enables addressing the previous challenges and producing platforms that can go beyond the current competition. This methodology relies on investigating the current open-source platform components to be integrated and selecting the right integration method and strategy that leads to the best possible integration result while keeping the integration effort to the minimum possible. This paper demonstrates the methodology's application on the MELODIC multi-cloud management platform, currently being taken up in the cloud market.
Abstract
Clouds offer significant advantages over traditional cluster computing architectures, including flexibility, high availability, ease of deployment, and on-demand resource allocation -- all packed up in an attractive pay-as-you-go economic model for the users. However, cloud users are often forced into vendor lock-in due to the use of incompatible APIs, cloud-specific services, and complex pricing models of the cloud service providers (CSPs). Cloud management platforms (CMPs), supporting hybrid and multi-cloud deployment, offer an answer by providing a unified abstract interface to multiple cloud platforms. Nonetheless, modelling applications to use multi-clouds, automated resource selection based on user requirements from various available CSPs, cost optimization, security, and runtime adaptation of deployed applications still remain a challenge. In this tutorial, we provide a practical introduction to multi-cloud application modelling, configuration, deployment, and adaptation. We survey existing CMPs and compare their features and modelling methods. Finally, we provide practical hands-on training for getting your applications ready for multi-cloud using selected tools. By the end of this tutorial, attendees should be able to understand the benefits of the multi-cloud approach and be prepared to deploy their first managed multi-cloud application.
Abstract
Security is considered one of the top impeding factors for migrating SME services and applications to the cloud. However, various technological advancements in cloud security actually raise the security level with respect to traditional, on-premise deployment models. Nevertheless, SMEs, while being aware of these advancements, do not apply them as early as possible during the design of their cloud products and services. In other words, they neglect the main benefits that security-by-design offers. Further, SMEs often employ one or more open-source security tools without properly configuring them to fit the current context. This creates three main issues: (a) a waste of resources can occur; (b) the protection level achieved can be unsuitable; (c) improper accuracy in vulnerability and security event detection could lead to taking wrong actions or to not reacting to critical security events. This paper proposes a security-by-design solution which focuses on vulnerability assessment and attempts to deal with the first and third of the aforementioned issues. These issues are addressed through: (a) the supply of a configuration meta-model enabling proper configuration of the vulnerability assessment so as to reach the right accuracy and performance level without impacting the precious resources available for the proper functioning of the SME's applications; (b) the orchestration of various kinds of vulnerability scanning tools, which enables increasing the scanning accuracy.
Abstract
Simulations are a primary means for evaluating systems and producing knowledge related to their optimal configuration for production. Simulation systems support the execution of simulations. These can be installed and executed internally in an organisation or can be offered as a service in the cloud. Current simulation-as-a-service (SimaaS) offerings rely on VM- or container-based deployments, which lead to additional costs due to charging on an hourly basis. Further, such offerings cannot be easily adapted at runtime so as to sustain their promised service level. To resolve these issues, this paper proposes a novel SimaaS architecture and solution which exploits the serverless computing paradigm to reduce the simulation cost based on the actual usage of resources as well as to accelerate the simulation time through the limitless, parallelised invocation of functions. Further, this solution relies on the MELODIC/Functionizer multi-cloud platform, which enables adapting the simulation execution at runtime in order to sustain the right service level according to the user requirements and preferences. For the validation of our solution, a real business application provided by AI Investments has been used. It aims to optimise investment portfolios using the most advanced AI-based methods and requires heavy computational power to accomplish the respective tasks.
Abstract
This paper advances the state of the art by enhancing an existing provider-independent modelling language towards the complete specification of both serverless and hybrid multi-cloud applications. This extension has been validated by a use case developed in the context of the Functionizer project.
Abstract
Serverless computing is currently gaining momentum due to the main benefits it introduces, which include zero administration and reduced operation cost for applications. However, not all application components can be made serverless, also in view of certain limitations with respect to the deployment of such components on corresponding serverless platforms. In this respect, there is currently a great need for managing hybrid applications, i.e., applications comprising both normal and serverless components. Such a need is covered in this paper through extending the Melodic platform in order to support the deployment and adaptive provisioning of hybrid, cross-cloud applications. Apart from analysing the architecture of the extended platform, we also explain the relevant challenges for supporting the management of serverless components and how we intend to confront them. A use case is also utilised in order to showcase the main benefits of the proposed platform.
Abstract
Currently, there is a move towards adopting multi-clouds due to their main benefits, including vendor lock-in avoidance and optimal application realisation via different cloud services. However, such multi-cloud applications face a new challenge related to the dynamicity and uncertainty that even a single cloud environment exhibits. As such, they cannot deliver a suitable service level to their customers, resulting in SLA penalty costs and a reduction in the application provider's reputation. To this end, we have previously proposed a cross-level and multi-cloud application adaptation architecture. Towards realising this architecture, this paper proposes two extensions of the CAMEL language which allow specifying advanced adaptation rules and histories. Such extensions not only enable covering cross-level application adaptation by executing adaptation workflows but also allow such adaptation to progress so as to address the evolution of both the application and the exploited cloud services.
Abstract
Security solutions for cloud applications usually exploit security tools as-is, utilising their default configuration. On the one hand, this can lead to a waste of resources. On the other hand, it can also lead to not properly protecting the different application components based on their diverse security requirements. To this end, this paper proposes a security solution for cross-cloud applications which is configurable according to a flexible configuration specification given by the DevOps engineer. Such a specification conforms to a certain UML-based meta-model and is independent of the underlying security tools exploited. In this way, DevOps engineers can produce a varied security level per application component that better suits its security requirements. We demonstrate the suitability of our solution through an evaluation showcasing that it can lead to reduced resource consumption without compromising the security of the components that it protects.
Abstract
Serverless computing is a new computing paradigm that promises to revolutionize the way applications are built and provisioned. In this paradigm, small pieces of software called functions are deployed in the cloud with zero administration and minimal costs for the software developer. Further, this paradigm has various applications in areas like image processing and scientific computing. Due to the above advantages, the current uptake of serverless computing is being driven by traditional big cloud providers like Amazon, who offer serverless platforms for serverless application deployment and provisioning. However, as in the case of cloud computing, such providers attempt to lock in their customers by supplying complementary services which provide added-value support to serverless applications. To resolve this issue, serverless frameworks have recently been developed. Such frameworks either abstract away from serverless platform specificities, or they enable the production of a mini serverless platform on top of existing clouds. However, these frameworks differ in various features that do have an impact on the serverless application lifecycle. To assist developers in selecting the most suitable framework, this paper attempts to review these frameworks according to a certain set of criteria that directly map to the application lifecycle. Further, based on the review results, some remaining challenges are supplied which, when confronted, will make serverless frameworks highly usable and suitable for handling both serverless and mixed application kinds.
Abstract
Cloud services operate in a highly dynamic environment. This means that they need to be accompanied by dynamic SLAs which explicate how a rich set of QoS guarantees evolves over time. Only in this way will cloud users trust and thus migrate their processes to the cloud. Research-wise, SLAs are assumed to include single states and are managed mainly in a centralised manner. This paper proposes a framework to manage dynamic SLAs in a distributed manner by relying on a rich and dynamic SLA formalism which is transformed into a smart contract. This contract is then handled via the blockchain, which exploits an oracle-based interface to retrieve the sensed off-chain cloud service context and enforce the right SLA management/modification functions. The proposed framework can change the current shape of the cloud market by catering for the notion of an open distributed cloud which offers manageable and dynamic services to cloud customers, enabling them to reduce costs and increase flexibility in resource management.
Abstract
Cloud computing is a paradigm that has revolutionized the way service-based applications are developed and provisioned due to the main benefits that it introduces, including more flexible pricing and resource management. The most widely used kind of cloud service is Infrastructure-as-a-Service (IaaS), through which infrastructure in the form of a VM is offered over which users can create a suitable environment for provisioning their application components. By following the micro-service paradigm, not just one but multiple cloud services are required to provision an application. This leads to the need to solve an optimisation problem for selecting the right IaaS services according to the user requirements. The current techniques employed to solve this problem are either exhaustive, and thus not scalable, or adopt heuristics, sacrificing optimality for a reduced solving time. In this respect, this paper proposes a novel technique which involves modelling the optimisation problem in a different form than the most common one. In particular, this form enables the use of exhaustive techniques, like constraint programming (CP), such that an optimal solution is delivered in a much more scalable manner. The main benefits of this technique are highlighted through an experimental evaluation against a classical CP-based exhaustive approach.
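Since the abstract does not detail the paper's reformulated model, the sketch below illustrates only the classical exhaustive CP formulation it is compared against: one boolean decision per component-offering pair, a requirement filter, and a cost-minimisation objective. It uses Google OR-Tools CP-SAT; the components, offerings and prices are invented for the example.

```python
# Classical CP formulation of IaaS selection (the baseline the paper improves
# upon); all offerings, requirements and prices below are hypothetical.
from ortools.sat.python import cp_model

offerings = [  # (name, cores, ram_gb, hourly_cost_in_tenths_of_cents)
    ("small", 2, 4, 90), ("medium", 4, 8, 180), ("large", 8, 16, 350),
]
components = [  # (name, min_cores, min_ram_gb)
    ("frontend", 2, 4), ("backend", 4, 8), ("db", 4, 16),
]

model = cp_model.CpModel()
# x[ci][oi] == 1 iff component ci is deployed on offering oi.
x = [[model.NewBoolVar(f"{c[0]}_on_{o[0]}") for o in offerings]
     for c in components]
for ci, (_, need_cores, need_ram) in enumerate(components):
    model.AddExactlyOne(x[ci])  # each component gets exactly one offering
    for oi, (_, cores, ram, _) in enumerate(offerings):
        if cores < need_cores or ram < need_ram:
            model.Add(x[ci][oi] == 0)  # forbid under-provisioned choices
# Minimise the total hourly cost of all selected offerings.
model.Minimize(sum(x[ci][oi] * offerings[oi][3]
                   for ci in range(len(components))
                   for oi in range(len(offerings))))

solver = cp_model.CpSolver()
if solver.Solve(model) == cp_model.OPTIMAL:
    for ci, comp in enumerate(components):
        oi = next(i for i in range(len(offerings)) if solver.Value(x[ci][i]))
        print(comp[0], "->", offerings[oi][0])  # e.g. "backend -> medium"
```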
Abstract
Nowadays, data are being produced at a very fast pace. This leads to the generation of big data that need to be properly managed, especially due to the increased complexity that their size introduces. Such data are usually subject to further processing to obtain added-value knowledge out of them. Current systems seem to focus on performing this processing more optimally while neglecting that data placement can have a tremendous effect on processing performance. In this respect, big data placement algorithms have already been proposed. However, most of them are either suggested in isolation from the big data processing system or are not dynamic enough to deal with required big data placement changes at runtime. As such, this paper proposes a novel, dynamic big data placement algorithm which can find better placement solutions by considering multiple optimisation objectives and solving the big data placement problem more precisely with respect to the state of the art. Further, a novel suggestion for optimally combining such an algorithm with a big data application management system is proposed, so as to have the ability to jointly address big data placement, processing and resource management issues. Respective experimental evaluation results showcase the efficiency of our algorithm in producing optimal big data placement solutions.
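As a toy illustration of the multi-objective flavour of such placement decisions (not the paper's algorithm; all figures are invented), the snippet below scores candidate data centres by a normalised weighted sum of storage cost and access latency:

```python
# Toy weighted-sum scoring of candidate data-centre placements over two
# objectives (storage cost, access latency). Purely illustrative; the
# numbers are made up and this is not the paper's placement algorithm.
datacenters = {  # name -> (storage_cost_per_gb, avg_access_latency_ms)
    "dc_eu": (0.020, 35.0),
    "dc_us": (0.018, 90.0),
    "dc_asia": (0.025, 140.0),
}

def normalise(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def best_placement(weights=(0.5, 0.5)):
    names = list(datacenters)
    costs = normalise([datacenters[n][0] for n in names])
    lats = normalise([datacenters[n][1] for n in names])
    # Lower weighted score is better; weights trade cost against latency.
    scores = [weights[0] * c + weights[1] * l for c, l in zip(costs, lats)]
    return names[scores.index(min(scores))]

print(best_placement())            # balanced weights -> "dc_eu"
print(best_placement((0.9, 0.1)))  # cost-dominated weights -> "dc_us"
```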
Abstract
In this paper we present a security meta-model for describing security requirements of cloud applications as well as a platform architecture to drive perimeter security and continuous risk assessment tools and processes supporting application deployments across regions and clouds. We demonstrate a case study of a geo-distributed cloud deployment with a specifically configured intrusion detection solution to handle DDoS attacks via cloud resource elasticity actions.
Abstract
A modern service-based application (SBA) operates in a cross-cloud, highly dynamic environment while comprising various components at different abstraction levels that might fail. To support cross-level SBA adaptation, a cross-cloud Service Level Objective (SLO) monitoring and evaluation system is required, able to produce the right events that must trigger suitable adaptation actions. While most research focuses on SBA monitoring, SLO evaluation is usually restricted to a centralised, single-cloud form, not amenable to the heavy workloads that could occur in a complex SBA system. Thus, a fast and scalable event generation and processing system is needed, able to scale well to handle such a load. Such a system must address cross-level event composition, suitable for detecting complex problematic situations. This paper closes this gap by proposing a novel complex event processing framework, scalable and distributable across the whole SBA architecture. This framework can cover any kind of event combination, no matter how complex it is. It also supports event pattern management while exploiting a publish-subscribe mechanism to: (a) synchronise with the modification of adaptation rules directly involving these event patterns; (b) enable decoupling from an SBA management system.
Abstract
Modern service-based applications (SBAs) operate in highly dynamic environments where both the underlying resources and the application demand can be constantly changing, while external SBA components might fail. Thus, they need to be rapidly modified to address such changes. Such rapid updating should be performed across multiple levels to better deal, in an orchestrated and globally consistent manner, with the current problematic situation. First of all, this means that a fast and scalable event generation and detection mechanism should exist to rapidly trigger the adaptation workflow to be performed. Such a mechanism needs to handle all kinds of events occurring at different abstraction levels and to compose them so as to detect more advanced situations. To this end, this paper introduces a new complex event processing framework able to realise the respective features mentioned (processing speed, scalability) and flexible enough to capture and sense any kind of event or event combination occurring in the SBA system. Such a framework is wrapped in the form of a REST service enabling the management of the event patterns that need to be rapidly detected. It is also well connected, via a publish-subscribe mechanism, to other main components of the SBA management system, including the monitoring and adaptation engines.
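A minimal sketch of the cross-level event composition idea (purely illustrative; the event names and pattern class are hypothetical and do not reflect the framework's API): an infrastructure-level and a service-level event occurring within a common time window are combined into a composite event, which is then published to subscribers such as an adaptation engine.

```python
# Minimal cross-level event composition over a publish-subscribe bus.
# Illustrative only; event topics and the pattern type are invented.
import time
from collections import defaultdict, deque

class Bus:
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, topic, fn):
        self.subs[topic].append(fn)
    def publish(self, topic, event):
        for fn in self.subs[topic]:
            fn(event)

class AndWithinWindow:
    """Fires a composite event when all member events occur within `window` s."""
    def __init__(self, bus, members, out_topic, window=10.0):
        self.bus, self.out_topic, self.window = bus, out_topic, window
        self.seen = {m: deque() for m in members}
        for m in members:
            bus.subscribe(m, lambda e, m=m: self.on_event(m, e))
    def on_event(self, member, event):
        now = time.time()
        self.seen[member].append(now)
        for q in self.seen.values():  # drop timestamps outside the window
            while q and now - q[0] > self.window:
                q.popleft()
        if all(self.seen.values()):  # every member seen within the window
            self.bus.publish(self.out_topic, {"pattern": self.out_topic, "at": now})

bus = Bus()
AndWithinWindow(bus, ["infra.cpu_high", "service.slow_response"], "sba.slo_risk")
bus.subscribe("sba.slo_risk", lambda e: print("adaptation trigger:", e))
bus.publish("infra.cpu_high", {})
bus.publish("service.slow_response", {})  # -> composite event fires
```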
Abstract
Cloud computing offers a great opportunity for business process (BP) flexibility, adaptability and reduced costs. This leads to realising the notion of business process as a service (BPaaS), i.e., BPs offered on demand in the cloud. This paper introduces a novel architecture focusing on BPaaS design that includes the integration of existing state-of-the-art components as well as new ones taking the form of a business and a syntactic matchmaker. The end result is an environment enabling the transformation of domain-specific BPs into executable workflows which can then be made deployable in the cloud so as to become real BPaaSes.
Abstract
The notion of a BPaaS is currently gaining momentum as many organisations attempt to move and offer their business processes (BPs) in the cloud. Such BPs need to be adaptively provisioned so as to sustain the service level promised in the respective SLA. However, current cloud-based adaptation frameworks cannot cover all possible abstraction levels and usually rely on simplistic adaptation rules. As such, this paper proposes a novel BPaaS adaptation framework able to orchestrate actions on different abstraction levels so as to better address the current problematic situation. This framework can support the dynamic generation of adaptation workflows as well as the recording of the adaptation history for analysis purposes. It is also coupled with the CAMEL language, which has been extended to support the specification of cross-level adaptation workflows.
Abstract
Cloud computing has proved to offer flexible IT solutions. Although large enterprises may benefit from this technology by educating their IT departments, SMEs face the risk of dramatically falling behind in cloud usage and hence losing the ability to efficiently adapt their IT to their business needs. This chapter presents the vision and the outcome of the H2020 project CloudSocket. The foundation is the idea of Business Process as a Service, where concept models and semantics are applied to align business processes with multi-cloud deployed workflows. The proposed CloudSocket platform consists of four architectural building blocks: (i) design, (ii) allocation, (iii) execution, and (iv) evaluation. These are organised as environments that cope with specific tasks and research questions. An overview of each environment is given along with the main prototypes that were developed to push the state of the art in the respective field. We show the success of the achievements in current research and how we will pursue the open questions.
Abstract
Linked Data (LD) represents a great mechanism towards integrating information across disparate sources. The respective technology can also be exploited to perform inferencing for deriving added-value knowledge. As such, LD technology can really assist in performing various analysis tasks over information related to business process execution. In the context of Business Process as a Service (BPaaS), the first real challenge is to collect and link information originating from different systems by following a certain structure. As such, this paper proposes two main ontologies that serve this purpose: a KPI and a Dependency one. Based on these well-connected ontologies, an innovative Key Performance Indicator (KPI) analysis system is then built which exhibits two main analysis capabilities: KPI assessment and drill-down, where the latter can be exploited to find the root causes of KPI violations. Compared to other KPI analysis systems, LD usage enables the flexible construction and assessment of any KPI kind, allowing experts to better explore the possible KPI space.
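To hint at how Linked Data makes KPI assessment flexible (a minimal sketch with rdflib; the kpi: vocabulary below is an invented stand-in for the paper's KPI ontology), a KPI can be evaluated as a plain SPARQL aggregate over the collected execution data:

```python
# Tiny illustration of KPI assessment over Linked Data with rdflib + SPARQL.
# The kpi: vocabulary is hypothetical, not the paper's actual ontology.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

KPI = Namespace("http://example.org/kpi#")
g = Graph()
for i, seconds in enumerate([120.0, 95.0, 180.0]):  # workflow durations (s)
    ex = KPI[f"execution{i}"]
    g.add((ex, RDF.type, KPI.WorkflowExecution))
    g.add((ex, KPI.durationSeconds, Literal(seconds, datatype=XSD.double)))

# KPI: average workflow duration must stay below a target of 150 seconds.
result = g.query("""
    PREFIX kpi: <http://example.org/kpi#>
    SELECT (AVG(?d) AS ?avgDuration) WHERE {
        ?e a kpi:WorkflowExecution ; kpi:durationSeconds ?d .
    }
""")
avg = float(next(iter(result))[0])
print(f"avg={avg:.1f}s ->", "KPI met" if avg < 150.0 else "KPI violated")
```

Because the KPI is just a query over the graph, swapping in a different KPI definition means changing the SPARQL text rather than the collection pipeline, which is the flexibility the abstract refers to.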
Abstract
Several SLA languages have been proposed, some specifically for the cloud domain. However, after extensively analysing the domain's requirements considering the SLA lifecycle, we conclude that none of them covers the necessary aspects for application in diverse real-world scenarios. In this paper, we propose SSLAC, which combines the capabilities of two prominent service specification and SLA languages: OWL-Q and SLAC. These languages have different scopes but complementary features. SLAC is domain-specific with validation and verification capabilities. OWL-Q is a higher-level language based on ontologies and well-defined semantics. Their combination advances the state of the art from many perspectives. It enables the SLA's semantic verification and inference and, at the same time, its constraint-based modelling and enforcement. It also provides a complete formal approach for defining non-functional terms and an enforcement framework covering real-world scenarios. The advantages of SSLAC, in terms of expressiveness and features, are then shown in a use case modelled with it.
Abstract
In order to implement cross-organizational workflows and to realize collaborations between small and medium enterprises (SMEs), the use of Web service technology, Service-Oriented Architecture and Infrastructure-as-a-Service (IaaS) has become a necessity. Based on these technologies, the need to monitor the quality of (a) the acquired resources, (b) the services offered to the final users and (c) the workflow-based procedures used by SMEs in order to use services, has come to the fore. To tackle this need, we propose four metric Quality Models that cover quality terms for the Workflow, Service and Infrastructure layers, and an additional one for expressing the equality and inter-dependency relations between the previous ones. To support these models, we have implemented a cross-layer monitoring system, whose main advantages are the layer-specific metric aggregators and an event pattern discoverer for processing the monitoring log. Our evaluation is based on the performance and accuracy aspects of the proposed cross-layer monitoring system.
Abstract
Current PaaS platforms enable single- or hybrid-cloud deployments. However, such deployment types cannot best cover the user application requirements, as they do not consider the great variety of services offered by different cloud providers and the effects of vendor lock-in. On the other hand, multi-cloud deployment enables selecting the best possible service among equivalent ones, providing the best trade-off between performance and cost. In addition, it avoids cases of service level deterioration due to service under-performance as a main effect of vendor lock-in. While many multi-cloud application deployment research prototypes have been proposed, such prototypes do not examine the effect that deployment decisions have on application performance. As such, they blindly attempt to satisfy low-level hardware requirements while neglecting the impact of allocation decisions on higher-level requirements at the component or application level. To this end, this paper proposes a new IaaS selection algorithm which, apart from being able to satisfy both low- and high-level requirements of different types, also exploits deployment knowledge offered via reasoning over previous application execution histories to take the best possible allocation decisions. The experimental evaluation clearly shows that by considering this extra knowledge, more optimal deployment solutions are derived, able to maintain the service levels requested by users, in less solving time.
Abstract
This paper reports the re-engineering efforts for OWL-Q, a prominent semantic quality-based service description language. These efforts have focused on making OWL-Q more compact without reducing its level of expressiveness, as well as enriching it with semantic rules towards the semantic validation of quality specifications and the derivation of new knowledge. It also presents a new OWL-Q extension called Q-SLA, advancing the state of the art by covering all information aspects needed which, along with the semantic rules, enable proper and automatic support for all service management activities. A particular use case is also provided to highlight the main benefits of Q-SLA.
Abstract
Model-driven engineering (MDE) promises to automate the cloud application management phases, including deployment and adaptive provisioning. However, most MDE approaches neglect the security aspect even if it is considered the number one factor for not migrating to the cloud. As such, this paper proposes a security meta-model that a cloud-based MDE approach can exploit to become security-aware. This meta-model captures both high- and low-level security requirements and capabilities to drive application deployment, as well as security-oriented scalability rules to guide application re-configuration. It is also coupled with OCL constraints enforcing the security domain semantics. A method for creating re-usable security elements, facilitating the rapid specification of security models conforming to the meta-model, is also proposed to reduce the designer's modelling effort.
Abstract
Service-orientation has revolutionized the way applications are constructed and provisioned. To this end, a proliferation of web services is becoming increasingly available. To exploit such services, an accurate service discovery process with suitable performance is required, focusing on both functional and quality of service (QoS) aspects. In fact, QoS is the main distinguishing factor for the plethora of functionally equivalent services available on the internet. Accuracy in service discovery comes via exploiting formal techniques, and ontologies in particular. Satisfactory performance levels can be reached via smart methods that intelligently organise the service advertisement space. In this paper, we propose smart ontology-based QoS-aware service discovery algorithms that exploit ontology subsumption as a means of matching QoS requests and offers. These algorithms exploit a variety of methods to structure the service advertisement space. Based on the empirical evaluation conducted, our proposed algorithms outperform the state of the art in certain circumstances. As such, ontology-based subsumption is indeed a promising technique for realising QoS-based service matchmaking.
Abstract
Service-orientation is increasingly adopted by application and service developers, leading to a plethora of services becoming increasingly available. To enable constructing applications from such services, respective service description and discovery must be supported by considering both functional and non-functional aspects as they play a significant role in the service management lifecycle. However, research in service discovery has mainly focused on one aspect and not both of them. As such, this paper investigates the issues involved in considering both functional and non-functional aspects in service discovery. In particular, it proposes different ways via which aspect-specific algorithms can be combined to generate a complete service discovery system. It also proposes a specific unified service discovery architecture. Finally, it evaluates the proposed algorithms’ performance to give valuable insights to the reader.
Abstract
This paper presents an extension to OWL-Q, a prominent semantic quality-based service description language, called Q-SLA, which enables the specification of SLAs. This extension advances the state of the art by covering all information aspects needed to enable proper and automatic support for all service management activities. A particular use case is also provided highlighting Q-SLA's main benefits.
Abstract
Business processes can benefit from cloud offerings, but bridging the gap between business requirements and technical solutions is still a big challenge. We propose Business Process as a Service (BPaaS) as a main concept for the alignment of business processes with IT in the cloud. The mechanisms described in this paper provide modelling facilities for both the business and IT levels: (a) a graphical modelling environment for processes, workflows and service requirements, (b) an extension of an enterprise ontology with cloud-specific concepts, (c) semantic lifting of graphical models and (d) SPARQL querying and inferencing for the semantic alignment of business and cloud IT.
Abstract
Domain-specific languages (DSLs) are high-level software languages representing concepts in a particular domain. In real-world scenarios, it is common to adopt multiple DSLs to solve different aspects of a specific problem. Like any other software artefact, DSLs evolve independently in response to changing requirements, which leads to two challenges. First, the concepts from the DSLs have to be integrated into a single language. Second, models that conform to an old version of the language have to be migrated to conform to its current version. In this paper, we discuss how we tackled the challenge of integrating the DSLs that comprise the Cloud Application Modelling and Execution Language (CAMEL) by leveraging the Eclipse Modeling Framework (EMF) and the Object Constraint Language (OCL). Moreover, we propose a solution to the challenge of persisting and automatically migrating CAMEL models based on Connected Data Objects (CDO) and Edapt.
Abstract
Multi-cloud adaptive application provisioning promises to solve the vendor lock-in problem and lead to optimizing the user requirements through the selection of the best from the great variety of services offered by cloud providers. As such, various research prototypes and platforms attempt to support this provisioning type. One major concern in using such platforms comes with respect to security in terms of improper access to user personal data and VMs as well as to platform services. To successfully address this concern, this paper proposes a novel model-driven approach and architecture able to secure multi-cloud platforms as well as enable users to have their own private space. Such a solution exploits state-of-the-art security standards and secure model management technology. This solution is able to cover different security scenarios involving external, web-based and programmatic user authentication.
Abstract
Edge processing in IoT networks offers the ability to enforce privacy at the point of data collection. However, such enforcement requires extra processing in terms of data filtering and the ability to configure the device with knowledge of the policy. Supporting this processing with Cloud resources can reduce the burden this extra processing places on edge processing nodes and provide a route to enabling user-defined policy. Research from the PaaSage project [12] on Cloud modelling languages is applied to IoT networks to support IoT and Cloud integration, linking the worlds of Cloud and IoT in a privacy-protecting way.
Abstract
Multi-cloud application management can optimize the provisioning of cloud-based applications by exploiting the whole variety of services offered by cloud providers and avoiding vendor lock-in. To enable such management, model-driven approaches promise to partially automate the provisioning process. However, such approaches tend to neglect security aspects and focus only on low-level infrastructure details or quality of service aspects. As such, our previous work proposed a security meta-model, bridging the gap between high- and low-level security requirements and capabilities, able to express security models exploited by a planning algorithm to derive an optimal application deployment plan by considering both types of security requirements. This work goes one step further by focusing on the runtime adaptation of multi-cloud applications based on security aspects. It advocates using adaptation rules, expressed in the event-condition-action form, which drive the application adaptation behaviour and enable assuring a more-or-less stable security level. Firing such rules relies on deploying security metrics and adaptation code in the cloud to continuously monitor rule event conditions and fire adaptation actions for applications when the need arises.
Abstract
While various platforms offer facilities for single-cloud application design, deployment and provisioning, there is a need to move to multiple clouds in order to achieve cost-effectiveness and avoid vendor lock-in. Apart from not supporting multi-cloud application management, many platforms usually focus on the deployment and provisioning phases of the cloud-based application lifecycle while neglecting the design phase. However, the design-time selection of the best possible cloud service composition affects the provisioning phase, as the more distant the selected solution is from optimality, the more adaptation actions will be enacted. To this end, there is a high need for cloud application design tools and methods which can select the best possible cloud service composition based on user requirements. This paper satisfies this need by proposing a cloud service composition approach able to optimally compose different types of cloud services by simultaneously satisfying various types of user requirements. These types, not concurrently supported by any cloud application design tool, include quality, deployment, security, placement and cost requirements. Moreover, the proposed approach addresses a particular design choice type not currently considered in the literature.
Abstract
The benefits of cloud computing have led to a proliferation of infrastructures and platforms covering the provisioning and deployment requirements of many cloud-based applications. However, the requirements of an application may change during its life cycle. Therefore, its provisioning and deployment should be adapted so that the application can deliver its target quality of service throughout its entire life cycle. Existing solutions typically support only simple adaptation scenarios, whereby scalability rules map conditions on fixed metrics to a single scaling action targeting a single cloud environment (e.g., scaling out an application component). However, these solutions fail to support complex adaptation scenarios, whereby scalability rules could map conditions on custom metrics to multiple scaling actions targeting multi-cloud environments. In this paper, we propose the Scalability Rule Language (SRL), a language for specifying scalability rules that support such complex adaptation scenarios of multi-cloud applications. SRL provides Eclipse-based tool support, thus allowing modellers not only to specify scalability rules but also to syntactically and semantically validate them. Moreover, SRL is well integrated with the Cloud Modelling Language (CloudML), thus allowing modellers to associate their scalability rules with the components and virtual machines of provisioning and deployment models.
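The event-condition-action structure that such complex scalability rules follow can be sketched as below (an illustrative Python encoding of the structure only, not SRL's concrete syntax): a condition over a custom metric maps to multiple scaling actions, each targeting a component in a possibly different cloud.

```python
# Event-condition-action shape of a multi-cloud scalability rule.
# Illustrative only; this is not SRL's concrete syntax, just the structure.
from dataclasses import dataclass, field

@dataclass
class MetricCondition:            # condition over a (possibly custom) metric
    metric: str
    operator: str                 # ">" or "<"
    threshold: float
    def holds(self, measurements):
        v = measurements.get(self.metric)
        return v is not None and (v > self.threshold if self.operator == ">"
                                  else v < self.threshold)

@dataclass
class ScalingAction:              # action part, targeting a specific cloud
    kind: str                     # "scale_out" | "scale_in"
    component: str
    cloud: str
    instances: int = 1

@dataclass
class ScalabilityRule:
    condition: MetricCondition
    actions: list = field(default_factory=list)  # multiple actions allowed
    def evaluate(self, measurements):
        return self.actions if self.condition.holds(measurements) else []

rule = ScalabilityRule(
    MetricCondition("avg_response_time_ms", ">", 200.0),
    [ScalingAction("scale_out", "frontend", "aws"),
     ScalingAction("scale_out", "worker", "azure", instances=2)],
)
for a in rule.evaluate({"avg_response_time_ms": 240.0}):
    print(f"{a.kind} {a.instances}x {a.component} on {a.cloud}")
```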
Abstract
As Cloud computing becomes a widely accepted service delivery platform, developers usually resort to multi-cloud setups to optimize their application deployment. In such heterogeneous environments, during application execution, various events are produced by several layers (Cloud- and SOA-specific), leading to or indicating Service Level Objective (SLO) violations. To this end, this paper proposes a meta-model to describe the components of multi-cloud Service-based Applications (SBAs) and an event pattern discovery algorithm to discover valid event patterns causing specific SLO violations. The proposed approach is empirically evaluated based on a real-world application.
Abstract
The PaaSage project aims at facilitating the specification and execution of cloud-based applications by leveraging model-driven engineering (MDE) techniques and methods, and by exploiting multiple cloud infrastructures and platforms. Models are frequently specified using domain-specific languages (DSLs), which are tailored to a specific domain of concern. In order to cover the necessary aspects of the specification and execution of multi-cloud applications, PaaSage encompasses a family of DSLs called the Cloud Application Modelling and Execution Language (CAMEL). In this paper, we present one DSL within this family, namely the Scalability Rules Language (SRL), which can be regarded as a first step towards a generic language for specifying scalability rules for multi-cloud applications.
Abstract
Cloud computing is becoming a popular platform to deliver service-based applications (SBAs) based on service-oriented architecture (SOA) principles. Monitoring the performance and functionality of SBAs deployed on multiple Cloud providers (in what is also known as Multi-Cloud setups) and adapting them to variations/events produced by several layers (infrastructure, platform, application, service, etc.) in a coordinated manner are challenges for the research community. This paper proposes a monitoring framework for Multi-Cloud SBAs with two main objectives: (a) perform cross-layer (Cloud and SOA) monitoring enabling concerted adaptation actions; (b) address new challenges raised in Multi-Cloud SBA deployment. The proposed framework is empirically evaluated on a real-world Multi-Cloud setup.
Abstract
The need to better integrate and link various isolated data sources on the web has been widely recognized and is tackled by the Linked Open Data (LOD) initiative. One of the problems to address is publishing and subsequently exploiting the data as LOD, which is hampered by data size, the performance of the respective queries, and the complexity of publication. This work addresses the size and performance issues by adopting the cloud as a hosting platform for LOD publication services, so as to exploit its scalability and elasticity capabilities. The publication complexity issue is addressed by proposing a Linked Open Data-as-a-Service approach offering an integrated, service-based API for the (semi)automatic publication of relational data as LOD along with subsequent querying and updating capabilities.
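For a flavour of the relational-to-LOD publication step, the following Python sketch maps a single relational row to RDF triples and serializes them as Turtle using the rdflib library. The table, vocabulary and base URIs are illustrative assumptions, not the API actually offered by the described service.

```python
# Sketch of a (semi)automatic relational-to-LOD mapping: one relational row
# becomes RDF triples ready for publication. Vocabulary and URIs are invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/schema/")
row = {"id": 42, "name": "Crete", "population": 630000}  # one relational row

g = Graph()
subject = URIRef(f"http://example.org/resource/region/{row['id']}")
g.add((subject, RDF.type, EX.Region))
g.add((subject, EX.name, Literal(row["name"])))
g.add((subject, EX.population, Literal(row["population"])))

print(g.serialize(format="turtle"))
```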
Abstract
Service-Based Applications (SBAs) enable the automation of business processes. It is therefore crucial to monitor their non-functional properties and take adaptation actions when QoS violations occur, across all functional layers. In this paper we propose a framework for the proactive cross-layer adaptation of SBAs. We exploit a cross-layer monitoring mechanism to detect a wide range of events, based on which we can adapt the system both reactively and proactively. In particular, the detection of event patterns helps us prevent future faults and failures by firing specific, dynamically derived rules that map event patterns to suitable adaptation strategies. Our framework is validated using a traffic management scenario.
Abstract
Service-orientation paves the way for the Internet of Services (IoS), where millions of services will be available for building novel applications. The service non-functional aspect should therefore be considered for filtering and selecting among the great number of functionally-equivalent services that will be available for a specific user task. Until now, state-of-the-art work in non-functional service discovery has exploited constraint solving techniques to optimize the matchmaking time for a non-functional service offer and demand pair. However, as matchmaking time is proportional to the number of offers, this work does not scale well, so it is not yet appropriate for the IoS. To this end, two alternative techniques are proposed to improve the overall matchmaking time. Both techniques were theoretically and experimentally evaluated. The results show that both techniques optimize the matchmaking time without sacrificing accuracy, while the second technique is quite scalable.
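One generic way to reduce overall matchmaking time, shown below as a hedged Python sketch rather than either of the paper's two techniques, is to prefilter offers with a cheap interval-overlap check so that only surviving candidates undergo the expensive per-offer constraint solving. The offers and the demand are invented.

```python
# Illustrative prefiltering step for QoS matchmaking: discard offers whose
# quality intervals cannot possibly overlap the demand before running the
# costly constraint-solving check. All data is invented.
offers = [
    {"id": "s1", "latency": (10, 50), "availability": (0.95, 0.99)},
    {"id": "s2", "latency": (80, 200), "availability": (0.90, 0.99)},
    {"id": "s3", "latency": (20, 40), "availability": (0.99, 0.999)},
]
demand = {"latency": (0, 60), "availability": (0.97, 1.0)}

def quick_reject(offer):
    # An offer can only match if every offered interval overlaps the demand.
    return any(lo > demand[q][1] or hi < demand[q][0]
               for q, (lo, hi) in offer.items() if q != "id")

candidates = [o for o in offers if not quick_reject(o)]
# Only the surviving candidates go through full constraint solving.
print([o["id"] for o in candidates])  # -> ['s1', 's3']
```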
Abstract
The Internet is moving fast towards a new era where millions of services and things will be available. As there will be many functionally-equivalent services for a specific user task, the service non-functional aspect should be considered for filtering and choosing the appropriate services. The related approaches in service discovery mainly concentrate on exploiting constraint solving techniques to infer whether the user's non-functional requirements are satisfied by the service's non-functional capabilities. However, as the matchmaking time is proportional to the number of non-functional service descriptions, these approaches fail to fulfil the user request in a timely manner. To this end, two alternative techniques for improving the non-functional service matchmaking time have been developed. The first is generic, as it can handle non-functional service specifications containing n-ary constraints, while the second is only applicable to unary-constrained specifications. Both techniques were experimentally evaluated. The preliminary evaluation results show that the service matchmaking time is significantly improved without compromising matchmaking accuracy.
Abstract
Although several techniques have been proposed towards monitoring and adaptation of Service-Based Applications (SBAs), few of them deal with cross-layer issues. This paper proposes a framework, able to monitor and adapt SBAs across all functional layers. This is achieved by using techniques, such as event monitoring and logging, event-pattern detection, and mapping between event patterns and appropriate adaptation strategies. In addition, a taxonomy of adaptation-related events and a meta-model describing the dependencies among the SBA layers are introduced in order to “capture” the cross-layer dimension of the framework. Finally, a specific case study is used to illustrate its functionality.
Abstract
Organizations now resort to service-orientation as it enables them to quickly create and offer new business services (BSs) or optimize existing ones. In many cases, organizations must cooperate to offer such services so as to concentrate only on their core business. An initial phase in the design of a novel BS concerns the determination of the BS's functional and non-functional requirements. The respective research approaches exploit goal models to specify and elicit such requirements. However, while it is easy to reach an agreement on the functional requirements, this is not true for the non-functional ones. First, the involved stakeholders may have different requirements and different levels of expertise for particular non-functional aspects. Second, a BS's non-functional performance is critical for distinguishing it from functionally-equivalent BSs of competing organizations. Thus, the stakeholders must negotiate over the BS's non-functional requirements. Such a negotiation may take considerable time, however, and requires the active involvement of the stakeholders in the form of alternative offers for the conflicting requirements. To this end, this paper proposes a broker-based BS negotiation framework that can automatically determine the non-functional requirements of the required BS. This framework takes as input a functional goal model, together with the stakeholder requirements expressed as utility functions over the non-functional performance of the required BS's functional goal and its sub-goals, and proposes an overall solution that is balanced and consistent across the goal model levels and satisfies all the stakeholders as much as possible.
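To illustrate the utility-function input described above, here is a minimal Python sketch in which each stakeholder supplies a utility function over one non-functional target (availability of a single goal) and a broker-like step picks the candidate value maximizing their weighted sum. Stakeholders, weights and utility shapes are invented, and the paper's framework balances whole goal models rather than a single value.

```python
# Minimal sketch of utility aggregation over one non-functional target.
stakeholders = [
    (0.6, lambda a: a),  # operator: the higher the availability, the better
    (0.4, lambda a: 1.0 - (a - 0.95) * 10 if a > 0.95 else 1.0),  # cost-averse sponsor
]

candidates = [0.90, 0.95, 0.99, 0.999]  # candidate availability targets

def aggregate(a: float) -> float:
    """Weighted sum of all stakeholder utilities at availability a."""
    return sum(w * u(a) for w, u in stakeholders)

best = max(candidates, key=aggregate)
print(best, round(aggregate(best), 3))  # -> 0.95 0.97
```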
Abstract
As business process optimization and innovation are the only means to survive in a dynamic business world, organizations are now combining BPM technologies with service-orientation to achieve them. Business processes are now considered as business services (BSs) that span organizational boundaries and have to satisfy cross-organizational objectives. The most promising research approaches to BS design consider not only what the BS does and how, but also the business requirements it must satisfy; they are also able to perform BS composition. However, they mainly concentrate on the functional aspect. Even the few that do consider the non-functional aspect cannot select the best BS combination alternative in a precise and objective way. To this end, this paper proposes a goal-oriented approach that is able to discover the best possible way a BS can be composed from other BSs according to both functional and non-functional requirements. This approach advances the state of the art in service composition and selection, as it can propose semantically robust BS combinations even when functionality is missing, in terms of partially fulfilled or unfulfilled required goals, and it considers novel optimization criteria such as the number of BSs constituting the proposed solution and the percentage of BSs reused.
Abstract
Mashup platforms and end-user-centric composition tools have become increasingly popular. Most tools provide Web interfaces and visual programming languages to create compositions. Much of the previous work has not considered compositions comprising human-provided services (HPS) and software-based services (SBS). We introduce a novel HPS-aware service mashup model which we call socially oriented mashups (SOM). The inclusion of HPS in service mashups raises many challenges, such as a QoS model that must account for human aspects and the need for flexible execution of mashups. We propose human quality attributes, such as delegation, and a context model capturing various kinds of information, including location and time. The QoS and context models are used at design time and for the runtime adaptation of mashups. In this paper, we show how to model context-aware SOMs that include HPS and SBS, and we demonstrate the first results of our working prototype.
Abstract
We propose an approach that takes as input a task model, which includes the user's view of the interactive system, and automatically discovers a set of categorized and ranked service descriptions for each system task of the model. In this way, a set of service operations can be used to implement part or all of an application's functionality, so that its development time is significantly reduced.
Abstract
Services are becoming more and more widely used. When designing interactive applications based on services, one important issue is how to identify the services most relevant to the application functionalities. The proposed approach takes as input a task model, which includes the user's view of the interactive system, and an ontology capturing the application domain, and automatically discovers a set of ordered service descriptions for each system task of the model. The discovered descriptions can be used to invoke a particular service operation that fulfils a task's required functionality. In this way, the whole application functionality can be realized by a set of service operations without writing a single line of code. As a result, the application development time is significantly reduced, and it becomes possible to complete the development of interactive front-ends by integrating our solution into existing model-based HCI approaches.
Abstract
The continuous increase in electrical and computational power in data centers has been driving many research efforts under the Green IT theme. However, most of this research focuses on reducing energy consumption by considering hardware components and data center building features, such as server distribution and cooling flow. In contrast, this paper points out that energy consumption is also a service quality problem, and presents an energy-aware design approach for building service-based applications. To this end, techniques are provided to measure service costs by combining Quality of Service (QoS) requirements and Green Performance Indicators (GPIs), in order to obtain a better tradeoff between energy efficiency and performance for each user.
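A minimal Python sketch of the stated tradeoff, under the assumption that QoS and GPI values are normalized scores and that a per-user weight combines them linearly (the weighting scheme and all numbers are invented, not the paper's cost model):

```python
# Score candidate service configurations by linearly trading a QoS term
# against a Green Performance Indicator (GPI) term. Values are invented.
def service_score(qos: float, gpi: float, alpha: float) -> float:
    """alpha in [0, 1]: 1 = performance only, 0 = energy efficiency only."""
    return alpha * qos + (1 - alpha) * gpi

configs = {"fast": (0.95, 0.40), "balanced": (0.80, 0.75), "green": (0.60, 0.95)}

for alpha in (0.8, 0.3):  # a performance-focused and an energy-focused user
    best = max(configs, key=lambda c: service_score(*configs[c], alpha))
    print(alpha, "->", best)  # 0.8 -> fast, 0.3 -> green
```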
Abstract
Negotiation is required before invoking a service in order to identify how the invocation must occur in terms of functional and non-functional criteria. This process is possible when all the involved parties agree on the same negotiation protocol (e.g., bilateral negotiations). In a service-oriented architecture (SOA), this negotiation protocol cannot be predefined; it must be selected by considering the negotiation capabilities of the involved services. In this work, we propose a semantic-based framework for supporting negotiation in SOA. Specifically, the framework allows the negotiation capabilities of service requesters and providers to be expressed, and it proposes a mechanism for discovering the negotiation protocols that can be enacted when a negotiation is required. To improve the flexibility of the framework, the concept of delegation is introduced to deal with the situation in which a party that is not able to support the negotiation protocol wants to participate in a negotiation. In this case, the negotiation can be fully or partially delegated to one or more other parties that are able to support the negotiation protocol.
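Stripped of its semantic machinery, the protocol-discovery step can be pictured as the Python sketch below: intersect the protocols each party supports and, if the intersection is empty, look for a delegate that bridges the gap. Party names and protocol identifiers are illustrative assumptions.

```python
# Sketch of negotiation protocol discovery with a delegation fallback.
requester = {"bilateral", "english-auction"}
provider = {"dutch-auction", "english-auction"}

common = requester & provider
if common:
    print("negotiate via:", sorted(common))  # -> ['english-auction']
else:
    # No shared protocol: find a delegate able to talk to both parties.
    delegates = {"broker-1": {"bilateral", "dutch-auction"}}
    usable = [d for d, protos in delegates.items()
              if protos & requester and protos & provider]
    print("delegate negotiation to:", usable)
```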
Abstract
Adaptive Service Based Applications (SBAs) can become a reality with the advent of sophisticated monitoring and adaptation mechanisms. In this paper, the main focus is on defining quality and how it can be exploited by these monitoring and adaptation mechanisms. To this end, we propose a quality model for SBAs and their infrastructure, new techniques for predicting quality, and different types of quality-based adaptation actions for SBAs.
Abstract
The goal of Web service (WS) discovery is to select WSs that satisfy both the user's functional and non-functional requirements. Focusing on non-functional requirements, a matchmaking algorithm usually takes place to verify whether the quality offered by the WS provider overlaps the quality requested by the user. Since quality, from a provider's perspective, is costly, a further step, a negotiation, should be performed to identify a mutually agreed quality level. In this work, we join previous work on a semantic-based quality definition model with work on WS negotiation to provide a framework enabling semantic-aware automated WS negotiation. More specifically, OWL-Q, a semantic QoS-based WS description language, is extended with appropriate negotiation concepts and properties.
Abstract
Web service (WS) discovery is a prerequisite for achieving WS composition and orchestration. Although a lot of research has been conducted on the functional discovery of WSs, the proposed techniques fall short when faced with the foreseen increase in the number of (potentially functionally-equivalent) WSs. This situation can be resolved by adding non-functional (quality of service (QoS)) discovery mechanisms to WS discovery engines. QoS-based WS matchmaking algorithms have been devised for this reason. However, they are either slow, as they are based on ontology reasoners, or produce inaccurate results. Inaccuracy is caused both by the syntactic matching of QoS concepts and by wrong matchmaking metrics. In this paper, we present two constraint programming (CP) QoS-based WS discovery algorithms for unary-constrained WS specifications that produce accurate results with good performance. We also evaluate these algorithms on matchmaking time, precision and recall in different settings in order to demonstrate their efficiency and accuracy.
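The essence of unary-constrained matchmaking can be conveyed by the Python sketch below: each QoS attribute of an offer is an interval, and an offer matches when every offered interval lies within the corresponding requested interval. This containment criterion and all data are illustrative simplifications of the paper's CP-based algorithms.

```python
# Minimal sketch of unary-constrained QoS matchmaking via interval containment.
def matches(offer: dict, request: dict) -> bool:
    """True if every offered quality range fits inside the requested range."""
    return all(r_lo <= offer[q][0] and offer[q][1] <= r_hi
               for q, (r_lo, r_hi) in request.items())

request = {"response_ms": (0, 100), "availability": (0.95, 1.0)}
offer_a = {"response_ms": (20, 80), "availability": (0.97, 0.999)}
offer_b = {"response_ms": (20, 150), "availability": (0.97, 0.999)}

print(matches(offer_a, request))  # True: all offered ranges fit the request
print(matches(offer_b, request))  # False: response time can exceed 100 ms
```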
Abstract
The success of the Web Service (WS) paradigm has led to a proliferation of available WSs, which are advertised in WS registries. While sophisticated semantic WS discovery algorithms operate on these registries to return matchmaking results with high precision and recall, many functionally-equivalent WSs are returned. The solution to this problem comes in terms of semantic QoS-based description and discovery of WSs. We have already presented a rich and extensible ontology language for QoS-based WS description called OWL-Q, and we have proposed a semantic QoS metric matching algorithm, based on which we have extended a CSP-based approach for QoS-based WS discovery. In this paper, we first analyze the evolution of OWL-Q and its extension with SWRL rules, propose a modification to the metric matching algorithm, and show how the metric alignment process takes place. We then propose two novel semantic QoS-based WS discovery algorithms that return matches even for over-constrained QoS-based WS requests. The first one deals with unary constraints, while the second one is more generic. Finally, implementation aspects of our QoS-based WS discovery system are discussed.
Abstract
Discovery of Web Services (WSs) has gained great research attention due to the proliferation of available WSs and the failure of UDDI's syntactic approach. Semantic discovery mechanisms have been invented in order to provide more precise results. However, many functionally-equivalent WSs are still returned by semantic WS registries. The solution is to employ semantic QoS-based description and discovery of WSs. We have already presented a rich and extensible ontology language for QoS-based WS description and have proposed a semantic QoS metric matching algorithm, based on which we have extended a CSP-based approach for QoS-based WS discovery. In this paper, we present an extension of OWL-Q with SWRL rules, as OWL alone fails in some aspects of QoS description. We also propose a modification to the metric matching algorithm to make it more practical. Finally, we propose and analyze an automated approach for semantic QoS-based WS discovery that provides solutions even for over-constrained QoS-based WS demands.
Abstract
The success of the Web Service (WS) paradigm has led to a proliferation of available WSs. Semantic discovery mechanisms have been invented to overcome UDDI's syntactic discovery solution by providing more precise results. However, the problem remains, as many functionally-equivalent WSs are returned. Its solution comes in terms of semantic QoS-based description and discovery of WSs. We have already presented a rich and extensible ontology language for QoS-based WS description, called OWL-Q, and we have proposed a semantic QoS metric matching algorithm. Based on this algorithm, we have extended a Constraint-Programming-based approach for QoS-based WS discovery. In this paper, we present an extension of OWL-Q with SWRL rules and propose a modification to the metric matching algorithm to make it more practical. Moreover, we propose and analyze an automated approach for semantic QoS-based WS discovery that provides solutions even for over-constrained QoS-based WS demands.
Abstract
As the Web service paradigm gains popularity for its promise to transform the way business is conducted, the number of deployed Web services grows at a fast rate. While sophisticated semantic discovery mechanisms have been invented to overcome UDDI's syntactic discovery solution and provide results with higher recall and precision, the number of functionally-equivalent Web services returned is still large. The solution to this problem is the description of the non-functional, QoS aspect of Web services. QoS encompasses the performance of Web services and can be used as a discriminating factor for refining Web service advertisement result lists. However, most scientific efforts presented so far are purely syntactic and do not capture all aspects of QoS-based Web service description, leading to imprecise syntactic discovery mechanisms. This paper presents a novel, rich and extensible ontology-based approach for describing the QoS of Web services that complements OWL-S. It is shown that, by using this approach and by introducing the concept of semantic QoS metric matching, QoS-based syntactic matchmaking and selection algorithms are transformed into semantic ones, leading to better results.
Abstract
The ARION system provides basic e-services for the search and retrieval of objects in scientific collections, such as data sets, simulation models, and tools necessary for statistical and/or visualization processing. These collections may represent application software of various scientific areas; they reside in geographically dispersed organizations and constitute the system's content. The user, as part of the retrieval mechanism, may dynamically invoke on-line computations of scientific data sets when the latter are not found in the system. Thus, ARION provides the basic infrastructure for accessing and producing scientific information in an open, distributed and federated system. More advanced e-services, which depend on the scientific content of the system, can be built upon this infrastructure, such as decision making and/or policy support using various information brokering techniques.
Copyright Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted or mass reproduced without the explicit permission of the copyright holder.