Contents
Preface
1 Challenges For Quality Of Experience Engineering For Added Value Services
1.1. Introduction and challenges
1.2. Contents
1.3. Conclusion
2 An Ecosystem For Customer Experience Management
2.1. Introduction
2.2. Managing customer experience
2.3. Quality of experience ecosystem
2.4. IPNQSIS
2.5. NOTTS
2.6. Conclusions
2.7. Acknowledgments
2.8. Bibliography
3 Measuring MPEG Frame Loss Rate To Evaluate The Quality Of Experience In IPTV Services
3.1. Introduction
3.2. Related work
3.3. Method description
3.4. QoE prediction models
3.5. Network monitoring tool
3.6. Performance assessment
3.7. Conclusions and future work
3.8. Acknowledgments
3.9. Bibliography
4 Estimating The Effect Of Context On The QoE Of Audiovisual Services
4.1. Introduction
4.2. Test content
4.3. Subjective tests in laboratory
4.4. Subjective tests at exhibition
4.5. Results
4.6. Conclusions and further work
4.7. Bibliography
5 IPTV Multiservice QoE Management System
5.1. Introduction
5.2. State of the art
5.3. Multiservice IPTV probe
5.4. QoE management system
5.5. Conclusions
5.6. Acknowledgments
5.7. Bibliography
6 High Speed Multimedia Flow Classification
6.1. Introduction
6.2. The architecture
6.3. Validation
6.4. Conclusions
6.5. Acknowledgments
6.6. Bibliography
7 User Driven Server Selection Algorithm For CDN Architecture
7.1. Introduction
7.2. Multi-armed bandit formalization
7.3. Server selection schemes
7.4. Our proposal for QoE-based server selection method
7.5. Experimental results
7.6. Acknowledgments
7.7. Conclusion
7.8. Bibliography
8 QoE Approaches For Adaptive Transport Of Video Streaming Media
8.1. Introduction
8.2. Adaptive video transport
8.3. Microsoft Smooth Streaming
8.4. Apple HTTP live streaming
8.5. Adobe HTTP dynamic streaming
8.6. MPEG–dynamic adaptive streaming over HTTP
8.7. The goals of adaptive video streaming
8.8. Quality metrics for video streaming
8.9. The role of TCP in adaptive video streaming
8.10. Bibliography
9 QoS And QoE Effects Of Packet Losses In Multimedia Video Streaming
9.1. Introduction to the overall scenario
9.2. Related work
9.3. Multilayer performance metrics
9.4. QoE multilayer metric and quality assessment mechanism
9.5. Video streaming use case: peer-to-peer television (P2PTV)
9.6. Conclusions and further actions
9.7. Bibliography
10 A Model For Quality Of Experience Estimation
10.1. Introduction
10.2. Presentation of the model
10.3. Application of the model to convergent (3P) services
10.4. Quality evaluation process
10.5. Model testing
10.6. Conclusions and future work
10.7. Acknowledgments
10.8. Bibliography
11 Quality Of Experience Estimators In Networks
11.1. Introduction
11.2. QuEEN terminology and concepts
11.3. Modeling the QoE: the ARCU model
11.4. The QuEEN layered model
11.5. Applications
11.6. Conclusions
11.7. Acknowledgments
11.8. Bibliography
12 QoE-Based Network Selection In Heterogeneous Environments
12.1. Introduction
12.2. Network selection in homogeneous environments: a use case in WLAN
12.3. Related work for network selection in the heterogeneous environment
12.4. QoE-based network selection in heterogeneous networks
12.5. Conclusions and discussions
12.6. Bibliography
List Of Authors
Index
First published 2014 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2014
The rights of Abdelhamid Mellouk and Antonio Cuadra-Sanchez to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2014938063
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-672-3
It will be fascinating to look back in the years ahead and note the convergence of two formerly separate technologies: telecom technology and information technology. The former originates from the telephone world and is based on dedicated architectures with circuit-switched point-to-point connections designed for real-time services. The latter comes from computer communication, with flexible architectures and packet-based communication. Due to the emergence of different kinds of communication and networking technologies, and the foreseen proliferation of different and specific types of services supported by these technologies, both have merged into a mixture of dedicated and flexible architectures, clearly targeting the Internet Protocol suite as the basic communication protocol. Nevertheless, it has recently become apparent that the traditional network control strategies developed over the last decade are not sufficient to handle the complexity and diversity of the information in use today. Over the years, continuous technological evolution and the development of new applications and services have steered networking research toward new problems, which have emerged as the network evolves toward what is usually referred to as the Future Internet, now one of the basic infrastructures supporting the world economy.
In fact, there is a strong need to build a new network scenario, in which networked computer devices proliferate rapidly and support new types of services, usages and applications: from wireless sensor networks and new optical network technologies to cloud computing, high-end mobile devices supporting high-definition media, high-performance computers, peer-to-peer networks and various platforms and applications.
The overall challenge here is to find scalable and sustainable solutions for the ever-growing smart communications field, which supports different kinds of services, for a wide variety of future next-generation network applications.
To address this challenge, new and cross-disciplinary approaches are required to optimize the whole treatment chain of network applications. New mechanisms should be built on both estimates of expected demand and the end consumers’ demands on perceived quality. Statistical methods for online estimation of consumer demands will also be crucial. Many of these issues are complex and can hardly be solved, in the short term, by any single approach. Researchers need to find ways to deliver network services in the most efficient manner, providing users with the best perception while taking scarce network resources into consideration.
Over the years, network operators have assessed network performance based only on traditional Quality of Service (QoS) parameters such as throughput, delay, jitter and loss rate. These QoS metrics capture the characteristics of the network components. Nevertheless, such measurements are no longer sufficient and must be complemented by measures of user perception, such as emotion, intensity or satisfaction. Perception is a natural way to interact with the real world, and it will serve as a powerful metaphor for interacting with online services and supporting the affective computing paradigm.
After a pioneering book published in Autumn 2013 dedicated to this new paradigm applied to content delivery networks, entitled Quality of Experience for Multimedia: Application to Content Delivery Network Architecture, this second book, edited with my colleague Antonio Cuadra-Sanchez, follows the same direction. It focuses on current state-of-the-art research results and experience reports in the area of quality monitoring for customer experience management, addressing, among others, currently important topics such as service-aware Future Internet architectures for Quality of Experience (QoE) management of multimedia applications. In particular, it addresses the QoE paradigm for improving customer perception when using Added Value Services offered by service providers, from evaluation to monitoring and other management processes.
This book shows that QoE is a very dynamic area in terms of theory and application. The continuous emergence of new services along with the increasing competition is forcing network operators and service providers to focus all their effort on customer satisfaction, although determining the QoE is not a trivial task. The field of QoE has been growing rapidly, producing a wide variety of mechanisms for different applications.
I am certain that Research and Development investment in QoE will result in added value for network operators and service providers to anticipate future user needs and to adapt continuously to the new global challenges they face.
This book is a start, but also leaves many questions unanswered. I hope that it will inspire a new generation of investigators and investigations.
Abdelhamid MELLOUK
May 2014
Abdelhamid MELLOUK and Antonio CUADRA-SANCHEZ.
In recent years, multimedia applications and services have experienced sudden growth. Today, video is no longer limited to the traditional areas of movies and television on TV sets; these applications are accessed in different environments, on different devices and under different conditions.
In addition, the continuous emergence of new services, along with increasing competition, is forcing network operators and service providers to focus all their efforts on customer satisfaction, although determining the Quality of Experience (QoE) is not a trivial task.
Due to the emergence of different kinds of communication and networking technologies (core networks with intra-domain and inter-domain challenges, access networks, aggregation networks, spontaneous networks, Internet of Things, etc.) and the current and envisaged proliferation of different and specific types of services supported by these technologies (real-time services, IPTV, VoD, social networking, e-health, multimedia, gaming, smart cities, etc.), traditional network control strategies are not sufficient to handle the complexity and diversity of the information in use today. There is a strong need to develop a new paradigm to ensure the continuity of network services, based on the new concept of smart communications and user interaction. The user’s perception, covering aspects such as emotion, intensity or satisfaction, is a natural way to interact with the real world, and will serve as a powerful metaphor for interacting with online services and supporting the affective computing paradigm. On the other hand, artificial intelligence tools together with biologically inspired techniques are needed to control network behavior in real time, so as to provide users with the quality of service they request, and to improve network robustness and resilience based on continuous user feedback.
The key idea of this book is to present a new paradigm driven by user perception and based on control theory and machine learning techniques in order to support smart communications to avoid any complete interruption in the whole chain of service treatment. The main goal is to present state-of-the-art research results and experience reports in the area of Quality Monitoring for customer experience management, addressing, amongst others, currently important topics such as Service-aware Future Internet architecture for Quality of Experience (QoE) management on multimedia applications with respect to the following steps:
– to develop, at the theoretical level, an appropriate unified formal model based on bio-inspired modeling and knowledge distribution in order to construct a scalable and robust environment with low complexity for large-scale dynamic networks;
– to study, at the empirical level, why and how user perception can be quantified, analyzed and influenced;
– to create, at the conceptual level, a general framework for protocol stacks dedicated to smart communications featuring multiple technologies, devices and users;
– finally, to build, at the engineering level, the appropriate model of programming abstractions and software architecture to develop a full-scale framework.
This book addresses the QoE for improving customer perception when using Added Value Services offered by service providers, from evaluation to monitoring and other management processes.
This section summarizes the content of the chapters that are gathered in this book.
In this chapter, the authors describe an ecosystem that allows us to manage customer experience in order to guarantee the quality levels delivered to end users. This ecosystem has been defined within the Eureka Celtic Internet Protocol Network for Quality of Service Intelligent Support (IPNQSIS) project and is being adapted for over-the-top (OTT) services within the Eureka Celtic Next Generation Over-The-Top Multimedia Services (NOTTS) project. The QoE ecosystem rests on a customer experience architecture formed by a data acquisition level, a monitoring level and a control level. The work proposed in this chapter will lay the basis of next generation Customer Experience Management Systems (CEMS).
The authors present an overview of the CEMS under development within the IPNQSIS and NOTTS projects. On the one hand, a generic overall CEMS architecture is introduced and, on the other hand, it has been specialized for the IPNQSIS and NOTTS scopes, reinforcing specific areas such as network monitoring, as well as having IP television (IPTV) as a main application use case.
The IPNQSIS project, in which 18 companies and institutions from Spain, France, Sweden and Finland collaborated, developed next generation management architectures for QoE-driven network management. The project ended in April 2013 with its main objectives accomplished, from the definition of a general Customer Experience Management (CEM) architecture to IPNQSIS prototypes focused on IPTV multimedia services. Its results comprise Quality of Service (QoS) measuring tools, mechanisms to quantify the QoE, and an analysis of how QoS parameters correlate with and influence the QoE. The outcome of this analysis can be applied to the integrated management of network resources to improve the user’s experience. This technology will also make it possible to develop tools that better correlate the quality of the service with the actual experience of the user, thereby ensuring greater customer satisfaction.
Future research in this area will extend its scope to next generation services such as OTT services, specifically within the NOTTS project, which is continuing the QoE management activities in a task dedicated to this purpose.
The chapter describes a model to predict the QoE as a function of the loss of the different types of Moving Picture Experts Group (MPEG) frames, providing a mean opinion score for the delivered service. The authors have implemented this model in a network monitoring tool, which has been validated on both Intel and ARM platforms.
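As a rough illustration of what such a frame-type-weighted model can look like (the weights, the exponential shape and the sensitivity parameter below are invented for the example, not the coefficients fitted by the authors), consider:

```python
import math

# Hypothetical weights: losing an I frame damages a whole GOP, a P frame
# damages the frames that reference it, a B frame only damages itself.
WEIGHTS = {"I": 1.0, "P": 0.4, "B": 0.1}

def predicted_mos(loss_rates, sensitivity=12.0):
    """Map per-frame-type loss rates (fractions in [0, 1]) to a MOS in [1, 5].

    The exponential mapping is a common shape for loss-to-quality curves;
    both it and the weights above are illustrative assumptions.
    """
    impairment = sum(WEIGHTS[t] * loss_rates.get(t, 0.0) for t in WEIGHTS)
    return 1.0 + 4.0 * math.exp(-sensitivity * impairment)

# Example: 0.5% I-frame loss, 1% P-frame loss, 2% B-frame loss.
print(round(predicted_mos({"I": 0.005, "P": 0.01, "B": 0.02}), 2))  # ~4.51
```

The point of weighting by frame type, rather than using a flat packet loss rate, is that the same raw loss figure can produce very different visual damage depending on which frames it hits.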
An empirical evaluation of the computational cost of both the MPEG frame loss and packet loss ratio (PLR) measurement algorithms has been carried out on a desktop personal computer (PC) and on a low-cost device, providing interesting results for deciding which one is better suited for use as a network probe. The system running on a PC can be used to measure at the core or access network, whereas the low-cost device can be used at the user’s premises.
The final results show that this model predicts the QoE of such video services better than the packet loss rate alone. Based on these results, the authors have defined a method to measure the QoE by capturing the live video channels, inspecting the packets to detect losses and feeding the measured parameters into the obtained model. They have implemented a prototype and measured its performance, testing its feasibility on both PC and low-cost probes.
As future work to improve the QoE estimation model, the authors will investigate how the amount of movement also influences the perceived QoE. It would also be interesting to check how well the obtained model fits an experiment in which a panel of users provides a subjective evaluation of the videos watched.
In order to estimate the effect of context on the QoE of audiovisual services, the authors compared the results of formal subjective audiovisual assessment with more informal assessments performed in actual usage contexts (in this case, two public exhibition halls). They observed significant differences in the results, both in terms of the mean opinion score (MOS) values and in the impact of the different quality-affecting factors. Interestingly, the results show that the subjects in public places were less tolerant of quality degradations than the subjects in the laboratory. As future work, tests separating the effects of contextual factors on 1) voting behavior and 2) actual experience should be conducted.
In this chapter, the authors thus compared the results of a laboratory-based audiovisual assessment campaign with those of two separate (and smaller scale) campaigns carried out in public places, in a completely different context. Besides the explicit goal of comparing the results of subjective assessments in lab versus non-lab environments, this work provides a first step toward developing context-specific bias functions that easily and cheaply adapt quality models, typically trained on laboratory data, to new contexts of use. These experiments are the first in a series aimed at understanding the effect the context of use has on QoE.
The authors also demonstrated the viability and limitations of an audiovisual model trained on laboratory-obtained data when used in a different context, namely in crowded public places. The performance of the model in the exhibition context was inferior to its performance in the laboratory context. However, the estimations could still provide usable values for quality monitoring purposes, e.g. for public displays.
In addition, the authors are currently working on a model calibration method that uses information derived from lightweight user tests performed in a specific context. The idea is to test and model the effects of the dominating influence factors in order to formulate a context-specific correction function. To this end, and in order to understand different contexts of use and devices in general, user tests outside the laboratory will be continued.
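As a sketch of what such a context-specific correction function could look like, the following fits a simple linear bias between laboratory and exhibition MOS values; the paired ratings and the linear form are purely illustrative assumptions, not the authors’ calibration data or method.

```python
import numpy as np

# Hypothetical paired MOS values for the same test conditions,
# rated in the laboratory and at the exhibition.
mos_lab = np.array([4.5, 3.8, 3.1, 2.4, 1.9])
mos_exhibition = np.array([4.1, 3.3, 2.6, 2.0, 1.6])

# Least-squares fit of a linear bias function: mos_ctx ~ a * mos_lab + b.
a, b = np.polyfit(mos_lab, mos_exhibition, deg=1)

def correct_for_context(mos_model):
    """Adapt a laboratory-trained model output to the new context."""
    return a * mos_model + b

print(f"bias: a={a:.2f}, b={b:.2f}, corrected 3.5 -> {correct_for_context(3.5):.2f}")
```

A bias function of this kind lets a provider reuse an expensive laboratory-trained quality model in a new context at the cost of only a handful of lightweight in-context ratings.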
In this chapter, the authors justify the need for suitable solutions that determine the quality of a video sequence based on multiservice probes. The chapter deals with an IPTV multiservice QoE management system developed as part of the SAVAGE project.
The SAVAGE project aims to design and develop an advanced system for IPTV multiservice quality management. The system consists of a multiservice advanced probe embedded in a monitoring platform that can operate automatically and remotely, providing information on the QoE perceived by the end users. The SAVAGE project started in July 2011 and ended in December 2013; the authors are now integrating the QoE algorithms inside the multiservice probes.
This chapter describes the concepts and proposals related to the objective assessment of the quality of audiovisual services, that is, the automatic estimation of the QoE perceived by users. The authors first review the state of the art in multimedia quality metrics, and then describe the multiservice IPTV probe to be developed during the project. In addition, they present the global QoE management system.
The proposed system is intended to deal with the most common problems in these networks, especially packet losses and network delays, which can cause severe degradations of video quality such as blocking effects (i.e. macroblocking), video freezes and audio losses. Other impairments, such as coding artifacts or capture distortions, could also affect the quality perceived by the end users. However, in real video delivery systems these degradations are less dramatic, since an acceptable quality of service should usually be guaranteed under normal conditions.
Furthermore, the assessment of video quality would facilitate the work of planning and designing distribution networks, and could allow video distributors to implement user fees based on the final quality that users can enjoy in their homes. Another interesting application is real-time monitoring of the video quality perceived by the end user of delivery networks.
Further activities of this project will both consolidate the management architecture and implement the multiservice probe in terms of a prototype to measure the QoS in IPTV platforms.
This chapter presents a system that unifies the entire process involved in flow classification at high speed. It captures the traffic, builds flows from the received packets and, finally, classifies them inside a graphics processing unit (GPU), at 10 Gbps using commodity hardware.
The authors propose a technique to speed up Deep Packet Inspection (DPI) processing for multimedia protocols using GPUs, and a methodology to integrate it inside a network probe. This shows that DPI with deterministic finite automata (DFA) can be used at very high speed with practically no system overhead. However, problems beyond high-speed traffic classification arise: it is difficult to obtain real high-speed traffic (10 Gbps and over) and to build the flows on the fly, which in other contexts (e.g. below 1 Gbps) might seem trivial. The GPU modules process up to 29.7 Gbps, which means about 14.5 mega flows per second.
The tests show how important signatures are when using DPI for flow classification. Signatures define the accuracy of protocol classification. The accuracy of each signature and how it influences false positives and false negatives should be studied during the process of signature creation. An example is the Real-time Transport Protocol (RTP).
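To make the signature-accuracy issue concrete, here is a minimal payload-signature classifier; the signatures are simplified illustrations rather than the project’s DFA rule set. Note how weak the RTP heuristic is: it inspects only the two version bits of the first byte, so unrelated traffic can match, which is exactly the false-positive risk discussed above.

```python
import re

# Simplified, illustrative signatures; real DPI engines compile large
# signature sets into a single DFA for line-rate matching on the GPU.
SIGNATURES = [
    ("RTSP", re.compile(rb"^(RTSP/1\.0|SETUP |PLAY |DESCRIBE )")),
    ("SIP",  re.compile(rb"^(INVITE |REGISTER |SIP/2\.0)")),
    ("HTTP", re.compile(rb"^(GET |POST |HTTP/1\.[01])")),
]

def classify(payload: bytes) -> str:
    for proto, sig in SIGNATURES:
        if sig.match(payload):
            return proto
    # RTP has no magic string: the usual heuristic checks that the two
    # high bits of the first byte encode version 2, which many random
    # payloads also satisfy -- a classic source of false positives.
    if payload and (payload[0] >> 6) == 2:
        return "RTP?"
    return "unknown"

print(classify(b"GET /index.html HTTP/1.1\r\n"))  # HTTP
print(classify(bytes([0x80, 0x60, 0x00, 0x01])))  # RTP? (weak match)
```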
Finally, the authors point out that the proposed system offers a wide variety of possibilities and configurations, allowing its use for other types of classification, such as the Hypertext Transfer Protocol (HTTP), peer-to-peer (P2P) or other non-multimedia protocols. In addition, its high configurability allows us to vary the latency and throughput according to the needs of a given network.
The results show that the achieved performance is very much influenced by the number of protocols to find, and it is limited by the number of network flows. In any case, the system reaches up to 29.7 Gbps (about 14.5 mega flows per second).
This chapter presents a new routing algorithm based on QoE for Content Distribution Network (CDN) architectures. Theoretically, a CDN architecture has two main layers: the routing layer and the metarouting layer. The latter is composed of several modules, such as server placement, cache organization and server selection. The first two modules belong to the preparation-related phase. More precisely, the server placement module tries to place the replica servers in an optimized way to minimize the delivery delay and the bandwidth consumption. Providers use the cache organization module to organize the content stored in replica servers in order to guarantee the availability, freshness and reliability of the content.
Besides these two preparation-related modules, the authors focus on the server selection module, which plays an important role in launching the operation phase of a CDN. The fundamental objective of server selection is obviously to offer better performance than the origin server; another added value of this selection process is lowering the cost of network resources. Choosing an appropriate server to serve users is not easy: the appropriate server may be neither the closest one in terms of hop count or end-to-end delay, nor the least loaded one. The best server is the one that leaves the end user satisfied with the provided service. The server selection process therefore plays a key role in the success of a CDN.
The chapter reviews related research on server selection methods in the context of CDNs. Subsequently, the authors explain their motivation for developing a server selection scheme based on a machine learning approach known as multi-armed bandits.
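As a minimal sketch of the multi-armed bandit formulation (using the standard UCB1 index; the chapter’s own algorithm and reward definition may differ), each replica server is an arm and the observed QoE of a served session is the reward:

```python
import math
import random

class UCB1ServerSelector:
    """Treat each CDN replica server as a bandit arm; the reward is the
    (normalized) QoE observed for a session served by that server."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.counts = {s: 0 for s in self.servers}
        self.mean_qoe = {s: 0.0 for s in self.servers}
        self.t = 0

    def select(self):
        self.t += 1
        # Play every arm once before applying the UCB index.
        for s in self.servers:
            if self.counts[s] == 0:
                return s
        # Exploit high observed QoE, but keep exploring rarely used servers.
        return max(self.servers, key=lambda s: self.mean_qoe[s]
                   + math.sqrt(2 * math.log(self.t) / self.counts[s]))

    def update(self, server, qoe):
        self.counts[server] += 1
        n = self.counts[server]
        self.mean_qoe[server] += (qoe - self.mean_qoe[server]) / n

# Toy usage: three replicas with different (unknown) mean QoE in [0, 1];
# the values and server names are hypothetical.
true_qoe = {"edge-a": 0.9, "edge-b": 0.7, "edge-c": 0.5}
selector = UCB1ServerSelector(true_qoe)
for _ in range(1000):
    s = selector.select()
    selector.update(s, random.gauss(true_qoe[s], 0.1))
print(max(selector.mean_qoe, key=selector.mean_qoe.get))  # likely "edge-a"
```

The appeal of the bandit framing is precisely the point made above: the best server is unknown in advance and is not simply the closest or least loaded one, so it must be learned from user-side QoE feedback while still serving traffic.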
In this chapter, the authors discuss the different transport approaches for adaptive video streaming media, and how they influence the QoE. These approaches are based solely on the HTTP protocol, and are specially designed for video transport over the Internet to support a wide range of devices and maximize the end user’s perceived quality. The leading groups and companies, e.g. Microsoft, Apple, Adobe and MPEG/3GPP, have introduced their own standard approaches to facilitate on-demand or live adaptive video streaming transport over HTTP.
The main goal of adaptive video streaming is to improve and optimize the user’s QoE by changing the video quality according to network parameters, the end user’s device properties and other characteristics. There are five main quality metrics of video streaming that affect the user’s engagement during video watching and influence the user’s QoE. The adaptive video streaming approaches use the Transmission Control Protocol (TCP) as transport protocol. Based on network conditions, TCP parameters provide the client with vital information, and the streaming is managed on the end user’s side.
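A toy illustration of the client-driven adaptation loop common to these approaches follows; the bitrate ladder, safety margin and buffer threshold are invented for the example and are not taken from any of the standards.

```python
# Available representations, as advertised in a DASH MPD or HLS playlist.
BITRATES_KBPS = [400, 1000, 2500, 5000]

def choose_bitrate(throughput_kbps, buffer_s, safety=0.8, min_buffer_s=5.0):
    """Pick the highest representation sustainable at the measured TCP
    throughput, dropping to the lowest one when the buffer runs low.
    All thresholds here are illustrative assumptions."""
    if buffer_s < min_buffer_s:          # imminent stall: prioritize continuity
        return BITRATES_KBPS[0]
    budget = throughput_kbps * safety    # keep headroom below measured rate
    feasible = [b for b in BITRATES_KBPS if b <= budget]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

print(choose_bitrate(throughput_kbps=3200, buffer_s=12.0))  # -> 2500
print(choose_bitrate(throughput_kbps=3200, buffer_s=2.0))   # -> 400
```

The logic runs entirely on the client, which is what distinguishes these HTTP-based schemes from server-controlled streaming: the server just serves segments, while the client trades off bitrate against stall risk.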
This chapter analyzes the effect of a common traffic metric, i.e. packet losses, on the quality parameters, i.e. QoS and QoE, in a multimedia video streaming application, namely peer-to-peer television (P2PTV). Traditionally, QoS has been used to assess and guarantee compliance with the deployed Service Level Agreements (SLAs). However, most of the network performance metrics used to estimate the QoS are limited to certain aspects of the traffic, without considering the end user’s subjective perception.
In this context, with the increasing presence of multimedia traffic, the user’s perception (QoE) of networked (multimedia) services has become a major concern for content providers and network operators. While a plethora of works propose solutions for QoS and QoE, in this chapter the authors focus on the relationship between a usual traffic metric and the QoS and QoE assessment.
In this chapter, the authors present a model for the estimation of user-perceived quality, or QoE, from network and/or service performance and quality (QoS) parameters in multimedia services, and specifically in triple-play (3P) services: television (TV), telephony and data services, managed and offered by a single operator as a single package. In particular, the chapter focuses on 3P convergent services (deployed over a common, IP-based transport network), and on the relationship between the quality perceived by the users of such services and the performance parameters of the underlying network. Specifically, it contributes to the online estimation of such quality (i.e. during service delivery, in real or near-real time).
The chapter thus presents a model for the estimation of quality as perceived by the users (i.e. the user QoE) in 3P services. The model is based on a matrix framework defined in terms of user types, service components and user perceptions on the user side, and agents, agent capabilities and performance indicators on the network side. A quality evaluation process, based on several layers of evaluation functions, is described, which allows us to estimate the overall quality of a set of convergent services, as perceived by the users, from a set of performance and/or QoS parameters of the convergent IP transport network.
The full sets of services, user perceptions, valuation factors, agents, agent capabilities and performance indicators are provided. The full matrix of matching points between agent capabilities and user perceptions has been developed for the particular case of residential (domestic) users with a specific information flow (contents server external to the Internet service provider (ISP), no contents caching outside the contents provider). Valuation and parameterization functions for all services are provided. For global service quality evaluation, weights for the final services, derived from service usage statistics, are provided, together with an example of the use of the analytic hierarchy process (AHP) method for deriving the weights of the elementary services of a final service (Internet access) and the weights of the perceptions of an elementary service (digital video broadcast in IPTV). Statistical results for the quality model of a representative service (video quality in IPTV) are presented.
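As a compact illustration of the AHP step mentioned above (the pairwise judgments and perception scores below are invented, not the chapter’s data), the weights are the normalized principal eigenvector of a pairwise-comparison matrix:

```python
import numpy as np

# Hypothetical pairwise comparisons among three user perceptions of an
# elementary service (say, picture quality, zapping time, audio quality):
# A[i, j] says how much more important perception i is than perception j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# AHP weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()
print(np.round(weights, 3))  # roughly [0.65, 0.23, 0.12]

# Overall quality is then the weighted sum of per-perception scores.
scores = np.array([4.2, 3.5, 4.0])  # illustrative per-perception MOS values
print(round(float(weights @ scores), 2))
```

The attraction of AHP in this setting is that experts only have to make pairwise "which matters more, and by how much" judgments; the eigenvector computation turns those judgments into a consistent weight vector for the evaluation functions.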
In summary, the chapter shows the applicability of the proposed model to the estimation of perceived quality (QoE) in convergent 3P services.
The Celtic Plus Quality of Experience Estimators in Networks (QuEEN) project was conceived to create a suitable conceptual framework for QoE, and to make it operational by means of a suitable software architecture and quality models for different services, covering the full stack from the infrastructure on which a service runs to the user who experiences it. In this chapter, the authors present some of the conceptual results produced so far within QuEEN (and other related activities, such as COST Action IC1003 Qualinet), and the proposed mechanisms for making these concepts operational; that is, a way to theoretically model QoE for any type of (online) service, and a way to go from these theoretical models to concrete implementations. Furthermore, they introduce some applications of this approach and of QoE in general, such as SLA management and QoE-driven network management. The rest of the chapter is organized as follows. In section 11.2, the authors present an overview of the main state-of-the-art concepts related to QoE that have been produced within QuEEN. Section 11.3 presents the Application-Resource-Context-User (ARCU) model, which provides the theoretical framework proposed for developing QoE models. Section 11.4 details the proposed mechanism for making these models operational, known as the QuEEN layered model. Section 11.5 introduces applications of QoE, in particular as envisioned within QuEEN. Finally, section 11.6 concludes the chapter and discusses possible lines for future research in this domain.
In this chapter, the authors have provided an overview of the QuEEN project’s approach to estimating QoE for generic services, and exploiting these estimates in various ways. They propose a conceptual framework for understanding QoE, for different services and in different timescales, as well as a model to make this conceptual framework operational. The QuEEN-Agent provides a flexible distributed implementation of the QuEEN layered model, allowing us to estimate the quality of different services in different locations, and to feed these estimates to QoE-aware applications, such as monitoring, network management, or service level management, to name a few. Moreover, the QuEEN-Agent provides standard Simple Network Management Protocol (SNMP) interfaces so it can be easily integrated into existing monitoring and management tool-chains. The authors expect that these results will enable service and network providers to easily improve their offerings in terms of QoE, leading to better customer satisfaction and lower churn rates.
The chapter presents a new method to take QoE into account (among other metrics) for network selection, which also provides better load balancing between the different networks. The method is a user-based and network-assisted approach. In fact, the increasing demand to be connected anywhere, anytime and anyhow has encouraged the deployment of heterogeneous networks mixing technologies such as Long Term Evolution (LTE), Wi-Fi and WiMAX. At the same time, most new user terminals (e.g. smartphones and tablets) are equipped with multiple interfaces, which allow them to select the access network that offers the best quality. Managing networks in such an environment is challenging. Moreover, QoE has nowadays become a crucial issue due to the phenomenal growth in multimedia traffic. As user satisfaction is the key to the success of any service, network selection should be centered on QoE, the quality perceived by the end user. In other words, a network selection mechanism should select the network that offers the best QoE, while trying to optimize network resources.
In this chapter, the authors first present a QoE-based network selection mechanism for a homogeneous environment: when several points of attachment are available, they propose a QoE-based solution that allows users to select the best one while keeping the load balanced among them. The chapter then provides a network selection scheme for a heterogeneous environment, which is the main focus of the rest of the chapter.
By providing users with relevant information about the network for the decision-making process, this approach is a good compromise for both the user and the network operator.
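A minimal sketch of such a QoE-centered selection rule with a load-balancing term might look as follows; the MOS estimates, load values and penalty weight are illustrative assumptions, not the authors’ mechanism.

```python
# Candidate attachment points with a per-network QoE estimate for the
# user's service (hypothetical MOS values) and a current load in [0, 1]
# reported via network assistance.
networks = {
    "LTE":    {"mos": 4.2, "load": 0.85},
    "WiFi-1": {"mos": 4.0, "load": 0.30},
    "WiFi-2": {"mos": 3.4, "load": 0.10},
}

def score(net, load_weight=1.5):
    """QoE-centric score with a load-balancing penalty (assumed form)."""
    return net["mos"] - load_weight * net["load"]

best = max(networks, key=lambda name: score(networks[name]))
print(best)  # WiFi-1: slightly lower MOS than LTE, but far less loaded
```

The penalty term captures the compromise described above: each user still chooses by expected QoE, but heavily loaded networks become less attractive, so the population of users spreads across the available technologies.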
This book sets out to provide comprehensive coverage of QoE aspects for heterogeneous wireless/wired and optical networks. It is clear that the integration of end-to-end QoE parameters will increase the complexity of the algorithms used in heterogeneous networks. Thus, there will be QoE-relevant technological challenges in today’s emerging heterogeneous networks, which include different types of networks (e.g. wired, wireless and mobile).
The book contains 12 chapters and covers a very broad variety of topics. There is a very extensive literature on end-to-end QoS mechanisms, and to give a complete bibliography and a historical account of the research that led to the present form of the subject would have been impossible. It is, thus, inevitable that some topics have been treated in less detail than others. The choices made reflect in part personal taste and expertise, and in part a preference for very promising research and recent developments in the field of end-to-end QoE technologies.
Finally, we thank all the contributors to this book for their research and effort.
Antonio CUADRA-SANCHEZ, Mar CUTANDA-RODRIGUEZ, Andreas AURELIUS, Kjell BRUNNSTRÖM, Jorge E. LÓPEZ DE VERGARA, Martin VARELA, Jukka-Pekka LAULAJAINEN, Anderson MORAIS, Ana CAVALLI, Abdelhamid MELLOUK, Brice AUGUSTIN and Ismael PEREZ-MATEOS
The continuous emergence of new services, along with increasing competition, is forcing network operators and service providers to focus all their efforts on customer satisfaction, although determining the quality of experience (QoE) is not a trivial task. In addition, the evolution from traditional networks toward next generation networks (NGN) is enabling service providers to deploy a wide range of multimedia services, such as Internet protocol television (IPTV), video on demand (VoD) and multiplayer games, all on the same underlying Internet protocol (IP) network. However, managing the satisfaction level of customers to provide a good user experience is not an easy task, due to the complexity of orchestrating network and customer data sources. This chapter proposes an ecosystem that allows the management of customer experience in order to guarantee the quality levels delivered to end users; it has been defined within the Eureka Celtic IPNQSIS project and is being adapted for over-the-top (OTT) services within the Eureka Celtic NOTTS project. The QoE ecosystem rests on a customer experience architecture formed by a data acquisition level, a monitoring level and a control level. The work proposed in this chapter will lay the basis of next generation customer experience management (CEM) systems.
The multimedia landscape offered over the Internet today is very rich and rapidly changing. New and attractive services may be created and spread quickly, with the help of social networks and recommendation mechanisms. It has become increasingly difficult to predict the future in this complex and rapidly changing multimedia ecosystem.
The fast technological development has led to new habits and new behavior in end user media consumption. More media is consumed over digital networks, and there is a large number of different terminals on which to consume it.
This situation creates challenges for network operators and service providers in delivering services to end users with acceptable quality. Users who are dissatisfied with the perceived quality are likely to switch to other service providers or operators. In light of this development, it is obvious that monitoring and control of service quality are of increasing importance to avoid customer churn.
This challenge is dealt with in the Celtic IPNQSIS project, and this chapter summarizes the CEM architecture proposed in the project to face it. The CEM is implemented for the business case of IPTV, but its usage can be extended to other services as well.
Of paramount importance in the CEM is the QoE component. This component contains metrics that quantify customer satisfaction with the offered service. One reason for using QoE metrics instead of traditional Quality of Service (QoS) metrics is that QoS does not correlate well enough with the actual user experience in the rich media landscape of today. The experience of a single user is naturally subjective, and hence impossible to predict, but it has been shown that the mean experience of a panel of users is quite a stable metric. This gives good hope that QoE may be used for monitoring and controlling the user experience of, for example, TV services in operator networks.
The CEM is further described in section 2.2 of this chapter. The individual components of the CEM are data sources, the monitoring system and the management system, all of which are described in section 2.3. The Celtic IPNQSIS project is introduced in section 2.4 and the Celtic NOTTS project is further described in section 2.5.
The CEM approach is designed to focus on procedures and a methodology to satisfy the service quality needs of each end user. Telecom operators are focusing on solutions to maximize the customer experience on audio and video services.
CEM solutions essentially provide a service quality monitoring architecture to manage and optimize end-to-end (e2e) customer experience. In 2009, the TeleManagement Forum launched a working group called Managing Customer Experience (MCE), which constituted the major initiative to establish the links between e2e service quality and customer experience.
The MCE program released three reference deliverables:
– TR 148 [TMF 09a] examines the factors that influence customer experience and also a number of business scenarios for the delivery of digital media services, such as IPTV, Mobile TV, Enterprise IPVPN, and Blackberry, through a chain of co-operating providers.
– TR 149 [TMF 09b] describes the customer experience/SQM (service quality management) framework that has been designed to meet the need for assuring e2e quality of customer experience when services are delivered using a chain of co-operating providers. It aims to support the business scenarios and requirements described in TR 148.
– TR 152 [TMF 09c] captures, at an executive level, the main results of the managing customer experience focus area catalyst presented at Management World Orlando 2008.
As its main input, CEM uses the objective QoS parameters that contribute to QoE, i.e. NQoS (network QoS indicators) and AQoS (application QoS indicators). Combining NQoS and AQoS, we can estimate how the QoE is affected by encoding and transporting multimedia services. Nonetheless, QoE is a subjective measure, so subjective assessment remains the only fully reliable method.
This means that CEM must also take customer feedback into account. On the other hand, subjective testing is expensive and time-consuming, and reference content is sometimes missing. Therefore, the CEM system (CEMS) solution should make the most of a minimum of subjective tests on reference material by building prediction models for real-time estimation.
The first steps of the CEMS architecture developed in the context of the IPNQSIS project focus on the construction of accurate as well as practical QoE prediction models. As a first step, we set out to measure and predict the user’s QoE of multimedia streaming in order to optimize the provisioning of streaming services. This enables us to better understand how QoS parameters affect the service quality as it is actually perceived by the end user. Over the last years, this goal has been pursued by means of subjective tests and through the analysis of the user’s feedback. Our CEMS solution [IPQ 12] proposes a novel approach for building accurate and adaptive QoE prediction models by using, among other methods, machine learning classification algorithms, trained on subjective test data.
These models can be used for real-time prediction of QoE and can be efficiently integrated into online learning systems that adapt the models to changes in the network environment. With accuracies above 90%, these classification algorithms become an indispensable component of a multimedia QoE management system.
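As a minimal sketch of this idea (using scikit-learn’s decision tree for concreteness; the features, labels and training data below are invented, and the project’s models are considerably richer):

```python
from sklearn.tree import DecisionTreeClassifier

# Toy training set standing in for subjective test data:
# features = [packet_loss_%, jitter_ms], label = QoE class.
X = [
    [0.0,  5], [0.1, 10], [0.2, 15],   # good conditions
    [0.8, 30], [1.0, 40], [1.5, 35],   # degraded
    [3.0, 80], [5.0, 90], [4.0, 60],   # bad
]
y = ["good", "good", "good",
     "fair", "fair", "fair",
     "poor", "poor", "poor"]

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Real-time use: feed live QoS measurements to the trained classifier.
print(model.predict([[0.3, 12], [2.5, 70]]))  # e.g. ['good' 'poor']
```

In a deployed CEMS, the training labels would come from subjective tests on reference material, and the model could be retrained online as network conditions or service mixes change.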
TeleManagement Forum TR 148 [TMF 09a] defines service quality management (SQM) as the set of features displayed by an operation support system (OSS) that allow the management of the quality of the different products and services offered by an enterprise. On the other hand, ITU-T defines QoS as “The totality of characteristics of a telecommunications service that bear on its ability to satisfy stated and implied needs of the user of the service” [ITU 08]. Therefore, the term “QoS” is used in this document as a quality figure rather than referring to the ability to reserve resources.
SQM refers to the level of satisfaction a customer perceives when using a given service. To manage this proactively, the e2e components that make up the service must be monitored and maintained. Typically, e2e service quality management requires a powerful data aggregation engine and a tool for e2e mapping of services. As such, SQM systems make use of collected information (regarding user-perceived QoS and the performance of the provision chain) in order to better guarantee the quality of the offered services.
Customer traffic data is collected in order to characterize service usage. These activities enable the generation of key performance and key quality indicators (KPIs/KQIs), threshold management, SLA surveillance and real-time monitoring, and are the most appropriate for the CEMS approach.
The QoS perceived by the customer depends on:
1) the components that set up the service;
2) business processes related to the service;
3) the resources on which the processes are supported;
4) the performance of the underlying network.
With the purpose of quantifying the perceived QoS, we must collect the KQI and KPI metrics for the services, and apply a methodology that correlates all the network factors.
QoS is targeted toward measuring and controlling network parameters. It has been recognized for some time that this is not enough. For example, if network congestion leads to packet loss, one decoder may render it as a freeze in the video, while a different decoder may show it as a short-lived distortion in part of the image. Although the measured packet loss is the same, the user experience is very different. It is not only important to know what is actually presented to the user when an error occurs; it is essential to understand how it affects the human experience.
This understanding has led to the definition of QoE as a concept that also encompasses the experience of the user when using a service [CAL 12]. The most accurate way of estimating QoE is subjective testing, which could even be devised for live services, yet it may still not be sufficient. In a network that should be proactive, i.e. reacting and adjusting the QoE before the user gets annoyed and calls support, or even stops using the services, there is a need for objective metrics that can estimate the QoE for the different services in the network. Most likely, different services require different metrics. Before these metrics can be applied and trusted, they have to be trained and evaluated using data collected from subjective tests.
We may explore two different approaches to build QoE-related datasets and ultimately assess the impact of various parameters on the e2e QoE: (1) a controlled experiment with a number of volunteers asked to rate short videos, or (2) a crowd-sourced experiment collecting QoE data from a large number of volunteers, thus covering a wide range of situations.
When these metrics have come into place, the aim of IPNQSIS and NOTTS can be realized, i.e. optimizing the network performance guided by QoE measurements and estimations. As such, the effects of the control operations realized on the network will have a maximal impact on the actual service quality experienced by the users.
This section describes the design of the overall architecture to manage the customer experience. There are three separate levels (see Figure 2.1) that are described in the following subsections: data acquisition level, monitoring level and control level, each level composed of different components. This reference architecture has been devised to define next generation CEMS, although not all components will be covered inside IPNQSIS scope.
Figure 2.1. IPNQSIS architecture
This architecture is modular and open to enable easy addition or removal of components, system parameters and features.
1) Data acquisition level (QoE data sources): this level gathers information from the different data sources: active and passive probes, and other probing technologies such as embedded agents or deep packet inspectors.
2) Monitoring level (QoE monitoring system): the input from the data sources is correlated, enriched and transformed to supervise both QoS and QoE. This level comprises all the components that transform basic indicators into customer experience metrics. A set of generic graphical user interface (GUI) tools is also considered at this level.
3) Control level (QoE management system): this level handles the QoE delivered to the customers and is fed back from the monitoring level in order to act proactively on the network to improve customer satisfaction.
The following sections explain each of these levels.
Probe systems are an increasingly popular tool to monitor real users' QoE, being able to reproduce user behavior in terms of automated tests carried out by active probes [CUA 10].
One of the main advantages of using these devices is that they provide greater versatility and flexibility than other systems based on mediation techniques, being able to be placed anywhere on the network and even acting as real users do.
Data provided by probes is usually highly detailed and offers an in-depth view of the network behavior and the QoS experienced by customers. These systems allow us to measure quality in terms of customer satisfaction and to optimize service levels across the value chain.
There are active and passive probes. Active probes simulate end users' behavior, sending requests for services and analyzing the response, therefore providing an e2e view of the network. Passive probes capture traffic exchanged between service providers and end users, offering a view of the whole network at any protocol level. Combining the information obtained by both types of probes offers a new solution for monitoring services for QoE enhancement.
In this section, the QoE monitoring element is described. It is composed of two components. The first component is the traffic monitor, which transforms the information gathered by the data sources into monitoring data. The second component is the QoS/QoE engine, which converts monitoring data into quality figures.
There are some basic requirements that a network monitoring tool should fulfill in order to provide an accurate analysis of the traffic flow for QoS/QoE measurements [CUA 11a]. One of them is to be able to capture packets at a very high rate from its underlying link without missing a significant portion of packets.
The IPNQSIS customer experience management system (CEMS) will be based on the traffic monitoring and service supervision system described in [CUA 11b], with all the required elements designed and implemented within the project. IPNQSIS will make use of enhanced hardware and software probes that operate at different levels (from the network core to end user applications) in order to build the monitoring component, extracting the QoS measurement data for the captured flows.
IPNQSIS will make use of deep packet flow inspection tools on access networks in order to place a strong focus on IP traffic monitoring. The monitoring component will model traffic parameters related to content distribution, traffic trends and user characterization, for instance, content popularity (by time, location and access type, etc.), traffic location, traffic mix and traffic variation.
The monitoring component is composed of active probes, passive probes and traffic classification modules. Probes will be adapted to deal with multimedia services like IPTV, and QoE measurements will be defined and implemented. Deep packet inspection methods and Bayesian classifiers (which are based on the inherent features of the network traffic) will be used by the traffic classification module. This module will provide means of detecting popular services for which QoE requirements will exist, feeding this relevant output to the control module.
A vital part of a customer experience management system is the capability of assessing the quality experienced by the users of the monitored networks and services. As described in the previous sections, the system being developed in the IPNQSIS project is capable of gathering low-level network quality measurement information from different types of network probes and using it to form network quality awareness at the monitoring level.
This information alone, however, does not provide a good insight into how the applications using the network are performing from the user’s perspective. For this reason, another component, the QoS/QoE engine, is added to map the network QoS data to QoE estimations. If the relationship between the QoS measurements and human perception is clearly understood [FIE 10], the information offered by QoS can be used to improve the decision criteria used in the network systems and to optimize the user’s QoE [CUA 11c].
QoE is a subjective measure of a user’s experience with the service being used. Its automatic real-time measurement is challenging by definition, because the natural way of measuring it, asking the user’s opinion, is difficult in practical scenarios. To mimic the experience of human subjects, different types of methods for mapping QoS parameters to QoE scores have been developed in the IPNQSIS project. Common to most of them is that some type of model (e.g. a neural network or a fuzzy system) of user experience is trained with controlled user tests in such a way that, when the model is used later, it can give sufficiently accurate estimations of user-perceived quality just by observing objective quality parameters such as packet loss or jitter.
