Presents current trends and potential future developments by leading researchers in immersive media production, delivery, rendering and interaction
The underlying audio and video processing technology discussed in the book relates to areas such as 3D object extraction, audio event detection, 3D sound rendering, face detection, and gesture analysis and tracking using video and depth information. The book will give an insight into current trends and developments in future media production, delivery and reproduction. Consideration of the complete production, processing and distribution chain will allow a full picture to be presented to the reader. Production developments covered will include integrated workflows developed by researchers and industry practitioners, as well as capture of ultra-high resolution panoramic video and 3D object-based audio across a range of programme genres. Distribution developments will include script-based, format-agnostic network delivery to a full range of devices, from large-scale public panoramic displays with wave field synthesis and ambisonic audio reproduction to ‘small screen’ mobile devices. Key developments at the consumer end of the chain apply to both passive and interactive viewing modes and will incorporate user interfaces such as gesture recognition and ‘second screen’ devices to allow manipulation of the audio-visual content.
Page count: 820
Year of publication: 2013
Contents
Cover
Title Page
Copyright
List of Editors and Contributors
Editors
Contributors
List of Abbreviations
Notations
General
Specific Symbols
Chapter 1: Introduction
Chapter 2: State-of-the-Art and Challenges in Media Production, Broadcast and Delivery
2.1 Introduction
2.2 Video Fundamentals and Acquisition Technology
2.3 Audio Fundamentals and Acquisition Technology
2.4 Live Programme Production
2.5 Coding and Delivery
2.6 Display Technology
2.7 Audio Reproduction Technology
2.8 Use of Archive Material
2.9 Concept of Format-Agnostic Media
2.10 Conclusion
Notes
References
Chapter 3: Video Acquisition
3.1 Introduction
3.2 Ultra-High Definition Panoramic Video Acquisition
3.3 Use of Conventional Video Content to Enhance Panoramic Video
3.4 High Frame Rate Video
3.5 High Dynamic Range Video
3.6 Conclusion
Notes
References
Chapter 4: Platform Independent Audio
4.1 Introduction
4.2 Terms and Definitions
4.3 Definition of the Problem Space
4.4 Scene Representation
4.5 Scene Acquisition
4.6 Scene Reproduction
4.7 Existing Systems
4.8 Conclusion
References
Chapter 5: Semi-Automatic Content Annotation
5.1 Introduction
5.2 Metadata Models and Analysis Architectures
5.3 Domain-independent Saliency
5.4 Person Detection and Tracking
5.5 Online Detection of Concepts and Actions
5.6 Supporting Annotation for Automated Production
5.7 Conclusion
References
Chapter 6: Virtual Director
6.1 Introduction
6.2 Implementation Approaches
6.3 Example Architecture and Workflow
6.4 Virtual Director Subprocesses
6.5 Behaviour Engineering: Production Grammar
6.6 Virtual Director: Example Prototype
6.7 Conclusion
References
Chapter 7: Scalable Delivery of Navigable and Ultra-High Resolution Video
7.1 Introduction
7.2 Delivery of Format-Agnostic Content: Key Concepts and State-of-the-Art
7.3 Spatial Random Access in Video Coding
7.4 Models for Adaptive Tile-based Representation and Delivery
7.5 Segment-based Adaptive Transport
7.6 Conclusion
References
Chapter 8: Interactive Rendering
8.1 Introduction
8.2 Format-Agnostic Rendering
8.3 Device-less Interaction for Rendering Control
8.4 Conclusions
References
Chapter 9: Application Scenarios and Deployment Domains
9.1 Introduction
9.2 Application Scenarios
9.3 Deployment in the Production Domain
9.4 Deployment in the Network Domain
9.5 Deployment in the Device Domain
9.6 Deployment in the User Domain
9.7 Conclusion
References
Index
This edition first published 2014 © 2014 by John Wiley & Sons, Ltd
Registered office: John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication Data
Media production, delivery, and interaction for platform independent systems : format-agnostic media / by Oliver Schreer, Jean-François Macq, Omar Aziz Niamut, Javier Ruiz-Hidalgo, Ben Shirley, Georg Thallinger, Graham Thomas.
pages cm
Includes bibliographical references and index.
ISBN 978-1-118-60533-2 (cloth)
1. Video recording. 2. Audio-visual materials. 3. Video recordings--Production and direction. I. Schreer, Oliver, editor of compilation.
TR850.M395 2014
777-dc23
2013027963
A catalogue record for this book is available from the British Library.
ISBN: 978-1-118-60533-2
List of Editors and Contributors
Editors
Dr. Oliver Schreer
Scientific Project Manager, Fraunhofer Heinrich Hertz Institut and Associate Professor Computer Vision & Remote Sensing, Technische Universität Berlin, Berlin, Germany
Dr. Jean-François Macq
Senior Research Engineer, Alcatel-Lucent Bell Labs, Antwerp, Belgium
Dr. Omar Aziz Niamut
Senior Research Scientist, The Netherlands Organisation for Applied Scientific Research (TNO), Delft, The Netherlands
Dr. Javier Ruiz-Hidalgo
Associate Professor, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
Ben Shirley
Senior Lecturer at University of Salford, Salford, United Kingdom
Georg Thallinger
Head of Audiovisual Media Group, DIGITAL – Institute for Information and Communication Technologies, JOANNEUM RESEARCH, Graz, Austria
Professor Graham Thomas
Section Lead, Immersive and Interactive Content, BBC Research & Development, London, United Kingdom
Contributors
Werner Bailer
Key Researcher at Audiovisual Media Group, DIGITAL – Institute for Information and Communication Technologies, JOANNEUM RESEARCH, Graz, Austria
Dr. Johann-Markus Batke
Senior Scientist, Research and Innovation, Audio and Acoustics Laboratory, Deutsche Thomson OHG, Hannover, Germany
Malte Borsum
Research Engineer, Image Processing Laboratory, Deutsche Thomson OHG, Hannover, Germany
Ray van Brandenburg
Research Scientist, The Netherlands Organisation for Applied Scientific Research (TNO), Delft, The Netherlands
Dr. Arvid Engström
Researcher, Mobile Life Centre, Interactive Institute, Kista, Sweden
Ingo Feldmann
Scientific Project Manager at ‘Immersive Media & 3D Video’ Group, Fraunhofer Heinrich-Hertz Institut, Berlin, Germany
Rene Kaiser
Key Researcher at Intelligent Information Systems Group, DIGITAL – Institute for Information and Communication Technologies, JOANNEUM RESEARCH, Graz, Austria
Axel Kochale
Senior Development Engineer, Image Processing Laboratory, Deutsche Thomson OHG, Hannover, Germany
Marco Masetti
Networked Media Team Leader, Research & Innovation, Softeco Sismat Srl, Genoa, Italy
Frank Melchior
Lead Technologist, BBC Research & Development, Salford, United Kingdom
Dr. Rob Oldfield
Audio Research Consultant, Acoustics Research Centre, University of Salford, United Kingdom
Martin Prins
Research Scientist, The Netherlands Organisation for Applied Scientific Research (TNO), Delft, The Netherlands
Dr. Patrice Rondão Alface
Senior Research Engineer, Alcatel-Lucent Bell Labs, Antwerp, Belgium
Richard Salmon
Lead Technologist, BBC Research & Development, London, United Kingdom
Dr. Johannes Steurer
Principal Engineer Research & Development, ARRI, Arnold & Richter Cine Technik GmbH & Co. Betriebs KG, München, Germany
Marcus Thaler
Researcher at Audiovisual Media Group, DIGITAL – Institute for Information and Communication Technologies, JOANNEUM RESEARCH, Graz, Austria
Nico Verzijp
Senior Research Engineer, Alcatel-Lucent Bell Labs, Antwerp, Belgium
Wolfgang Weiss
Researcher at Intelligent Information Systems Group, DIGITAL – Institute for Information and Communication Technologies, JOANNEUM RESEARCH, Graz, Austria
Dr. Goranka Zorić
Researcher, Mobile Life Centre, Interactive Institute, Kista, Sweden
List of Abbreviations
2D: Two-dimensional
3D: Three-dimensional
3GPP: 3rd Generation Partnership Project
4D: Four-dimensional
4K: Horizontal resolution on the order of 4000 pixels, e.g. 3840×2160 pixels (4K UHD)
7K: Horizontal resolution on the order of 7000 pixels, e.g. 6984×1920 pixels
AAML: Advanced Audio Markup Language
ACES: Academy Color Encoding System
ADR: Automatic Dialogue Replacement
ADSL: Asymmetric Digital Subscriber Line
AIFF: Audio Interchange File Format
API: Application Programming Interface
AMPAS: Academy of Motion Picture Arts and Sciences
APIDIS: Autonomous Production of Images based on Distributed and Intelligent Sensing
ARMA: Auto Regressive Moving-Average model
ARN: Audio Rendering Node
ARRI: Arnold & Richter Cine Technik
ASDF: Audio Scene Description Format
ATM: Asynchronous Transfer Mode
AudioBIFS: Audio Binary Format for Scene Description
AV: Audio-visual
AVC: Advanced Video Coding
BBC: British Broadcasting Corporation
BWF: Broadcast Wave Format
CCD: Charge Coupled Device
CCFL: Cold Cathode Fluorescent Lamp
CCIR: Comité Consultatif International des Radiocommunications
CCN: Content-Centric Networking
CCU: Camera Control Unit
CDF: Content Distribution Function
CDFWT: Cohen-Daubechies-Feauveau Wavelet Transform
CDN: Content Delivery Network
CG: Computer Graphics
CGI: Computer Generated Imagery
CIF: Common Intermediate Format
CMOS: Complementary Metal-Oxide Semiconductor
COPSS: Content Oriented Publish/Subscribe System
CPU: Central Processing Unit
CRT: Cathode Ray Tube
CUDA: Compute Unified Device Architecture
DASH: Dynamic Adaptive Streaming over HTTP
dB: Decibel
DBMS: Data Base Management System
DCI: Digital Cinema Initiative
DLNA: Digital Living Network Alliance
DLP: Digital Light Processing
DMD: Digital Micromirror Device
DMIPS: Dhrystone Million Instructions Per Second
DOCSIS: Data Over Cable Service Interface Specification
DONA: Data-Oriented Network Architecture
DPX: Digital Picture Exchange
DSL: Digital Subscriber Line
DSLAM: Digital Subscriber Line Access Multiplexer
DSLR: Digital Single-Lens Reflex
DSP: Digital Signal Processor
DTAK: Dynamic Time Alignment Kernel
DTW: Dynamic Time Warping
DVB: Digital Video Broadcasting
DVD: Digital Versatile Disc
EBU: European Broadcasting Union
EBUCore: Basic metadata set defined by the EBU
EMD: Earth Mover's Distance
ENG: Electronic News Gathering
EOFOV: Edges Of Field Of View
EOTF: Electro-Optical Transfer Function
EPG: Electronic Program Guide
ESPN: Entertainment and Sports Programming Network
ESS: Extended Spatial Scalability
EXR: High Dynamic Range Image Format
FascinatE: Format-Agnostic SCript-based INterAcTive Experience
FCC: Fast Channel Change
FMO: Flexible Macro-block Ordering
FRN: Flexible Rendering Node
FSM: Finite State Machines
FTTH: Fibre-to-the-Home
FullHD: HD resolution of 1920×1080 pixels
GB: Gigabyte
GOP: Group Of Pictures
GPU: Graphical Processing Unit
GUI: Graphical User Interface
HAS: HTTP Adaptive Streaming
HBB: Hybrid Broadcast Broadband
HBBTV: Hybrid Broadcast Broadband TV
HD: High-Definition
HDMI: High-Definition Multimedia Interface
HDR: High Dynamic Range
HDTV: High-Definition Television
HEVC: High Efficiency Video Coding
HI: Hearing Impaired
HLFE: High-Level Feature Extraction
HMM: Hidden Markov Model
HOA: Higher Order Ambisonics
HOG: Histograms of Oriented Gradients
HQ: High Quality
HRTF: Head Related Transfer Function
HTML5: HyperText Markup Language 5
HTTP: HyperText Transfer Protocol
IBC: International Broadcasting Convention, annual industrial fair, Amsterdam, The Netherlands
IBR: Image-Based Rendering
ICP: Iterative Closest Point
ID: Identity
IEEE: Institute of Electrical and Electronics Engineers
IETF: Internet Engineering Task Force
IGMP: Internet Group Management Protocol
IMAX: Image Maximum (motion picture film format)
I/O: Input/Output
IP: Internet Protocol
IPTV: Internet Protocol Television
IROI: Interactive Region-Of-Interest
ISO: International Standards Organisation
IT: Information Technology
ITU: International Telecommunications Union
iTV: Interactive TV
JND: Just Noticeable Difference
JPEG: Joint Photographic Experts Group
JPIP: JPEG2000 over Internet Protocol
JSIV: JPEG2000-based Scalable Interactive Video
JVT: Joint Video Team
kB: Kilobytes
KLT: Tracking approach proposed by Kanade, Lucas, Tomasi
KLV: Key, Length, Value; a binary encoding format used in SMPTE standards
kNN: k-Nearest Neighbour
LBP: Local Binary Patterns
LCD: Liquid Crystal Display
LCS: Longest Common Subsequence
LDR: Low-Dynamic Range
LED: Light Emitting Diode
LF: Light Field
LFE: Low Frequency Effects
LIDAR: Light Detection And Ranging
LSR: Layered Scene Representation
MAD: Mean Absolute Difference
MAP: Mean Average Precision
MDA: Multi-Dimensional Audio
MLD: Multicast Listener Discovery
MOCA: Multimedia over Coax
MP4: MPEG-4 Part 14
MPD: Media Presentation Description
MPEG: Moving Picture Experts Group
MPLS: Multiprotocol Label Switching
MVC: Multiview Video Coding
MXF: Material eXchange Format
NAB: National Association of Broadcasters, synonym for the annually held industrial convention in Las Vegas, USA
NAT: Network Address Translation
NDN: Named Data Networking
NHK: Nippon Hoso Kyokai (Japan Broadcasting Corporation)
NTSC: National Television System Committee (analogue television standard used on most of the American continent)
NTT: Nippon Telegraph and Telephone Corporation (Japanese Telecom)
NVIDIA: An American global technology company based in Santa Clara, California
OB: Outside Broadcast
OLED: Organic Light-Emitting Diode
OmniCam: Omni-directional camera by Fraunhofer HHI
OpenCV: Open source Computer Vision library
OpenEXR: A high dynamic range (HDR) image file format
OPSI: Optimized Phantom Source Imaging
OSR: On-Site Rendering
OTT: Over-The-Top
OVP: Online Video Platform
OWL: Web Ontology Language
P2P: Peer to Peer
PC: Personal Computer
PCI: Peripheral Component Interconnect (standard computer interface)
PDP: Plasma Display Panel
PiP: Picture-in-Picture
PSE: Production Scripting Engine
PSIRP: Publish-Subscribe Internet Routing Paradigm
PSNR: Peak Signal-to-Noise Ratio
PTS: Presentation Time Stamps
PTZ: Pan-Tilt-Zoom
pub/sub: Publish/subscribe
PVR: Personal Video Recorder
QoE: Quality of Experience
QoS: Quality of Service
RADAR: Radio Detection and Ranging
RAID: Redundant Array of Independent Disks
RANSAC: Random Sample Consensus
RF: Random Forest
RGB: Red-Green-Blue colour space
RGBE: RGB with a one byte shared exponent
RO: Replay Operator
ROI: Region-of-Interest
RSS: Rich Site Summary
RTP: Real-time Transport Protocol
RUBENS: Rethinking the Use of Broadband access for Experience-optimized Networks and Services
SAOC: Spatial Audio Object Coding
SD: Standard Definition
SHD: Super High-Definition
sid: Spatial Identifier
SIFT: Scale-Invariant Feature Transform
SLA: Service-Level Agreement
SMIL: Synchronised Multimedia Integration Language
SMPTE: Society of Motion Picture and Television Engineers
SN: Scripting Node
SNR: Signal to Noise Ratio
SpatDIF: Spatial sound Description Interchange Format
SQL: Structured Query Language
STB: Set-Top Box
SVC: Scalable Video Coding
SVM: Support Vector Machine
SXGA: Super eXtended Graphics Adapter, referring to resolution of 1280×1024 pixels
SXGA+: SXGA at resolution of 1400×1050 pixels
TCP: Transmission Control Protocol
TDOA: Time Difference Of Arrival
TIFF: Tagged Image File Format
TOF: Time Of Flight
TRECVID: TREC (Text Retrieval Conference) Video Track
TV: Television
UCN: User Control Node
UDP: User Datagram Protocol
UHD: Ultra High Definition
UHDTV: Ultra High Definition TV
UI: User Interface
UPnP: Universal Plug and Play
USB: Universal Serial Bus
VBAP: Vector Based Amplitude Panning
VBR: Video Based Rendering
VDSL: Very High Speed Digital Subscriber Line
VFX: Visual Effects
VM: Vision Mixer
VOD: Video On Demand
VRML: Virtual Reality Modelling Language
VRN: Video Rendering Node
VTR: Video Tape Recorder
VVO: Virtual View Operator
WF: Wave Field
WFS: Wave Field Synthesis
XML: Extensible Markup Language
XPath: XML Path Language
xTV: Explorative TV
YUV: Luminance and chrominance colour space
1
Introduction
Oliver Schreer1, Jean-François Macq2, Omar Aziz Niamut3, Javier Ruiz-Hidalgo4, Ben Shirley5, Georg Thallinger6 and Graham Thomas7
1Fraunhofer Heinrich Hertz Institute, Berlin, Germany
2Alcatel-Lucent Bell Labs, Antwerp, Belgium
3TNO, Delft, The Netherlands
4Universitat Politècnica de Catalunya (UPC), Barcelona, Spain
5University of Salford, Manchester, United Kingdom
6Joanneum Research, Graz, Austria
7BBC Research & Development, London, United Kingdom
The consumption of audio-visual media has changed rapidly in the past decade. Content is now viewed on a variety of screens ranging from cinema to mobile devices. Even on mobile devices, today's user expects to be able to watch a personal view of a live event, for example, with a level of interactivity similar to that of typical web applications. On the other hand, current video and media production technology has not kept up with these significant changes. If we consider the complete media processing chain, the production of media, the delivery of audio-visual information via different kinds of distribution channels and the display and interaction at the end user's terminal, many challenges have to be addressed. The major challenges are the following.
Due to reuse of video content for different distribution channels, there is a need for conversion and post-production of the content in order to cope with different screen sizes. It is widely accepted that a movie production for cinema is recorded in a significantly different way to that intended for smaller screens. However, production budgets are limited; hence complex and costly re-purposing must be avoided. A good example is the production of 3D movies, where the aim is to develop camera technologies that allow 2D and 3D capture at the same time. Approaches to multiformat production that require parallel shooting or significant manual re-editing are no longer financially viable.
The convergence of broadcast and Internet requires future media production approaches to embrace the changes brought by web-based media. Habits of media consumption have changed drastically, partly due to the availability of user interaction, with users freely navigating web pages and interactively exploring maps and street views, for example. Hence, future media production and delivery must support interactivity.
Although the overall bandwidth available for media delivery is continuing to increase, future media services will still face limitations, particularly for the end user at home or on the go. Hence, new distribution formats are required to allow for the provision of audio-visual media beyond current HDTV formats, to support interactivity by the end user, and to support intelligent proxies in the network that are capable of performing processing that cannot be carried out by low-capacity devices. The first developments towards resolutions beyond HD are already appearing commercially, such as 4K camera and display technologies.
In addition, users want to decide when, where and on which device to watch audio-visual media, as a wide variety of devices is now available, including mobile phones, TVs at home and immersive large-scale projection systems in cinemas. All of these devices must be supported by media delivery and rendering. Therefore, a large variety of audio-visual formats must be provided for the full spectrum of terminals and devices, taking their particular capabilities and limitations into account.
Even for live events, many human operators, such as directors and camera operators, are involved in creating content and capturing the event from different viewpoints. Given the increasing number of productions, automated viewpoint selection may be able to make a significant contribution to limiting production costs.
A new concept appearing on the horizon that could provide answers to these issues and challenges is referred to as format-agnostic media production. The basic idea is to define a new approach to media production that supports the necessary flexibility across the whole production, delivery and rendering chain. A key aspect of this approach is to acquire a representation of the whole audio-visual scene at a much higher fidelity than traditional production systems, and to shift closer to the user-end the decision of how the content is experienced. This idea allows end users to experience new forms of immersive and interactive media by giving them access to audio-visual content with the highest fidelity and flexibility possible. This book discusses current challenges, trends and developments along the whole chain of technologies supporting the format-agnostic approach. This approach could lead to a gradual evolution of today's media production, delivery and consumption patterns towards fully interactive and immersive media.
In Chapter 2, “State-of-the-Art and Challenges in Media Production, Broadcast and Delivery”, we give an overview of the current situation in audio-visual acquisition, coding and delivery, and of the evolution of terminal devices at the end-user side. Based on this review of the state-of-the-art and a summary of current and upcoming challenges, the format-agnostic concept is explained. This concept offers the capability to deal successfully with the new requirements of current and future media production.
The acquisition and processing of audio-visual media following a format-agnostic approach is discussed in two separate chapters, Chapter 3 and Chapter 4. In Chapter 3 “Video Acquisition”, the three major video format parameters, spatial resolution, temporal resolution and colour depth (i.e., the dynamic range) are investigated with respect to the benefits they offer for future immersive media production. Due to the large variety of future video formats moving towards higher resolution, frame rate and dynamic range, the need for a format-agnostic concept is particularly helpful in supporting media production and rendering independent of the specific format. The composition and merging of visual information from different sensors will lead to more appealing and higher quality images. In Chapter 4 “Platform-Independent Audio”, the current challenges faced in audio broadcast using a channel-based approach and sound scene reproduction techniques such as wave field synthesis are reviewed. The problem of having many competing audio formats is addressed at both the production and reproduction (user) ends. The concept of object-based audio representation is introduced and several example implementations are presented in order to demonstrate how this can be realised.
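The object-based audio representation mentioned above stores each sound source with position metadata and leaves the computation of loudspeaker gains to the reproduction end. As a deliberately simplified illustration of that idea (not the book's method, and far simpler than wave field synthesis), constant-power panning derives stereo channel gains from an object's pan position:

```python
import math

def stereo_gains(pan: float) -> tuple[float, float]:
    """Constant-power stereo gains for an audio object's pan position.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Returns (left_gain, right_gain) with left**2 + right**2 == 1,
    so perceived loudness stays constant as the object moves.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# An object placed centre-stage feeds both channels equally (~0.707 each):
left, right = stereo_gains(0.0)
```

Because the squared gains always sum to one, the object keeps constant perceived loudness as it moves across the stereo stage; real object-based renderers generalise the same principle to arbitrary loudspeaker layouts, for instance via VBAP.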
In Chapter 5 “Semi-automatic Content Annotation”, both manual and automatic content annotation technologies that support format-agnostic media production are discussed. The specific requirements on those tools, in particular under real-time constraints of live scenarios are investigated. Relevant video processing approaches such as detection and tracking of persons as well as action detection are presented. Finally, user interfaces in media production are discussed, which help the production team to perform semi-automatic content annotation.
One of the advanced concepts of media production currently under discussion and development is presented in Chapter 6 “Virtual Director”. This concept builds on various audio-visual processing techniques that allow for automatic shot framing and selection to be used at the production side or by the end user. Approaches are discussed for addressing the semantic gap between data from low-level content analysis and higher-level concepts – a process called Semantic Lifting, finally leading to content and view selection that fulfils the desires of the user.
Chapter 7, “Scalable Delivery of Navigable and Ultra-High Resolution Video”, deals with the main challenges in delivering a format-agnostic representation of media. As the final decision on how content will be presented moves closer to the end user, two factors have a significant impact on delivery: higher data rates at the production side and higher levels of interactivity at the end-user side. The chapter focuses on coding and delivery techniques that support spatial navigation based on the capture of higher-resolution content at the production side. Methods for content representation and coding optimisation are discussed in detail. Finally, architectures for adaptive delivery are presented, showing how ultra-high resolution video can be efficiently distributed to interactive end users.
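The spatial navigation described above reduces to a simple idea: the panorama is coded as a grid of independently decodable tiles, and a client fetches only the tiles overlapping its current viewport. A minimal sketch of that selection step, assuming a regular tile grid (the grid and viewport dimensions below are illustrative, not taken from the book):

```python
def tiles_for_viewport(vp, tile_w, tile_h, cols, rows):
    """Grid indices (col, row) of every tile a viewport overlaps.

    vp: (x, y, width, height) of the requested view in panorama pixels.
    The client would fetch only these tiles instead of the full panorama.
    """
    x, y, w, h = vp
    c0 = max(0, x // tile_w)
    c1 = min(cols - 1, (x + w - 1) // tile_w)
    r0 = max(0, y // tile_h)
    r1 = min(rows - 1, (y + h - 1) // tile_h)
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

# A 6984x1920 panorama split into 640x480 tiles gives an 11x4 grid;
# an HD-sized viewport at (1000, 500) overlaps only 6 of the 44 tiles:
needed = tiles_for_viewport((1000, 500, 1280, 720), 640, 480, 11, 4)
```

In this example the delivered bitrate scales with the size of the requested view rather than with the full panorama, which is the key efficiency argument for tile-based representations.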
Chapter 8, “Interactive Rendering”, starts with the challenges for end-user devices that arise from the increased content interaction supported by the format-agnostic media production and delivery concept. Gesture-based interaction is one of the recent trends in interactive access to media, and this is discussed in detail. A number of technologies already on the market and currently under development are presented. The chapter concludes with user studies of gesture interfaces, showing that technology development must go hand in hand with continuous evaluation in order to meet user requirements.
Finally, Chapter 9 “Application Scenarios and Deployment Domains” discusses the format-agnostic concept from an application point of view. Based on the technologies described in the previous chapters, various application scenarios are derived. An analysis is presented of the impact of the format-agnostic concept and related new technologies in the production, network, device and end user domains. Based on this future outlook, this chapter concludes the book.
This book offers a comprehensive overview of current trends, developments and future directions in media production, delivery and rendering. The format-agnostic concept can be considered a paradigm shift in media production, moving the focus from image to scene representation, and from professionally produced programmes to interactive live composition driven by the end user. It will influence how media is produced, delivered and presented, leading to more efficient, economical and user-friendly ways of producing, delivering and consuming media. Offering new services, improving access to content and putting the user in control are the main aims.
The idea for this book was born in the European FP7 research project FascinatE (grant agreement no. FP7 248138, http://www.fascinate-project.eu), which proposed and investigated the format-agnostic concept for the first time. Besides the editors and the co-authors who contributed to this book, several other colleagues deserve mention. Without their expertise, their ideas and the fruitful discussions over more than five years, this book would not have been possible. We therefore gratefully thank the following colleagues from several institutions and companies in Europe: R. Schäfer, P. Kauff, Ch. Weissig, A. Finn, N. Atzpadin and W. Waizenegger (Fraunhofer Heinrich Hertz Institute, Berlin, Germany); G. Kienast, F. Lee, M. Thaler and W. Weiss (Joanneum Research, Graz, Austria); U. Riemann (Deutsche Thomson OHG, Hannover, Germany); A. Gibb and H. Fraser (BBC R&D, London, United Kingdom); I. Van de Voorde, E. Six, P. Justen, F. Vandeputte, S. Custers and V. Namboodiri (Alcatel-Lucent Bell Labs, Antwerp, Belgium); J.R. Casas, F. Marqués and X. Suau (Universitat Politècnica de Catalunya, Barcelona, Spain); O. Juhlin, L. Barkhuus and E. Önnevall (Interactive Institute, Stockholm, Sweden); I. Drumm (University of Salford, Manchester, United Kingdom); and F. Klok, S. Limonard, T. Bachet, A. Veenhuizen and E. Thomas (TNO, Delft, The Netherlands).
The editorial team, August 2013
2
State-of-the-Art and Challenges in Media Production, Broadcast and Delivery
Graham Thomas1, Arvid Engström2, Jean-François Macq3, Omar Aziz Niamut4, Ben Shirley5 and Richard Salmon1
1BBC Research & Development, London, UK
2Interactive Institute, Stockholm, Sweden
3Alcatel-Lucent Bell Labs, Antwerp, Belgium
4TNO, Delft, The Netherlands
5University of Salford, Manchester, UK
2.1 Introduction
To place the current technological state of media production and delivery in perspective, this chapter starts by looking at some of the key milestones in the development of the world of broadcasting, taking the BBC as an example. The BBC started its first radio broadcasts in 1922, over 25 years after Marconi first demonstrated pulsed outdoor radio transmission in 1895. It was the first broadcaster in the world to provide a regular 405-line ‘high definition’ television service, starting in November 1936. The BBC launched a 625-line colour service in June 1967, although the 405-line monochrome TV service continued until January 1985. Teletext services started to appear in the 1970s, with the BBC's Ceefax service launching in September 1974. The BBC started digital widescreen (16:9) broadcasting terrestrially and by satellite in 1998, including digital text services based on MHEG (ISO, 1997), and turned off the last analogue 625-line transmissions (and thus also Ceefax) in October 2012. The UK was by no means the first country to make the switch to fully digital TV, with the Netherlands completing the transition as early as 2006, although some other countries do not plan to switch until 2020 or beyond. Experiments in high definition television (HDTV) were underway in the early 1980s (Wood, 2007), although the BBC's first digital HDTV (now meaning 1,080 lines rather than 405!) did not start as a full service until December 2007. This was about the same time that the iPlayer online catch-up service was launched. At first, it was available only through a web browser, but it was rapidly developed to support a wider range of devices at different bitrates and resolutions, and is currently available on media hubs, game consoles, smartphones, portable media players, smart TVs and tablets.
Continue reading in the full edition!
