Eyestrain Reduction in Stereoscopy - Laure Leroy - E-Book

Description

Stereoscopic processes are increasingly used in virtual reality and entertainment. This technology is interesting because it allows for rapid immersion of the user, especially in terms of depth perception and relief cues. However, these processes tend to strain the visual system when used over a prolonged period, prompting questions about the side effects these systems generate in their users, such as eye fatigue. This book explores the mechanisms of depth perception with and without stereoscopy and discusses the cues involved in depth perception. The author describes the techniques used to capture and retransmit stereoscopic images. The causes of eyestrain related to these images are then presented, along with their short- and long-term consequences. The study of these causes forms the basis for improving stereoscopic processes, in the hope of developing mechanisms for easier virtual viewing.

Pages: 192

Publication year: 2016




Table of Contents

Cover

Title

Copyright

Acknowledgments

Introduction

1 Principles of Depth and Shape Perception

1.1. Function of the eye

1.2. Depth perception without stereoscopy

1.3. Depth perception through stereoscopic vision

1.4. Perception of inclinations and curves

1.5. Artificial stereoscopic vision

2 Technological Elements

2.1. Taking a picture

2.2. Reproduction

2.3. Motion parallax restitution

3 Causes of Visual Fatigue in Stereoscopic Vision

3.1. Conflict between accommodation and convergence

3.2. Too much depth

3.3. High spatial frequencies

3.4. High temporal frequency

3.5. Conflicts with monoscopic cues

3.6. Vertical disparities

3.7. Improper device settings

4 Short- and Long-term Consequences

4.1. Short-term effects

4.2. Long-term consequences

5 Measuring Visual Fatigue

5.1. Visual acuity

5.2. Proximum accommodation function

5.3. Ease of accommodation

5.4. Stereoscopic acuity

5.5. Disassociated heterophorias

5.6. Fusional reserves

5.7. Subjective tests

6 Reducing Spatial Frequencies

6.1. Principle

6.2. Technical solution

6.3. Experiment

6.4. Measurements of fatigue taken

6.5. Result

7 Reducing the Distance Between the Virtual Cameras

7.1. Principle

7.2. Experiment

7.3. Results

7.4. Discussion

Conclusion

Bibliography

Index

End User License Agreement

List of Tables

6 Reducing Spatial Frequencies

Table 6.1. Relative time for image treatment using wavelet transform and BOX FILTER

Table 6.2. Number of images per second for a BOX FILTER decomposition and recomposition of an image as a function of its size

Table 6.3. Correspondence between the number of periods of remaining oscillations and the number of pixels for the rolling average

Table 6.4. Comparison of the three algorithms studied

List of Illustrations

1 Principles of Depth and Shape Perception

Figure 1.1. Diagram of eye function

Figure 1.2. Light and shadows completely change depth perception

Figure 1.3. Interposition between a rectangle and an ellipse. The brain interprets the rectangle as being behind the ellipse

Figure 1.4. Relative size (as well as height) gives the impression that the smallest flower is also the furthest away

Figure 1.5. The further away an object, the less clear its texture

Figure 1.6. Oblique perspective

Figure 1.7. Linear perspective

Figure 1.8. During movement, the retinal image of a near object moves further over the retina than the retinal image of a faraway object

Figure 1.9. Principle of accommodation: the curvature of the lens changes to move the point of focus

Figure 1.10. Convergence of the two optical axes toward the point of focus

Figure 1.11. When we focus on the green apple, the projections of the image of the red apple (which is nearer) form a more obtuse angle than the projections of the image of the green apple. For a color version of this figure, see www.iste.co.uk/leroy/stereoscopy.zip

Figure 1.12. Julesz’s random dot stereogram [JUL 71]

Figure 1.13. An oblique plane produces a horizontal gradient of horizontal disparities

Figure 1.14. An inclined plane produces a vertical gradient of horizontal disparities

Figure 1.15. The gradient of the texture encourages us to perceive an inclined plane

Figure 1.16. The gradient of the texture encourages us to perceive an oblique plane

Figure 1.17. The optimal frequency of variation of disparities depends on the eccentricity (seen from above)

Figure 1.18. The cylinder is seen as deformed depending on its distance from the observer (seen from above)

Figure 1.19. Variation in perception of shape over distance, after the data in [JOH 91]

Figure 1.20. Certain complex surfaces can be reproduced in a realistic fashion with the help of gradients of texture

Figure 1.21. Texture of a curved surface and the translation of its variation in the frequency domain

Figure 1.22. Textures created with either a variation in peak frequencies (left) or a variation in average frequencies (right) [SAK 95]

Figure 1.23. Shadow on a sphere

Figure 1.24. Orientation of the surfaces of the skin with regard to shadows [KOE 96]

Figure 1.25. Stereoscopic projection of an image behind the screen

Figure 1.26. Stereoscopic projection of an object in front of the screen

2 Technological Elements

Figure 2.1. Creating stereoscopic images requires two lenses aiming at the same scene

Figure 2.2. Distances between the viewfinders may range from a few millimeters to more than 100 m [ASS 06]

Figure 2.3. A camera with a fixed distance between the two viewfinders

Figure 2.4. The well-known anaglyph glasses

Figure 2.5. Constructing an anaglyphic image

Figure 2.6. Light is composed of waves of various orientations

Figure 2.7. Horizontal (left) and vertical (right) polarization

Figure 2.8. Two identical filters will not change the initial polarization, while two opposed filters will block all waves

Figure 2.9. Use of linear polarization in stereoscopy

Figure 2.10. Circular polarization

Figure 2.11. Active glasses hide one eye, then the other, in synchronization with the projectors

Figure 2.12. Parallax-barrier filter

Figure 2.13. Auto-stereoscopy by means of lenticular networks

Figure 2.14. Offsetting the point of accommodation

Figure 2.15. Principle of a virtual reality headset

Figure 2.16. Pseudoscopic movements caused by horizontal displacement

Figure 2.17. Pseudoscopic movements caused by a displacement in distance

Figure 2.18. Correction of pseudoscopic movements for a horizontal displacement

Figure 2.19. Functioning principle of a magnetic tracker

Figure 2.20. ARtoolKit, a camera tracking system, often used in augmented-reality applications and which can be used in head tracking

Figure 2.21. Detecting head position by infrared with target

Figure 2.22. Kinect 1 and 2 and the outlines which they capture

Figure 2.23. Monoscopic motion parallax

3 Causes of Visual Fatigue in Stereoscopic Vision

Figure 3.1. Accommodation and convergence in natural vision

Figure 3.2. Accommodation and convergence in artificial vision are not necessarily equivalent

Figure 3.3. Acceptable link between distances of accommodation and convergence

Figure 3.4. In stereoscopic vision, our accommodation is fixed on the screen, and so we must adjust convergence so that it remains in the acceptable zone

Figure 3.5. To limit fatigue, make sure horizontal disparity is not too pronounced

Figure 3.6. Horizontal disparity will be much greater and thus more tiring in the front row of the cinema than in the back row

Figure 3.7. Two filled squares with very high and very low spatial frequencies

Figure 3.8. Spatial frequency is the number of cycles of luminance per degree of vision. Here, we have a spatial frequency of 3 cycles per degree (3 cpd)

Figure 3.9. Panum’s area extends on both sides of the horopter

Figure 3.10. Relationship between spatial frequencies and Panum’s area, after data from [SCH 86]

Figure 3.11. Comfort function [PER 98]

Figure 3.12. If the object is stereoscopically located in front of the screen and if it is not cut off by an edge, there is no conflict

Figure 3.13. If the object is stereoscopically located in front of the screen but cut off by an edge, then there is a conflict between cues

Figure 3.14. Calculating a vertical disparity

Figure 3.15. Vectorial representation of disparities in a fronto-parallel plane

Figure 3.16. Image facilitating adjustment of the contrast of a screen or projector [HTT 92]

4 Short- and Long-term Consequences

Figure 4.1. Ease of accommodation. For a color version of this figure, see www.iste.co.uk/leroy/stereoscopy.zip

Figure 4.2. Stereoscopic acuity is the minimum amount of depth perceived by a person, measured as an angle

Figure 4.3. Punctum proximum

5 Measuring Visual Fatigue

Figure 5.1. Landolt rings

Figure 5.2. Letter and number optotypes

Figure 5.3. Punctum proximum

Figure 5.4. Flipper lens test

Figure 5.5. Stereoscopic acuity is the minimum amount of depth perceived by a person, measured as an angle

Figure 5.6. The Zeiss Polatest [ZEI 09]

Figure 5.7. Projection of a line in the Polatest

Figure 5.8. Fly test

Figure 5.9. Wirt points

Figure 5.10. Lang test [MC2 16]

Figure 5.11. Randot test [PRE 09]

Figure 5.12. TNO test [MC2 16]

Figure 5.13. Different heterophorias

Figure 5.14. Point of fusion

Figure 5.15. Point of rupture of fusion

6 Reducing Spatial Frequencies

Figure 6.1. Even–odd decomposition

Figure 6.2. End of the FFT

Figure 6.3. Time–frequency plane on both temporal and Fourier bases

Figure 6.4. Time–frequency plane for sliding window Fourier transform

Figure 6.5. Time–frequency plane for wavelets

Figure 6.6. Decomposition of a 1D signal into continuous wavelets

Figure 6.7. Figure to be decomposed

Figure 6.8. Wavelet decomposition of an image

Figure 6.9. Decomposition of a 1D signal by a Mallat algorithm

Figure 6.10. The Mallat algorithm pyramid

Figure 6.11. Mallat transform for a 1D signal

Figure 6.12. Mallat reconstruction algorithm for one dimension

Figure 6.13. Wavelet decomposition of an image

Figure 6.14. Fast wavelet transformation of an image

Figure 6.15. Theoretical comparison between the number of operations for an FFT product and for a convolution. For a color version of this figure, see www.iste.co.uk/leroy/stereoscopy.zip

Figure 6.16. Decomposition and recomposition time for the convolution. For a color version of this figure, see www.iste.co.uk/leroy/stereoscopy.zip

Figure 6.17. Decomposition and recomposition time with the Fourier transform. For a color version of this figure, see www.iste.co.uk/leroy/stereoscopy.zip

Figure 6.18. Example pixel values for an original image, for BOX FILTER calculations

Figure 6.19. Example integral image of BOX FILTER calculations

Figure 6.20. Representation of the points to be summed for BOX FILTER. For a color version of this figure, see www.iste.co.uk/leroy/stereoscopy.zip

Figure 6.21. Representation of the summation process in BOX FILTER 1. For a color version of this figure, see www.iste.co.uk/leroy/stereoscopy.zip

Figure 6.22. Representation of the summation process in BOX FILTER 2

Figure 6.23. Explanation of the averages in the BOX FILTER example

Figure 6.24. High frequencies in the BOX FILTER calculation example

Figure 6.25. Decomposition into high and low frequencies with BOX FILTER on a 5-pixel square basis of calculation

Figure 6.26. Decomposition into high and low frequencies with BOX FILTER on an 8-pixel square basis of calculation

Figure 6.27. Starting matrix for the difference in calculating averages

Figure 6.28. Calculating averages over squares of pixels

Figure 6.29. Calculating rolling averages

Figure 6.30. Rolling averages over 2-pixel-per-side squares

Figure 6.31. Rolling quadratic average carried out on a 4-pixel-per-side base of calculation

Figure 6.32. Rolling Gaussian average calculated on a 5-pixel-per-side base

Figure 6.33. Example of calculation of Haar wavelet transform

Figure 6.34. Decomposition with an average over squares of 2 pixels per side

Figure 6.35. Decomposition with an average over squares of 4 pixels per side

Figure 6.36. Complexity of BOX FILTER and averages in relation to size of calculation square for a 1,024 × 1,024 image. For a color version of this figure, see www.iste.co.uk/leroy/stereoscopy.zip

Figure 6.37. Complexity for rolling BOX FILTER and rolling averages as a function of the size of the calculation square for a 1,024 × 1,024 image. For a color version of this figure, see www.iste.co.uk/leroy/stereoscopy.zip

Figure 6.38. Putting a threshold on the Fourier transform of the initial image

Figure 6.39. Thresholding of the Fourier transform of the rolling average over 2-pixel-per-side squares

Figure 6.40. Thresholding of the Fourier decomposition for rolling averages over 4, 8, 16 and 32 pixels (left-to-right and top-to-bottom)

Figure 6.41. Calculation of horizontal parallax

Figure 6.42. Calculation of number of pixels per degree of visual angle

Figure 6.43. The virtual world. For a color version of this figure, see www.iste.co.uk/leroy/stereoscopy.zip

Figure 6.44. Virtual world without textures, seen front on and from the side

Figure 6.45. Subject performing the non-blurred task. For a color version of this figure, see www.iste.co.uk/leroy/stereoscopy.zip

Figure 6.46. Change in the proximum accommodation function

Figure 6.47. Change in accommodative facility

Figure 6.48. Change in stereoscopic acuity

Figure 6.49. Effectiveness of the task with respect to cylinders involved

Figure 6.50. Which case was the more aesthetically pleasing?

Figure 6.51. In which case was the task more tiring for your eyes?

Figure 6.52. In which case was the task easier?

Figure 6.53. Initial image

Figure 6.54. Treating object by object

Figure 6.55. Treating the whole image

7 Reducing the Distance Between the Virtual Cameras

Figure 7.1. Diagram of real equipment

Figure 7.2. Real sphere

Figure 7.3. Virtual sphere

Figure 7.4. Real unknown shape close-up

Figure 7.5. Complete real set-up, distance view

Figure 7.6. Virtual unknown shape

Figure 7.7. Positioning of virtual and real objects and of the observer when the virtual object moves

Figure 7.8. Screen capture during the test: requesting confirmation of a response

Figure 7.9. Influence of stereoscopic disparities or motion parallax on the cumulative perception of the sphere and the random form

Figure 7.10. Average time taken for the task

Figure 7.11. Influence of stereoscopy on perception of curves when motion parallax is present

Figure 7.12. Difference between points of rupture before and after experiment

Figure 7.13. Difference in stereoscopic acuity before and after experiment

Figure 7.14. Difference between the punctum proximum of accommodation before and after experiment

Figure 7.15. Difference between ease of accommodation before and after experiment

Figure 7.16. Accuracy and precision in the positioning task for four conditions: monoscopic vision, stereoscopic vision, stereoscopic vision at the beginning but not at the end, and stereoscopic vision at the end of the experiment

Figure 7.17. Accuracy and precision in the depth perception task for four conditions: monoscopic vision, stereoscopic vision, stereoscopic vision at the beginning and not at the end, and stereoscopic vision at the end of the experiment

Figure 7.18. Difference in response time between four conditions: monoscopic vision, stereoscopic vision, stereoscopic vision at the beginning and not at the end, and stereoscopic vision at the end of the experiment

Figure 7.19. Accuracy and precision in the curve perception task for four conditions: monoscopic vision, stereoscopic vision, stereoscopic vision at the beginning and not at the end, and stereoscopic vision at the end of the experiment

Figure 7.20. Difference in time for the curve perception task for four conditions: monoscopic vision, stereoscopic vision, stereoscopic vision at the beginning and not at the end, and stereoscopic vision at the end of the experiment

Figure 7.21. Accuracy and precision in the collision detection task for four conditions: monoscopic vision, stereoscopic vision, stereoscopic vision at the beginning and not at the end, and stereoscopic vision at the end of the experiment

Figure 7.22. Difference in time for the collision detection task for four conditions: monoscopic vision, stereoscopic vision, stereoscopic vision at the beginning and not at the end, and stereoscopic vision at the end of the experiment


FOCUS SERIES

Series Editor Imad Saleh

Eyestrain Reduction in Stereoscopy

Laure Leroy

First published 2016 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George's Road, London SW19 4EU, UK

www.iste.co.uk

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

www.wiley.com

© ISTE Ltd 2016

The rights of Laure Leroy to be identified as the author of this work have been asserted by her in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2016939644

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-84821-998-4

Acknowledgments

I would like to thank the following:

– Philippe Fuchs, who introduced me to stereoscopic science and to research in general. He was my thesis supervisor and I acknowledge my enormous debt to him. His desire to go to the limit of things and to understand everything was an example for me;

– Imad Saleh, my laboratory director, for having allowed me to write this book and for his support;

– Ghislaine Azémard, my team leader from whom I still learn many things day after day, for her kindness and continued support;

– Ari Bouaniche, who did her internship with me, for her high-quality work on intermittent stereoscopy;

– David Aura, who did his thesis with me, for his work on spatial frequencies related to perspective;

– Indira Thouvenin and Safwan Chendeb for their countless pieces of advice, their everyday support and our long discussions;

– Jean-Louis Vercher and Pascaline Neveu for all the passionate discussions on the human neurological function and visual system;

– Bruno Leroy, Claire Desbant, Anaïs Juchereaux, Xavier Pagano, Julia Philippe and Patricia Azame for their astute proofreading and their advice;

– Matthew, my partner, who has supported me for long years and sustains me day after day;

– my friends, my family and my family-in-law for having always been there for me, even in the most difficult moments.

Introduction

Devices offering stereoscopic vision are becoming more and more common in everyday life. We are offered films to watch with depth perception – the famous “3D Cinema”; games consoles now include small three-dimensional (3D) screens; the video game industry assures us that virtual reality headsets will be all the rage tomorrow; the first smartphones with 3D screens have begun to appear; and so on. Even if 3D television screens are showing a decline in sales, 3D vision, or stereoscopic vision, is slowly becoming part of our everyday lives.

On the other hand, some professionals have long been using stereoscopic vision for extended periods of time. For example, reviews of virtual car prototypes are carried out in immersive rooms with 3D vision, some training methods are performed in stereoscopy, and scientists observe the molecules they create immersively and in 3D. For all these people, 3D vision is an important element of their professional life.

Despite this enthusiasm, more and more people report headaches after coming out of a 3D film, deactivate the 3D feature on their console or leave their stereoscopic TV screen unused. Some professionals reduce the use of 3D in their applications from time to time to rest their eyes. All these signs show that there are questions to answer about these techniques.

This book does not intend to explain how and why we should ban artificial stereoscopy from our lives, nor, on the contrary, to affirm that stereoscopy is not at all tiring for the eyes and that this miracle of technology has no side effects. It intends to explain why stereoscopy can be tiring, and to offer content creators some paths for reducing visual fatigue among users, without insisting that technological advances will be able to resolve all the physiological problems linked to 3D technology.

Chapter 1 will explain the main principles of 3D vision in general and of stereoscopic vision in particular. In fact, we will see that stereoscopy cannot be studied on its own, outside the context given by all the other depth cues: our visual system uses all the information at its disposal, and problems begin to appear when conflicts arise between pieces of information.

Chapter 2 discusses the elements of technology currently used to achieve artificial stereoscopy. It will allow us to familiarize ourselves with the technological terms and to understand the ins and outs of each technology.

Chapter 3 will explain the known causes of visual fatigue in stereoscopy and describe the current research in this area. It is important to differentiate between the causes of fatigue in order to know which ones we can influence and which require an in-depth revision of the content.

Chapter 4 briefly explains the consequences of long and sometimes uncontrolled stereoscopic viewing. Unfortunately, we do not yet have sufficient hindsight to understand the long-term effects, but some short-term effects have already been measured.

Chapter 5