Remote Sensing Imagery
Description

Dedicated to remote sensing images, from their acquisition to their use in various applications, this book covers the global lifecycle of images, including sensors and acquisition systems, applications such as movement monitoring or data assimilation, and image and data processing. It is organized in three main parts. The first part presents technological information about remote sensing (choice of satellite orbit and sensors) and elements of physics related to sensing (optics and microwave propagation). The second part presents image processing algorithms and their specificities for radar or optical, multi- and hyper-spectral images. The final part is devoted to applications: change detection and analysis of time series, elevation measurement, displacement measurement and data assimilation. Offering a comprehensive survey of the domain of remote sensing imagery with a multi-disciplinary approach, this book is suitable for graduate students and engineers, with backgrounds either in computer science and applied math (signal and image processing) or geophysics.

About the Authors

Florence Tupin is Professor at Telecom ParisTech, France. Her research interests include remote sensing imagery, image analysis and interpretation, three-dimensional reconstruction, and synthetic aperture radar, especially for urban remote sensing applications.

Jordi Inglada works at the Centre National d'Études Spatiales (French Space Agency), Toulouse, France, in the field of remote sensing image processing at the CESBIO laboratory. He is in charge of the development of image processing algorithms for the operational exploitation of Earth observation images, mainly in the field of multi-temporal image analysis for land use and cover change.

Jean-Marie Nicolas is Professor at Telecom ParisTech in the Signal and Imaging department. His research interests include the modeling and processing of synthetic aperture radar images.




Table of Contents

Preface

Part 1. Systems, Sensors and Acquisitions

Chapter 1. Systems and Constraints

1.1. Satellite systems

1.2. Kepler’s and Newton’s laws

1.3. The quasi-circular orbits of remote sensing satellites

1.4. Image acquisition and sensors

1.5. Spectral resolution

Chapter 2. Image Geometry and Registration

2.1. The digital image and its sampling

2.2. Sensor agility and incidence angle

2.3. Georeferencing of remote sensing images

2.4. Image registration

2.5. Conclusion

Chapter 3. The Physics of Optical Remote Sensing

3.1. Radiometry

3.2. Geometric etendue, sensitivity of an instrument

3.3. Atmospheric effects

3.4. Spectral properties of the surfaces

3.5. Directional properties of the surfaces

3.6. Practical aspects: products, atmospheric corrections, directional corrections

Chapter 4. The Physics of Radar Measurement

4.1. Propagation and polarization of electromagnetic waves

4.2. Radar signatures

4.3. The basics of radar measurement physics: interaction between waves and natural surfaces

4.4. Calibration of radar images

4.5. Radar polarimetry

Part 2. Physics and Data Processing

Chapter 5. Image Processing Techniques for Remote Sensing

5.1. Introduction

5.2. Image statistics

5.3. Preprocessing

5.4. Image segmentation

5.5. Information extraction

5.6. Classification

5.7. Dimensionality reduction

5.8. Information fusion

5.9. Conclusion

Chapter 6. Passive Optical Data Processing

6.1. Introduction

6.2. Pansharpening

6.3. Spectral indices and spatial indices

6.4. Products issued from passive optical images

6.5. Conclusion

Chapter 7. Models and Processing of Radar Signals

7.1. Speckle and statistics of radar imagery

7.2. Representation of polarimetric data

7.3. InSAR interferometry and differential interferometry (D-InSAR)

7.4. Processing of SAR data

7.5. Conclusion

Part 3. Applications: Measures, Extraction, Combination and Information Fusion

Chapter 8. Analysis of Multi-Temporal Series and Change Detection

8.1. Registration, calibration and change detection

8.2. Change detection based on two observations

8.3. Time series analysis

8.4. Conclusion

Chapter 9. Elevation Measurements

9.1. Optic stereovision

9.2. Radargrammetry

9.3. Interferometry

9.4. Radar tomography

9.5. Conclusion

Chapter 10. Displacement Measurements

10.1. Introduction

10.2. Extraction of displacement information

10.3. Combination of displacement measurements

10.4. Conclusion

Chapter 11. Data Assimilation for the Monitoring of Continental Surfaces

11.1. Introduction to data assimilation in land surface models

11.2. Basic concepts in data assimilation

11.3. Different approaches

11.4. Assimilation into land surface models

11.5. Data assimilation – in practice

11.6. Perspectives

Bibliography

List of Authors

Index

First published 2014 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George's Road, London SW19 4EU, UK

www.iste.co.uk

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

www.wiley.com

© ISTE Ltd 2014

The rights of Florence Tupin, Jordi Inglada and Jean-Marie Nicolas to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2013955375

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-84821-508-5

Preface

This book is aimed at students, engineers and researchers wishing to gain broad knowledge of remote sensing imagery. It is an introductory book that covers the entire chain from the acquisition of data to its processing, and from the physics and measuring principles to the resulting applications in the observation of planet Earth. We chose to present the different imaging modalities together, be they optical, hyper-spectral or radar, since users are increasingly acquainted with the joint use of these data. We have written this book for a readership acquainted with applied mathematics, signal processing or computer science, as taught in scientific curricula and in engineering schools. It should thus enable engineers and researchers in image and signal processing, as well as geophysicists and mathematicians working with the different applied fields of remote sensing (geography, agronomy, urban planning, etc.), to take advantage of the results. The book therefore does not specialize in the themes it covers, but the references given will enable curious readers to go further in other works.

 

This book was built as a coherent whole, giving a general view of remote sensing imagery, but the chapters can equally be read independently. The book is organized into three parts. Part 1 is dedicated to acquisition systems, satellite orbitography, sensors and the physics of measurement and observation, be it radar or optical. Part 2 presents the data and image processing tools, while emphasizing the specificity of remote sensing data; we therefore distinguish between optical and multi-spectral signals and radar data, which call for very different models. Part 3 is dedicated to the applications: change detection and the analysis of multi-temporal series, as well as elevation measurement, displacement measurement and data assimilation.

 

This book is a collective effort to which several authors have contributed, and we thank them for it. We also wish to thank CNES (the French National Center of Space Studies), which has been the source of several studies that were indispensable for this book, and which has made the book more interesting through the numerous illustrations it has made available. We would also like to thank Henri Maitre, who initiated this book and also brought it to completion.

Florence TUPIN
Jordi INGLADA
Jean-Marie NICOLAS

December 2013

Part 1

Systems, Sensors and Acquisitions

Chapter 1

Systems and Constraints

1.1. Satellite systems

A remote sensing satellite placed in orbit around the Earth is subject to gravitational forces that define its trajectory and motion. We will see that the orbit formalism dates back to Kepler (1609), and that the motion of satellites is modeled using Newton's laws. The Earth has specific properties, such as being flattened at the poles; these specificities introduce several changes to the Kepler model. Quite remarkably, as we will see, the consequences turn out to be extremely beneficial for remote sensing satellites, since they make heliosynchronous orbits possible; such orbits enable data to be acquired at the same local solar time, which in turn simplifies the comparison of the respective data acquisitions.

The objective of this chapter is to briefly analyze orbital characteristics in order to draw some conclusions regarding the characteristics of imaging systems that orbit the Earth. For more details, readers can refer to the work of Capderou [CAP 03].

1.2. Kepler’s and Newton’s laws

By studying the apparent motion of the planets around the Sun (and, in particular, that of Mars), Kepler proposed in 1609 (in a purely phenomenological manner) the following three laws describing the motion of planets around the Sun:

– The planets’ trajectory lies in a plane and is represented by an ellipse having the Sun as its focus.
– The area swept out by the segment joining the Sun and the planet during a given period of time is constant.
– The square of the revolution period is proportional to the cube of the length of the major axis of the ellipse.

In 1687, Newton demonstrated these laws with his model of universal attraction. This model stipulates that two point masses m and M exert on each other a force F, collinear with the line joining these two masses:

F = G m M / r² [1.1]

where G is the gravitational constant, r is the distance between the two masses and, for the Earth, we write μ = GM.

This force being the only one that can modify the motion of the satellite, we can therefore show that this motion verifies the following essential properties:

– The trajectory of the satellite lies in a plane, the “orbital plane”. The distance r verifies, using polar coordinates, the equation of an ellipse:

r = p / (1 + e cos θ) [1.2]

where e is the eccentricity and p the parameter of the ellipse.

We can easily deduce the relations:

[1.3]

– Since the attractive force is collinear with the position vector, and there is no other force, the angular momentum

r ∧ V [1.4]

is conserved, so that:

r² dθ/dt = C

where C is a constant that represents the law of equal areas, i.e. the second Kepler law.

– An ellipse can be characterized by its semimajor axis a defined by:

By applying the law of equal areas, we obtain the period T of the satellite:

T = 2π √(a³ / μ) [1.5]

which is the expression of the third Kepler law. The parameters of this period T are, therefore, only a – the semimajor axis – and μ (related to the Earth’s mass).

On an ellipse, the speed is not constant. We show that

V² = μ (2/r − 1/a) [1.6]

except when we have a perfectly circular trajectory, for which we have:

V = √(μ / a) [1.7]

From this, we may then deduce the following useful relation:

V_perigee / V_apogee = (1 + e) / (1 − e) [1.8]

which shows that the ratio of the speeds at the perigee and at the apogee depends only on the eccentricity, and therefore on the shape of the ellipse.

To conclude on the general aspects of orbits, we must emphasize the fact that these ellipses only need two parameters to be described accurately. We often choose a, the semimajor axis, and e, the eccentricity.

1.3. The quasi-circular orbits of remote sensing satellites

The satellite era started with the launch of the first satellite, Sputnik, in 1957. A number of civilian remote sensing satellites have since been placed in orbit around the Earth. These orbits, whose eccentricity is very low (e < 0.001), are quasi-circular and, therefore, described either by the semimajor axis a or by their altitude h, defined by the relation h = a − R_T, where R_T is the Earth's radius.

In the case of a perfectly circular orbit, the orbital period T is thus written as (equation [1.5]):

T = 2π √((R_T + h)³ / μ) [1.9]

and the speed V is written as (constant on a circular orbit, relation [1.7]): V = √(μ / (R_T + h)),

which gives the following table for different circular orbits:

It is important to remember the orders of magnitude: for a remote sensing satellite, the orbital period is of the order of 100 min, the speed is about 7.5 km/s and the number of orbits per day is around 15. More precisely, SPOT is at an altitude of 822 km: its period is 101.46 min, its speed is 7.424 km/s and it orbits 14.19 times a day.

– Finally, the speed of the satellite will not be constant. Using relation [1.8], we show that for SPOT, the speed varies between 7.416 km/s at the apogee and 7.432 km/s at the perigee: the deviation seems very small, but we must note that at the end of 1 s the difference in the trajectory is approximately 17 m, i.e. nearly 2 pixels.
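These orders of magnitude are easy to check. Below is a minimal sketch, assuming standard values for the Earth's gravitational parameter and equatorial radius and the 822 km altitude quoted above; the small differences with the SPOT figures come from orbital perturbations and rounding.

```python
# Back-of-the-envelope check of the orders of magnitude quoted above,
# using Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3/mu).
import math

MU = 398600.4418      # Earth's gravitational parameter mu = GM [km^3/s^2] (assumed standard value)
R_EARTH = 6378.137    # equatorial radius [km]

def circular_orbit(h_km):
    """Period [min], speed [km/s] and orbits per day for a circular orbit at altitude h_km."""
    a = R_EARTH + h_km                            # radius of the circular orbit
    period_s = 2 * math.pi * math.sqrt(a**3 / MU)
    speed = math.sqrt(MU / a)                     # relation [1.7] for a circular orbit
    return period_s / 60.0, speed, 86400.0 / period_s

T_min, V, n_per_day = circular_orbit(822.0)       # SPOT-like altitude
print(f"T = {T_min:.1f} min, V = {V:.2f} km/s, {n_per_day:.2f} orbits/day")
# -> roughly 101 min, 7.4 km/s and about 14.2 orbits per day
```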

1.3.1. The orbit in the terrestrial referential: the recurrence cycle

Under the hypothesis that the Earth can be modeled as a point mass, the trajectory of a remote sensing satellite lies in a plane and its orbit is closed and quasi-circular. The orbital plane has a well-defined orientation that we determine using a unit vector positioned at the focus (the Earth's center) and perpendicular to this plane; in this orbital plane, the period T of the satellite is determined by its altitude.

The Earth is also characterized by a daily rotation around an axis Oz (by definition, the north–south axis) defined by a unit vector. It is, therefore, in relation to this axis that we define i, the "tilt angle" of the orbital plane (also called the orbit inclination):

For a terrestrial observer, the problem becomes more complex because the Earth turns around its axis. In a first approximation, we may notice that the observer belongs to a non-Galilean referential that completes one rotation per day. The relative motion of the satellite is no longer reduced to a simple circular motion, but combines the proper motion of the satellite (rotation about the normal to the orbital plane) with the rotation of the Earth (rotation about the north–south axis).

The orbit of a satellite is usually described by its “track”, defined by the points on Earth that have been flown over by the satellite. If the Earth did not have its own rotation motion, the track would be the intersection between the orbital plane and the terrestrial sphere: we would, therefore, have a circle whose center would be the center of the Earth. However, because the Earth has its own rotation motion, the track has a specific appearance that results from the combination of the rotation motion of the satellite and that of the Earth. Figure 1.1 shows a SPOT orbit on Earth, which illustrates the equatorial lag at the end of a period.

The purpose of a remote sensing satellite is to acquire information from the surface of the terrestrial globe, if possible in its entirety. To do this, let us consider an orbital plane whose normal vector lies in the equatorial plane, i.e. is perpendicular to the Earth's axis: in this case, the satellite flies over the north and south poles and we therefore speak of a "polar orbit". Because of the Earth's own rotation, the satellite will fly over different points of the equator with each revolution.

Figure 1.1. A SPOT orbit represented in an Earth-bound referential

We have seen that the period T of the satellite only depends on the altitude h: the designers of satellite missions, therefore, have the possibility of choosing the period by fine-tuning the altitude h. One of the essential points in the choice of altitude lies with p, the number of orbits per day (a day having a duration TJ): p = TJ / T,

which is, a priori, a real number. In the case where p is expressed as a rational fraction:

[1.10]

with r and q being coprime integers, we note that at the end of q days, the number of orbits N will be:
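As a minimal worked illustration — written with a common phasing convention, p = ν + r/q with ν an integer and r, q coprime, which may not match the exact notation of equation [1.10] — the publicly documented 26-day SPOT cycle behaves as follows.

```python
# Hedged illustration of the recurrence (phasing) idea, using the convention
# p = nu + r/q (nu integer, r and q coprime); the book's own notation may differ.
# The SPOT figures (26-day repeat cycle) are public values.
nu, r, q = 14, 5, 26          # SPOT: p = 14 + 5/26 orbits per day
p = nu + r / q                # ~14.19, consistent with the value quoted in section 1.3
N = nu * q + r                # orbits completed after q days, when the ground track repeats
print(p, N)                   # -> 14.1923..., 369 orbits in 26 days
```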

Table 1.1. Orbital parameters of some remote sensing satellites

1.3.2. The effects of the Earth’s flattening: the precession of the orbits

The simplified model used thus far has supposed that the Earth was spherical and homogeneous and that, using the Gauss theorem, it could be considered as a point mass located at its center of gravity. The reality, however, is quite different: the Earth is neither perfectly spherical nor homogeneous. Thus, since the beginning of astronautics, we have been able to observe that the trajectories were not Kepler's ideal trajectories and that slight modifications therefore had to be introduced into the elliptical orbits in order to better predict the positions of satellites.

The first modification that had to be considered was the Earth's flattening at the poles. It is a well-known fact for geodesists that this flattening can be characterized through ground measurements; we know the Earth's radius is 6,378.137 km at the equator and 6,356.752 km at the poles. The Earth thus displays a kind of equatorial "bulge". This being the case, we can no longer speak of a potential field in 1/r, because the gravitational field is slightly modified with latitude because of this "bulge", so that the gravitational force is not necessarily directed toward the center of the Earth.

Without going into the theoretical modifications generated in the gravitational field U in detail, we can, however, say that, since the potential is no longer central, the trajectory of the satellite is no longer plane. In other words, the angular momentum [1.4] no longer has a fixed direction. Therefore, we can show the following properties:

– The angular momentum (i.e. the normal to the orbital plane) completes a precession motion around the north–south axis of the Earth. This vector completes a uniform rotation around the north–south axis of the Earth, characterized by the relation:

[1.11]

Because of the Earth's flattening, we therefore have a precession of the orbit; the order of magnitude of this precession is a priori small, since we find a value of the order of a degree per day for the quasi-polar low orbits (i ≈ 98°).

1.3.3. Heliosynchronous orbits

When we introduced phasing for remote sensing satellites, it was so that we could easily use the archives of a satellite, or be able to define acquisition schedules. If we wish all acquisition parameters to be identical at the end of a cycle, the acquisition time must also remain the same. To do this, the normal to the orbital plane must keep a fixed direction with respect to the Earth–Sun direction: it therefore goes around the Earth's axis once per year in an absolute referential. We then say that the satellite is "heliosynchronous": regardless of the day, it passes over the equator at the same local solar time.

The inclination must be slightly larger than 90°, which makes this type of orbit retrograde. Table 1.1 gives the values of the orbit inclination for different heliosynchronous satellites. This inclination is close to 90° and the orbits remain quasi-polar (they are sometimes called near polar orbits (NPO)). They therefore allow a good coverage of the Earth outside the polar caps.
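The inclination required for heliosynchronism at a given altitude can be sketched numerically, assuming the classical J2 nodal-precession expression for a quasi-circular orbit and standard constants (this precession relation is the one the text refers to as equation [1.11], but the exact formula and values below are assumptions).

```python
# Sketch: inclination needed so that the nodal precession equals one revolution per year.
import math

MU = 398600.4418          # [km^3/s^2]
R_EARTH = 6378.137        # [km]
J2 = 1.08263e-3           # Earth's flattening (oblateness) coefficient
OMEGA_SUN = 2 * math.pi / (365.25 * 86400.0)   # required precession rate [rad/s]

def sun_sync_inclination(h_km):
    a = R_EARTH + h_km
    n = math.sqrt(MU / a**3)                   # mean motion [rad/s]
    # nodal precession rate: dOmega/dt = -1.5 * n * J2 * (R/a)^2 * cos(i)   (e ~ 0)
    cos_i = -OMEGA_SUN / (1.5 * n * J2 * (R_EARTH / a)**2)
    return math.degrees(math.acos(cos_i))

print(f"{sun_sync_inclination(822.0):.1f} deg")   # -> about 98.7 deg: retrograde, quasi-polar
```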

1.3.4. Tracking the orbits

Because of the flattening of the Earth, Kepler's laws cannot be applied rigorously. However, the effects of this flattening are weak and the orbit of a remote sensing satellite can be approximated at any time by an ellipse described by a focus (the Earth), an orbital plane (with its normal vector) and the parameters of the Kepler ellipse. The major axis of the ellipse is called the "line of apsides": this line obviously goes through the center of the Earth. We must now enhance this description to be able to position this ellipse in relation to the Earth.

Figure 1.2. Evolution of the orientation of the orbit of a polar satellite having an orbital plane with a direction given at the summer solstice: the orbital plane is represented by the ascending and descending nodes. a) The Earth is supposed to be perfectly spherical and homogeneous and the orientation of the orbit remains identical throughout the year. b) The Earth is flattened, modifying the orientation of the orbit throughout the year. In this example, the orbit is not heliosynchronous and it does not complete an entire revolution during the year

To do this, we must define a frame related to the Earth, characterized first by a privileged direction, the north–south axis (with its directing unit vector), and by the equatorial plane (the plane perpendicular to this axis that contains the center of the Earth, as well as the equator). The angle between the direction of a point on the Earth and the equatorial plane allows us to define the latitude. The equatorial plane is then equipped with a conventional reference based on the Greenwich meridian: the angle formed by the position of this meridian and a point of the equator gives us the longitude.

In this terrestrial frame, we will also have the following definitions:

For remote sensing heliosynchronous satellites, the time at which the satellite passes through the ascending node is often provided instead of the right ascension of the ascending node (which amounts to the same thing).

Figure 1.3. Orbit of a satellite with inclination i and intersecting the equatorial plane at the ascending node N, in the Earth frame, which is defined by a reference meridian (Greenwich)

Even if the orbit is not strictly a Keplerian orbit, these descriptions are sufficient because the perturbations are weak and we can keep in mind the concept of an orbit that is locally planar and elliptical. In the case of a heliosynchronous orbit, we can get an idea of the non-planarity of the orbit. Indeed, let us consider an orbit with a period of 100 min: in 1 year, the satellite will have completed approximately 5,260 orbits. Since it is heliosynchronous, the ascending node will go round the Earth along the equator (40,000 km) in a year, which amounts to a shift toward the east of approximately 7.6 km per orbit.

Figure 1.4. Two examples of an orbit having the same orbital plane and the same ascending and descending nodes, but whose arguments of the perigee ω and ω′ are different. The figures are represented in the orbital plane. In these figures, only the node line belongs to the equatorial plane

1.3.5. Usual orbits for remote sensing satellites

To summarize, we see that orbital mechanics allows us to place satellites around the Earth on orbits that we can choose to be near circular, recurrent (by specifically choosing the altitude) and heliosynchronous (by choosing the inclination of the orbit). Table 1.2 gives several examples of the most usual satellite orbits, as well as the order of magnitude of the pixel size of the corresponding images (for satellites having several acquisition modes, the given value corresponds to the most common configurations).

These orbits display a small eccentricity, which nevertheless has consequences: the altitude and the speed vary slightly along the orbit. We therefore have the following relations for the altitudes and the speeds at the apogee and the perigee:

[1.12]

Table 1.2. Orbital parameters and image characteristics of several remote sensing satellites

1.4. Image acquisition and sensors

The objective of this section is to introduce the essential concepts of satellite imagery, that is, the perspective ray, the resolution and the ground swath. These concepts will allow us to understand how linear sensors build satellite images thanks to the satellite's own motion.

Two sensor families are used in satellite imagery: optical sensors and radar (Synthetic Aperture Radar, SAR) sensors. The former are passive sensors: they measure the backscattering of the solar light on the ground (or, for the so-called thermal sensors, the ground's own radiation). The latter are active sensors: they emit an electromagnetic wave using an antenna, and then use the same antenna to receive the backscattering of this wave on the ground. Both are directional: they mainly process the information that comes from a given direction of observation called the line of sight (LOS).

1.4.1. Perspective ray in optical imagery for a vertical viewing

In the case of optical sensors, the elementary sensor (a bolometer, a piece of photosensitive film, a Charge-Coupled Device (CCD) element, etc.) is a passive system that captures the photons coming from the LOS. This capture is done in a limited period of time Tint called the integration time (or exposure).

Since the propagation medium can be, upon a first approximation, considered homogeneous, the celerity of the light is then constant and the light is propagated in a straight line. To each given LOS corresponds a straight line coming from the sensor. For an ideal elementary sensor, only the objects that are located on this straight line contribute to the response of the sensor: we will call this straight line a “perspective ray”. Depending on the wavelength analyzed, the contribution of the objects can be the backscattering of the sunlight (i.e. for the wavelengths ranging from visible to middle infrared) or the object’s heat radiation (in thermal infrared). The solar illumination being incoherent (as is the heat radiation), the intensity measured by a sensor is the sum of these elementary contributions during a given duration called “integration time” Tint.

Actually, the optical system (made of lenses and mirrors) is subject to diffraction laws that define a cone around this perspective ray; this way, any object in this cone contributes to the response being measured. An elementary sensor is, therefore, characterized by a perspective ray (defined by a direction) and an angular aperture δω (called instantaneous field of view (IFOV)). This angular aperture can usually be defined by a relation that links the aperture L of the optical system (the dimension of the mirror or of the lens) and the analyzed wavelength λ. For circular optics, the angular aperture is expressed as the main lobe of a circular aperture (whose impulse response – or point spread function (PSF) – is expressed as a Bessel function). We can choose the following relation as an expression of the angular aperture, which is valid for a monochromatic radiation:

The LOS and the IFOV define a cone that characterizes the "resolution"1 δr of the system at any point of the perspective ray. The optical system points toward the ground of the Earth: the cone associated with the perspective ray then intersects the ground on a small surface, called the "footprint" (FP).
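As a rough numerical illustration of these notions, the sketch below evaluates a diffraction-limited IFOV and the corresponding vertical-viewing footprint; the 1.22 λ/L expression for the main lobe of a circular aperture and the numerical values (wavelength, mirror size, range) are assumptions chosen for illustration, not figures from the text.

```python
# Diffraction-limited IFOV of a circular optic and its ground footprint at range R.
import math

def ifov_circular(wavelength_m, aperture_m):
    """Angular aperture (IFOV) of a diffraction-limited circular optic [rad] (1.22*lambda/L)."""
    return 1.22 * wavelength_m / aperture_m

def optical_footprint(range_m, ifov_rad):
    """Ground footprint of the perspective cone at distance R: FP ~ R * d_omega."""
    return range_m * ifov_rad

d_omega = ifov_circular(0.65e-6, 0.3)          # visible light, 30 cm aperture (assumed values)
print(optical_footprint(822e3, d_omega))       # -> a couple of metres on the ground from ~800 km
```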

Figure 1.5. a) Optical monosensor viewing the ground vertically: the perspective ray (the dotted line) and the LOS correspond to the same line. The area intersected on the ground has a surface that is proportional to (Rδω)². b) Radar monosensor viewing the ground with an angle θ: the perspective ray (dotted line) is perpendicular to the LOS because it corresponds to an isochronous line for the wave coming from the sensor; the area intersected on the ground, i.e. the footprint, has a surface that is proportional to

It is worth noting that given the celerity of the light, the flight time between the objects belonging to the cone associated with the perspective ray and the sensor is much shorter than the time of the sensor’s integration. Therefore, all of these objects will contribute to the same output value, regardless of their range from the sensor.

One last important point remains, however, to be noted: in the most general cases, light rays cannot go through objects. Thus, only the backscattering object closest to the sensor will be observed: the other objects located on the perspective ray will then be hidden and will have no contribution.

1.4.2. Perspective ray in radar imaging

A radar imaging system is an active system based on a completely different principle: echolocation. The antenna emits a wave of the shortest possible duration and then acts as a receiver to capture the echoes produced by elementary reflectors (point objects, surfaces and volumes). As the celerity of electromagnetic waves through the atmosphere can be assumed to be constant, it is easy to convert a delay measured on the received signal into a range. Thus, the spatial positions that belong to a given isochronous surface, defined as a sphere whose center is the radar antenna, all correspond to the same delay in the received signal (reference time).

This isochronous surface has an extent bounded by diffraction laws. In the most general cases, the antenna is rectangular, with a size l × L: the LOS is perpendicular to the antenna and defines the "range axis". In this chapter, we will assume that the side of the antenna corresponding to the size L is aligned with the trajectory of the satellite; the axis thus defined is called the "azimuth axis". The aperture thus defined is expressed as the main lobe of a rectangular aperture (whose PSF is expressed as the product of two cardinal sines). It has an extension δωL along the azimuth direction, and δωl along the third direction (that is, perpendicular to the range axis and the azimuth axis). We then have the relations δωL ≈ λ/L and δωl ≈ λ/l,

defining the “main lobe” of the antenna. Given the orientation of the antenna in relation to the track, this main lobe points perpendicularly to the track.

On the isochronous surface, every backscattering object illuminated by the radar will then re-emit the incident wave. The contribution of all the backscattering targets belonging to this isochronous surface defines the nature of the signal received by the antenna: thus, we can call this isochronous surface a "perspective surface". We must also note that the illumination is coherent (single, quasi-monochromatic source): the received signal must therefore be seen as the sum in amplitude (and not the sum in energy) of the elementary signals backscattered by each elementary target.

In the plane that is perpendicular to the track, this perspective surface is reduced to a curve that we can call a perspective ray: all the points located on this perspective ray contribute signals that arrive at the same time at the antenna and are therefore not resolved. The spatial limitation of this perspective ray is due to the diffraction laws applied to the antenna. If we place ourselves far enough from the antenna, the perspective surface can be locally approximated by a plane, which we can call the "perspective plane": the perspective ray then becomes a perspective straight line, perpendicular to the LOS of the antenna.

Now we can see the essential difference between a radar sensor and an optical sensor. For the optical sensor, the perspective ray comes from the sensor and coincides with the LOS; as soon as a diffusing object is located on the perspective ray, it masks the other objects located further along the perspective ray. For a radar sensor, the perspective ray is perpendicular to the LOS. All the objects located on this perspective ray can contribute to the received signal.

A real radar system cannot distinguish, from a temporal point of view, two events that are very close to each other: it has a temporal resolution δt. We will see that it is the shape of the emitted wave that dictates the temporal resolution. Knowing the celerity c of electromagnetic waves, we deduce the spatial resolution along the range axis: δr = c δt / 2.

Thus, instead of an isochronous surface, we have, at time t, corresponding to a range R = c t / 2, a small volume around this isochronous surface, with an extent c δt / 2 along the range axis and an extension RδωL along the azimuth axis. All the backscattering objects belonging to this small volume will be mixed in the same output value.

The antenna points toward the Earth and the intersection of this small volume with the ground defines the footprint. It is worth noting that, along the range axis, the spatial extent depends on the shape of the wave and on the angle of incidence, and that, along the azimuth axis, it depends on the dimension L of the antenna.

As in the optical case, we can note that the matter absorbs electromagnetic waves; thus, masking elements can appear as soon as absorbing objects are in the LOS of the antenna.

Upon a first analysis, it is interesting to note that a perspective ray can be defined for an optical system as much as for a radar system, but with a fundamental difference in relation to the LOS:

– the optical perspective ray comes from the sensor and is therefore identical to the LOS;
– the radar perspective ray is perpendicular to the LOS.

Thus, analyzed in the plane perpendicular to the track, the perspective ray of an optical system that views a ground point along an incidence θ and the perspective ray of a radar system whose LOS is at π/2 – θ are identical (Figure 1.6).

In the two cases, the masking effects will appear along the LOS, which indeed corresponds to the optical perspective ray, but not to the radar perspective ray.

1.4.3. Resolution and footprint

1.4.3.1. The case of optical imagery

Let us consider an optical system in a vertical LOS (Figure 1.7): it is characterized by the perspective cone defined by the perspective ray and the angular aperture δω (due to the diffraction of the optical system). At any given range R, two objects belonging to this cone cannot be resolved: we can choose as the definition of the resolution δx, the “footprint”, which is a function of the range R:

δx = FP(R) = R δω [1.13]

Figure 1.6. Optical monosensor and radar monosensor sharing the same footprint and the same perspective ray. These two footprints can be similar if the incidence angle is θ for the radar and π/2 – θ for the optical sensor. The figure is drawn in the plane perpendicular to the track

Figure 1.7. a) Footprint in optical imagery. b) Footprint in radar imagery

1.4.3.2. The case of radar imagery

Similarly, the perspective ray of a radar system of a temporal resolution δt viewing a ground point with a local incidence θ will have a footprint FP equal to (Figure 1.7):

FP = c δt / (2 sin θ) [1.14]

If several objects belong to the same radar FP, their individual contributions will add up coherently (real parts and imaginary parts add together independently) and their contributions will be mingled. These objects retransmit toward the antenna as if the footprint were a "ground antenna" emitting a wave: however, unlike usual antennas, the amplitude and phase distributions on this "ground antenna" do not follow any simple law, which prevents the computation of the radiation pattern of this ground antenna.

As in the optical case, we can choose as the definition of the resolution δx along the direction Ox (swath) the footprint, which depends on the incidence angle θ: δx = c δt / (2 sin θ).

1.4.3.3. Relations verified by the footprint: the optical case

We have seen that an optical system observing the Earth is characterized by a footprint whose dimensions are deduced from the resolution of the system. We will now see that the geometry of the footprint depends on the range between the sensor and the ground, as well as on the local incidence θ, i.e. the angle between the LOS and the normal to the ground.

– If we consider the same system at two different altitudes and viewing the Earth with a null incidence (the perspective ray is then perpendicular to the surface of the Earth), the optical laws will then easily demonstrate that the size of the footprint varies linearly with the range. For two ranges R and R′, the footprints FP(R) and FP(R′), verify:

FP(R) / FP(R′) = R / R′ [1.15]

which is shown in Figure 1.8.

– Let us now model how the footprint varies with the local incidence. Let us assume that the ground presents a local slope α: the local incidence is then no longer null. Since in this configuration the range R remains the same, it is easy to show that in this case we have:

FP(α) = R δω / cos α [1.16]

which is shown in Figure 1.9.

Figure 1.8. Optical monosensor viewing the ground vertically from different altitudes: the footprint varies linearly with the altitude (equation [1.15])

Figure 1.9. Optical monosensor viewing the ground vertically. As the ground is not horizontal, the footprint depends on the local slope α (equation [1.16])

On the other hand, the perspective ray will have a local incidence θ. By then applying the two previous formulas, we deduce:

FP(θ) = h δω / cos²θ [1.17]

We see that, for an optical system, the closer the incidence is to the vertical and the lower the altitude of the satellite, the more the FP is reduced.

Figure 1.10. Optical monosensors of the same angular aperture δω, at the same altitude and viewing the same point. The local incidence angle θ depends on the LOS. The footprint is proportional to (equation [1.17])

1.4.3.4. Relations verified by the footprint: the radar case

Similarly, the footprint of radar systems verifies some specific relations. However, these relations differ from the specific relations of optical imagery and it is important to emphasize them beforehand.

The echolocation signal is a temporal signal of duration δt. For a system viewing a ground point with the local incidence θ, the radar footprint along the range axis is written as (relation [1.14]): FP = c δt / (2 sin θ).

It is important to emphasize that it does not depend on the range between the ground point and the sensor.
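A short numerical sketch of relation [1.14] illustrates this behavior; the pulse duration and the incidence angles below are illustrative assumptions, not values from the text.

```python
# Ground extent of one radar range cell: FP = c*dt/(2*sin(theta)), as in relation [1.14].
import math

C = 3.0e8   # speed of light [m/s]

def radar_range_footprint(dt_s, incidence_deg):
    """Footprint along the range axis for temporal resolution dt and local incidence theta."""
    return C * dt_s / (2.0 * math.sin(math.radians(incidence_deg)))

for theta in (23.0, 45.0):
    print(theta, radar_range_footprint(50e-9, theta))
# -> about 19 m at 23 deg and 11 m at 45 deg: the footprint shrinks as 1/sin(theta)
#    and, unlike the optical case, does not depend on the range to the sensor.
```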

In the presence of local slopes, the footprint verifies the following relation:

where α is the local slope of the ground. The footprint is all the larger as the local slope is oriented toward the sensor. If the footprint is larger, its backscattering will be all the higher, since a larger number of elementary targets belong to this footprint. Figure 1.11 illustrates this relation.

Figure 1.11. Radar antennas with the same temporal resolution, the same altitude and the same emission angle, viewing Earth areas with various local slopes α. The footprint is larger when the local slope is oriented toward the sensor

Finally, Figure 1.12 shows how the footprint varies depending on the local incidence angle, in the case of antennas placed at an identical altitude. We then have the relation:

FP(θ1) / FP(θ2) = sin θ2 / sin θ1 [1.18]

1.4.4. The swath in satellite imagery

An elementary sensor is not of much use by itself in satellite imagery, since it only gives information on a very small area of the Earth: the footprint. A satellite imagery system therefore needs the ability to acquire information over a wider area and, eventually, to build an image. The first step is to pass from information acquired at a point to information acquired along a line.

For an optical sensor, the acquisition of a line of data is carried out by using several LOS, which in turn allows us to acquire information that comes from different perspective rays. This set of perspective rays will be called the “perspective bundle”: the area on the ground intersected by this perspective bundle is called the “swath”.

Figure 1.12. Radar antennas with the same temporal resolution and the same altitude, viewing the same area with two incidence angles θ1 and θ2. The law followed by the footprint is in 1/sin θ

Historically, the first satellite sensors were based on a unique sensor (for example a microbolometer) and the LOS was modified using a circular scanning obtained by the rotation of a mirror. Thus, we construct the perspective bundle step by step, this bundle being a plane perpendicular to the mirror rotation axis. Let us note that this acquisition mode was very well adapted to analog signals used both for acquisition and transmission to the ground reception stations.

The introduction of digital sensors (CCD sensors) allowed us to increase the number of elementary sensors. Historically, the first CCD system was the "whiskbroom", where the perspective bundle is still obtained by the rotation of a mirror but where a small number of elementary sensors are aligned along the mirror rotation axis; thus, we have the possibility to acquire several lines at once during the rotation of the mirror. For example, the MODIS sensor (on board the Terra satellite, among others) is fitted with 30 CCD elements, allowing us to acquire 30 image lines in approximately 4 s, i.e. half a mirror rotation.

Nowadays, optical systems are generally based on a linear sensor made up of a large number of CCD elements (several thousand). This CCD array lies in the focal plane of the optical system, in a plane perpendicular to the motion of the satellite (that is, the plane that is perpendicular to the rotation axis of the first systems): we then speak of a "pushbroom" sensor. The number of elements has increased considerably in recent years: whereas SPOT-1 had a CCD array with 6,000 elements, QuickBird has six CCD arrays, each with 4,736 CCD elements. To each CCD element corresponds a different LOS and therefore a different perspective ray. All of these perspective rays make up the perspective bundle, defined by an angular aperture Ω. If, in a first approximation, we consider the Earth as a flat surface, the swath, defined by the intersection of the perspective bundle and the ground, has a length LS ≈ RΩ.
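The order of magnitude of such a pushbroom swath can be sketched as follows; the detector count and ground pixel size are SPOT-like values used here only for illustration.

```python
# Pushbroom swath L_S ~ R * Omega, with Omega the per-detector IFOV times the number of detectors.
n_detectors = 6000          # elements in the CCD array (SPOT-like, assumed)
pixel_ifov = 10.0 / 822e3   # ~10 m ground pixel seen from ~822 km [rad] (assumed)
R = 822e3                   # range for a vertical viewing [m]

omega = n_detectors * pixel_ifov      # total angular aperture of the perspective bundle
swath = R * omega                     # flat-Earth approximation
print(swath / 1e3)                    # -> about 60 km, the order of magnitude of a SPOT scene
```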

For radar imagery, there is a single antenna. After each emission, the received signal is sampled with a time step δt and analyzed over a duration TW much larger than the time step δt. The time swath is based on these time samples. The ground swath is obtained by projecting this time swath onto the ground. The nearest point of the ground swath is called the near range and the farthest point, the far range. By assuming the Earth is flat, the swath can be expressed as LS = c TW / (2 sin θ),

θ being the average incidence angle.

Let us note that the diffraction laws define a main lobe for every radar antenna. Only the area of the Earth belonging to the main antenna lobe can contribute to the backscattered signal, so that there is a physical limit to the duration of the time swath TW.

1.4.5. Images and motion

We have seen in section 1.3 that a remote sensing satellite on a circular orbit moves at a roughly constant speed VS ≈ 7.5 km/s. This motion, specific to the so-called non-synchronous satellites, depends on the orbital parameters and allows them to fly over part of the Earth or the entire Earth. This motion can thus be used to build the images.

Indeed, let us consider, at time T, a non-synchronous satellite fitted with a CCD array (Figure 1.14): it acquires a line (corresponding to the swath at time T) during the integration time Tint. Let Δt ≥ Tint be a time interval. The system can then acquire a new line at time T + Δt. Since, on its orbit, the satellite will have moved a distance equal to VSΔt, this new swath will be at a distance VS,groundΔt from the previous one, VS,ground being the speed of the satellite relative to the ground.

Figure 1.13. Definition of the swath in a plane perpendicular to the satellite motion. a) An optical sensor based on a CCD array and viewing the ground vertically. The aperture Ω defines the swath (RΩ). b) Radar antenna aiming along the incidence θ with a sidelobe δθ. The window of temporal analysis is TW: it corresponds, on the LOS, to a range rW which, projected on the ground, results in a swath AB with a length LS

We can, therefore, build an image from these lines acquired at different times using a simple linear sensor: this is the reason why 2D CCD matrices are only marginally used in remote sensing.

In the case of a radar sensor, if TF denotes the duration of the echo corresponding to the swath LS (the set of points on the Earth backscattering the emitted wave), we can emit these waves at a specific frequency called the pulse repetition frequency (PRF). One of the conditions is that the duration associated with the PRF, TPRF, must be greater than TF (otherwise, the signals coming from some points of the swath would be mixed from one emission to another). Thus, we find ourselves in a configuration analogous to the optical case: the satellite having moved a distance equal to VSTPRF between two radar emissions, we can build an image from these consecutive emissions. Each line is then at a distance VS,groundTPRF from the previous one, VS,ground being the speed of the satellite relative to the ground. This radar image construction is called the "stripmap mode".
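The timing constraint just described can be sketched numerically as follows; the swath extent, incidence angle and ground speed are illustrative assumptions.

```python
# Stripmap timing sketch: T_PRF must exceed the echo duration T_F of the swath,
# and the azimuth line spacing is V_ground * T_PRF.
import math

C = 3.0e8

def echo_duration(swath_ground_m, incidence_deg):
    """Duration T_F of the echo spanning the ground swath (flat-Earth approximation)."""
    slant_extent = swath_ground_m * math.sin(math.radians(incidence_deg))
    return 2.0 * slant_extent / C

T_F = echo_duration(60e3, 30.0)        # 60 km swath seen at 30 deg incidence (assumed)
max_prf = 1.0 / T_F                    # upper bound so that consecutive echoes do not overlap
v_ground = 6.6e3                       # sub-satellite speed [m/s] (assumed)
print(max_prf, v_ground / max_prf)     # -> PRF below ~5 kHz, i.e. azimuth lines > ~1.3 m apart
```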

1.5. Spectral resolution

1.5.1. Introduction

Here, we purposely introduce only a few details on the notion of spectral resolution, as it will be developed in more detail in Chapter 3.

Optical sensors measure the quantity of light reflected by the surface observed in a given range of wavelengths. For example, the wavelengths of the visible light range between 400 and 700 nm. If these wavelengths are observed together, we speak of a panchromatic image (containing all the colors). If this range of wavelengths is divided into sub-bands in order to be able to measure the light intensity separately in each band, we speak of a multispectral image. Often, in the case of the visible spectrum, the chosen bands correspond to the colors blue, green and red, which allows us to rebuild images in “natural colors” by composing these three channels. Other choices can be made, as we will see, for example, in Chapter 3 for the Formosat-2 sensor (whose five bandwidths will be detailed in Figure 3.1).

The multispectral image thus allows us to better characterize the observed surfaces, because it allows us to separate the contributions in different wavelength ranges. Thus, the majority of multispectral sensors contain a near infrared band, since the vegetation has a strong response in these wavelengths.

The wavelengths observed in the optical domain do not stop at the visible and the near infrared (NIR). Sensors can also be made sensitive in the short-wave infrared (SWIR) and in the thermal infrared (TIR). The number of bands can thus reach tens or even hundreds: we then speak of superspectral and hyperspectral sensors, respectively, the limits between the different categories not being very well defined. We then obtain a very precise spectral signature of the observed materials.

1.5.2. Technological constraints

The technology used for building sensors depends on the wavelength we are interested in. The sensitivity of the materials is indeed different in the different wavelengths and the difficulty of implementing sensors is not the same in all areas of the electromagnetic spectrum.

To simplify, the ability of a sensor to measure light with a good signal-to-noise ratio depends on the flux of received light. This flux is proportional to the surface of the sensor and to the spectral bandwidth. Superspectral and hyperspectral sensors require a fine division of the spectrum. The price to pay is a loss in spatial resolution, since the footprint must be larger to allow a reasonable number of received photons to be detected.

1.5.3. Calibration and corrections

Calibration is the process of converting the magnitude provided by the sensor (a voltage) into a measure of the observed physical phenomenon. If we are interested in the light flux, a first useful transformation consists of converting digital numbers into radiances (flux normalized by the surface of the sensor). The light flux measured by the sensor contains contributions from several sources: the sunlight, the different atmospheric reflections and, finally, the quantity of light emitted by the observed scene.

To gain access to this last quantity, the measurement provided by the sensor must undergo a set of transformations and corrections. The effects that are most difficult to correct are those induced by the light traveling through the atmosphere. Beyond the fact that the terrestrial atmosphere is opaque for some wavelengths – we are not interested in these spectral bands – the optical depth of the atmosphere varies with time, space and wavelength.

The calibration and adjustment processing is detailed in Chapter 3, section 3.6 for the optical data, and in Chapter 4, section 4.4 for the radar data.

1.5.4. Image transmission

The images acquired by the satellites must then be transmitted to the ground in order to be exploited. This transmission is done via a radio link (often in X band or Ka band) whose bandwidth (data transmission rate) is limited.

The volume of images acquired depends on the following factors: the number of pixels (field of view and spatial resolution), the number of spectral bands and the radiometric resolution, i.e. the number of quantization levels used to encode the pixel values.

From these elements, we can deduce the size of a raw image. Often, these images are compressed on board the satellite before being transmitted to the ground. This compression allows us to reduce the volume of data to be transmitted, while minimizing the information loss.
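A minimal worked example, with entirely illustrative numbers (image size, number of bands, bit depth, compression factor and link rate, none of them taken from a particular satellite), shows how these factors combine.

```python
# Raw image volume and downlink time, from the factors listed above.
n_pixels = 12000 * 12000        # field of view x spatial resolution (assumed)
n_bands = 4                     # multispectral bands (assumed)
bits_per_sample = 12            # radiometric quantization (assumed)

raw_bits = n_pixels * n_bands * bits_per_sample
print(f"raw image: {raw_bits / 8 / 1e9:.2f} GB")        # -> ~0.86 GB

link_rate_bps = 300e6            # downlink rate (assumed)
compression = 3.0                # on-board compression factor (assumed)
print(f"downlink: {raw_bits / compression / link_rate_bps:.0f} s per image")
```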

Transmitting images to the ground requires a radio link to be established between the satellite and a reception station on the ground. Since the majority of remote sensing satellites are on polar orbits, it is impossible for ground stations to be visible from these moving satellites at all times.

This is why the images are often stored in a memory on board the satellite while waiting for the passage above a reception station. The size of this memory limits the number of images that can be acquired by the satellite during each orbital period.

Another parameter that limits the acquisition capacity of the satellite is the data transmission rate to the ground. The satellite must indeed be able to empty its memory during the periods of visibility of the ground stations.

1 Let us recall a definition of resolution: it is a measure of the ability of the instrument to separate images of two identical neighboring object points.

Chapter written by Jean-Marie NICOLAS