Synthetic aperture radar provides broad-area imaging at high resolution, used in applications such as environmental monitoring, earth-resource mapping and military systems. This book presents the tools required for the digital processing of synthetic aperture radar images; these tools are of three types: (a) elements of physics, (b) mathematical models and (c) image processing methods adapted to particular applications.
Page count: 470
Year of publication: 2013
Table of Contents
Introduction
Chapter 1. The Physical Basis of Synthetic Aperture Radar Imagery
1.1. Electromagnetic propagation
1.2. Matter-radiation interaction
1.3. Polarization
Chapter 2. The Principles of Synthetic Aperture Radar
2.1. The principles of radar
2.2. The SAR equations
2.3. Acquisition geometry of SAR images
Chapter 3. Existing Satellite SAR Systems
3.1. Elements of orbitography
3.2. Polar orbiting SAR satellites
3.3. Satellites in non-polar orbit
3.4. Other systems
3.5. Airborne SARs
Chapter 4. Synthetic Aperture Radar Images
4.1. Image data
4.2. Radiometric calibration
4.3. Localization precision
Chapter 5. Speckle Models
5.1. Introduction to speckle
5.2. Complex circular Gaussian model of single-look and scalar speckle
5.3. Complex circular multi-variate Gaussian model of vectorial or multi-look speckle
5.4. Non-Gaussian speckle models
5.5. Polarimetric radar speckle
Chapter 6. Reflectivity Estimation and SAR Image Filtering
6.1. Introduction
6.2. Estimations of reflectivity R
6.3. Single-channel filters with a priori knowledge of the scene
6.4. Multi-channel filters
6.5. Polarimetric data filtering
6.6. Estimation of filter parameters
6.7. Filter specificities
6.8. Conclusion
Chapter 7. Classification of SAR Images
7.1. Notations
7.2. Bayesian methods applied to scalar images
7.3. Application of the Bayesian methods to ERS-1 time series
7.4. Classification of polarimetric images
Chapter 8. Detection of Points, Contours and Lines
8.1. Target detectors
8.2. Contour detectors
8.3. Line detectors
8.4. Line and contour connection
8.5. Conclusion
Chapter 9. Geometry and Relief
9.1. Radar image localization
9.2. Geometric corrections
Chapter 10. Radargrammetry
10.1. Stereovision principles: photogrammetry
10.2. Principles of radargrammetry
10.3. Results of radargrammetry
10.4. Conclusion
Chapter 11. Radarclinometry
11.1. Radarclinometry equation
11.2. Resolution of the radarclinometry equation
11.3. Determination of unknown parameters
11.4. Results
11.5. Radarpolarimetry
11.6. Conclusion
Chapter 12. Interferometry
12.1. Interferometry principle
12.2. Interferogram modeling
12.3. Geometric analysis of data
12.4. Applications of interferometry
12.5. Limitations of interferometry imaging
Chapter 13. Phase Unwrapping
13.1. Introduction
13.2. Preprocessing of InSAR data
13.3. Phase unwrapping methods
Chapter 14. Radar Oceanography
14.1. Introduction to radar oceanography
14.2. Sea surface description
14.3. Image of a sea surface by a real aperture radar
14.4. Sea surface motions
14.5. SAR image of the sea surface
14.6. Inversion of the SAR imaging mechanism
Bibliography
List of Authors
Index
First published in France in 2001 by Hermes Science Publications entitled “Traitement des images de RSO”
First published in Great Britain and the United States in 2008 by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
6 Fitzroy Square
London W1T 5DX
UK
www.iste.co.uk

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd, 2008
© Hermes Science, 2001
The rights of Henri Maître to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Cataloging-in-Publication Data
[Traitement des images de RSO. English] Processing of synthetic aperture radar images / edited by Henri Maître.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-84821-024-0
1. Synthetic aperture radar. I. Maître, Henri.
TK6592.S95T73 2008
621.3848--dc22
2007022559
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN: 978-1-84821-024-0
Synthetic aperture radar imagery was born of an exciting process extending over more than half a century and involving parallel advances in physics, electronics, signal processing and finally image processing. When radars first appeared on the eve of World War II, their prime task was surveillance, i.e. detection. They gradually acquired reconnaissance capabilities: very low resolution images were produced by space scans, while a persistent display made it possible to distinguish echoes from different reflectors. To move on to an actual image, all that was needed was to accelerate the scan and organize the systematically collected echoes along two directions. But above all, major improvements had to be achieved on two key parameters: resolution, which, due to the particular wavelengths used, was rather poor at a useful monitoring range, and discriminating power, i.e., receiver sensitivity across the relevant dynamic range. Both parameters were improved in the wake of manifold technical progress, but also thanks to some decisive choices, including side vision, which helped remove the dominant echo of orthogonal reflection, and synthetic aperture, which paved the way to virtually unlimited resolution capabilities. As uncomplicated as these ideas may appear, they could not have materialized without proper technological backing. It thus took a continuous movement back and forth between methodological, conceptual strides and progress in areas such as sensors and emitters, electronic components and processing algorithms for radar imaging to eventually emerge on a par with optical imaging as a basic remote sensing tool.
By the 1960s, the essentials of radar imaging that make it so attractive nowadays had been investigated and recorded. Its foundations ranging from the capacity of discriminating among different materials to that of penetrating through various covers and vegetation layers, from geometrical effects to depolarization properties, from stereoscopic, interferometric and clinometric capacities to differential wavelength properties, had all been laid down. This progress, however, was not widely publicized. Born on the spur of anti-aircraft defense needs, radar imaging was still closely connected with military applications. As a result, even its most outstanding advances, which were often regarded as strategic, were very slow to seep into other areas of industry or research. By its very complexity, particularly its hi-tech requirements, radar imaging was out of bounds for many industrial applications, and academics would not get into it without solid support from some powerful constructors. Even having a look at images from synthetic aperture radars was a lot of trouble. This was not only due to obvious property restrictions, but also to the complex way in which they were obtained. These images, which often were the product of experimental sensors, were very hard to use. The basic acquisition parameters that will be detailed further in this work were subject to endless adjustments. Intermediate processing was also constantly improving, involving transient changes that were not always fully documented. To users, a significant leap forward was made with the advent of civilian satellite sensors such as SEASAT, SIR-A and -B, and especially the ERS family. These systems made it possible to establish a number of reference products that became accessible to all laboratories and helped expand the application range to a considerable extent. 
Whole areas, from natural disaster prevention to geological and mining surveys, from cartography to polar route monitoring and from forestry management to sea surveys, thus opened up to the use of radar imaging.
Radar imaging has numerous advantages over optical imaging. From among them, we have to underline its capacity of working in any weather, which is particularly useful in frequently overcast countries such as those located in the equatorial belt. In addition, its coherent imaging properties (i.e., its capacity of collecting amplitude and phase signals) are used to attain remarkable resolutions in the synthetic aperture version, while interferometry uses them to measure extremely fine altitudes and control some even finer displacements (accounting for bare fractions of the operating wavelength). The penetration capacity of radar waves is also linked to microwave frequency. It helps them get across light foliage and detect underground structures provided they are shallowly buried in very dry environments. Finally, radar waves are for the most part polarized, and the extent to which they are depolarized by different media that backscatter them is a great source of information for agriculture, geology and land management.
Nevertheless, radar imaging is less attractive for its edge over optical imaging than for the way it complements the latter. The formation of radar images, for instance, is governed by time-of-flight laws rather than the projection imaging we are familiar with. Moreover, radar imaging is especially sensitive to the geometric properties of targets, whether microscopic (e.g., roughness, surface effects) or macroscopic (e.g., orientation, multiple reflections). On the other hand, optical imaging is more sensitive to the physicochemical properties (e.g., emissivity, albedo, color) of targets. Radars are sensitive to properties such as the nature of materials (metallic targets, for example) and their condition (such as soil humidity or vegetation dryness) that optics is frequently unable to perceive. Finally, optical imaging depends on a source of light, which is usually the Sun, while radar imaging has nothing to do with this. As a result, radar images, as compared to optical images, have higher daytime and seasonal stability, but depend to a greater extent on the position of the sensor when it takes a shot.
For all these reasons, many satellites have been equipped with imaging radars. While some of them were merely experimental, others lived on through their descendants, such as the Lacrosse family, which are US military satellites, and the ERS, which are European civilian satellites. These satellites are permanent sources of information on our planet. Such information is mostly processed by photograph interpreters, but automatic techniques are gaining ground, driven by an increased amount of images that need to be processed and the growing demand for reliable and quantitative measurements. This work is designed to contribute to the development of such automatic methods.
It therefore covers the three basic types of tools that are required to digitally process images supplied by synthetic aperture radars, namely:
– physical concepts that help account for the main propagation phenomena and substance-radiation interactions and provide notions on how radars and their supporting platforms operate;
– mathematical models that statistically describe the very peculiar characteristics of radar issued signals and the properties we may expect of them; and
– image processing methods to suit specific applications: detection, reconnaissance, classification or interpretation.
The careful, simultaneous consideration of these three types of properties has helped devise effective automatic methods for extracting information from radar images. For many years, users only adapted to radar imaging a number of algorithms that had worked successfully in optical remote sensing. These commercially available programs were well known to photo interpreters and had been amply tested on images from Landsat, SPOT and Meteosat. Applied to radar images, they yielded very poor results, strengthening the belief that these were definitely unmanageable by automatic tools and open to nothing but qualitative interpretation. It is the goal of this work to disprove this belief and provide the necessary elements for turning the asset of radar imagery to good account.
The physics behind radar image formation is complex and involves several different topics. Some deal with electronic components devoted to transmission and reception of the wave, but they will not be discussed here. Other aspects, namely wave propagation and the interaction between microwave frequency waves and materials, are more important for our purposes. These two topics are the subject of this chapter. Electromagnetism obviously underlies both these phenomena and we begin with a review of useful results in this area.
An electromagnetic wave such as that emitted by radars is characterized at any point in space and at every moment by four vector quantities: E (electric field), D (electric displacement), B (magnetic induction) and H (magnetic field).
These quantities verify Maxwell's equations, which in the absence of free charges and current densities are written as [JAC 75]:

∇·D = 0,  ∇·B = 0,  ∇×E = −∂B/∂t,  ∇×H = ∂D/∂t
In the linear stationary case, the electric displacement and the magnetic induction are related to the fields by the following relations:

D = εE,  B = µH

where ε is the permittivity and µ is the permeability. We will consider them as scalar values in this book (they are tensors in the general case of anisotropic dielectrics).
The electric field and magnetic field vectors are sufficient to characterize the electromagnetic wave in an unbounded, homogeneous, isotropic medium free of charges and currents. Using Maxwell's equations, we can show that every component u of these fields verifies the wave equation:

∇²u − (1/v²) ∂²u/∂t² = 0 [1.1]

We thus observe the transmission of electromagnetic energy; v = 1/√(εµ) is the propagation velocity of the electromagnetic wave.
By denoting ε0 the vacuum permittivity and µ0 the vacuum permeability, we deduce c, i.e. the speed of light, as being:

c = 1/√(ε0µ0)

In the general case, and in the absence of any charge or current, the relative permittivity εr and relative permeability µr of the propagation medium are normally used, which makes it possible to express the propagation velocity according to c:

v = c/√(εr µr)

The refractive index n of a propagation medium is defined as:

n = c/v = √(εr µr)
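As a quick numerical check of these relations, the short sketch below (plain Python; the constant values and function names are illustrative, not from the book) computes the propagation velocity and refractive index from the permittivity and permeability:

```python
import math

# Vacuum permittivity (F/m) and permeability (H/m)
EPS0 = 8.8541878128e-12
MU0 = 4e-7 * math.pi

def propagation_velocity(eps_r, mu_r=1.0):
    """v = c / sqrt(eps_r * mu_r), with c = 1 / sqrt(eps0 * mu0)."""
    c = 1.0 / math.sqrt(EPS0 * MU0)
    return c / math.sqrt(eps_r * mu_r)

def refractive_index(eps_r, mu_r=1.0):
    """n = c / v = sqrt(eps_r * mu_r)."""
    return math.sqrt(eps_r * mu_r)

c = propagation_velocity(1.0)      # vacuum: recovers c ~ 2.998e8 m/s
n_water = refractive_index(80.0)   # using the static permittivity of water
```

Note that the index obtained for water uses its static relative permittivity; at radar frequencies the effective permittivity, and hence the index, differs.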
Since the medium is unbounded, E and H are perpendicular to each other at any point, and both are perpendicular to the propagation direction, which represents the energy path and is also called a ray.
If a preferred direction is specified by convention in the plane (E, H), we will then be able to characterize E (and therefore H) in terms of its polarization, i.e., its orientation with respect to the defined direction.
In the presence of an isotropic radiation source located at the origin O, the solution of propagation equation [1.1] at any point in space, at a distance r from the source, is written:

u(r, t) = (A/r) f(t − r/v) [1.2]
The wave then propagates from the source (homogeneous medium) in such a way that the wavefront, i.e., the surface normal to the rays everywhere in space, is a sphere centered on the source: the propagation between the source and any observer takes place in a straight line.
In the specific case of satellite systems, the objects impinged on by the electromagnetic wave are far enough from the antenna for the wave to be considered locally plane around the study zone (Fraunhofer zone). Moreover, the only waves generally taken into account are quasi-monochromatic waves of frequency fc (harmonic case), defined by their wavelength λ = c/fc and their wave vector k, with |k| = 2π/λ.
Given these hypotheses, we show that in the presence of a source at O, the solution of the propagation equation at a point located at distance r is written as:

u(r, t) = (A/r) exp(j(2πfc t − k·r)) [1.3]

The emitted and observed fields differ from one another by a phase term (the term in k·r) and an attenuation term (the term in 1/r). A surface defined by a set of points sharing the same phase is called a wave surface: k is normal to the wave surface at every point, and the electric and magnetic fields lie in the plane tangent to the wave surface.
In the general case (equation [1.2]) as well as in the quasi-monochromatic case (equation [1.3]), the 1/r term appears, describing an attenuation phenomenon arising from energy conservation: by integrating the energy over a wave surface or the wavefront, the energy transmitted by the source should be recovered. This attenuation effect, which is quite strong in airborne radars, may also be significant in satellite imaging radars. With the transmitter located hundreds of kilometers away in orbit and imaged areas extending over dozens of kilometers, the attenuation term may indeed vary by several percentage points across an image and create noticeable effects.
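The order of magnitude of this effect is easy to check. The sketch below (the slant-range values are illustrative assumptions, not taken from the book) evaluates the relative variation of the 1/r amplitude term across a swath:

```python
# Sketch of the 1/r amplitude variation across an imaged swath.
r_near = 850e3           # slant range at the near edge of the swath (m)
r_far = r_near + 50e3    # slant range at the far edge (m)

# The amplitude term goes as 1/r, so its relative variation across
# the image is 1 - r_near/r_far:
variation = (1.0 / r_near - 1.0 / r_far) / (1.0 / r_near)
```

With these values the variation comes out at a few percent, which is consistent with the "several percentage points" quoted above.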
As the wave no longer propagates through a homogenous medium, electric and magnetic fields no longer obey propagation equation [1.1]. In this case, several major phenomena have to be taken into account, i.e.:
– a change in the propagation, which is no longer a straight line due to a wavefront bend;
– a scattering phenomenon (e.g., backscattering, multiple scattering) that alters the energy transmitted along a ray;
– a potential transfer of energy into heat leading to wave absorption.
As a general rule, a simple expression of the propagation equation will no longer be available. Nevertheless, if the perturbation caused by the heterogeneities of the propagation medium is weak enough, we can resort to a traditional method (still within the linear framework) which consists of adding a complementary source term to equation [1.1]:

∇²u − (1/v²) ∂²u/∂t² = s(u) [1.4]

where s(u) describes the effect of the perturbation on the field.
By decomposing the field u into two terms, the incident field u0 and the field up created by the effects of the perturbation:

u = u0 + up

we note that, in the end, the problem comes down to solving the following equation:

∇²up − (1/v²) ∂²up/∂t² = s(u0 + up)

If there is no absorption, we can use the first-order Born approximation to solve it, by only taking the incident field into consideration within the perturbation term:

∇²up − (1/v²) ∂²up/∂t² = s(u0)

This makes it possible to interpret s(u0) as a source term for the field up, thus explaining the wave scattering process.
In the case of permittivity variations, we can show that the propagation equation verified by the field E is in general written as [LAV 97]:

∇²E + ∇(E · ∇ln ε) − εµ ∂²E/∂t² = 0 [1.5]
In the harmonic case, the second term of this equation can be ignored if the permittivity variations verify the relation:

λ |∇ε| / ε ≪ 1 [1.6]

Taking into account a variation Δε over a displacement Δr, the above relation can also be written as:

(λ/Δr)(Δε/ε) ≪ 1
Under these assumptions, the propagation equation is:

∇²E − ε(r)µ ∂²E/∂t² = 0 [1.7]

Everything occurs as if v had been replaced by the local velocity v(r) = 1/√(ε(r)µ).
Two cases have to be considered:
– when the permittivity varies around a stationary mean value ε̄, it is written:

ε(r) = ε̄ + δε(r)

making it possible to rewrite the propagation equation within the Born approximation as:

∇²u − ε̄µ ∂²u/∂t² = µ δε(r) ∂²u0/∂t² [1.8]

Energy scattering is still present, but the rays remain straight lines;
– when the permittivity varies slowly (relation [1.6] is then amply verified), we will assume that the notions of wavefront and propagation ray are still valid. In this case, the solution of equation [1.4] is the geometric optics solution which, by applying Fermat's principle, defines the ray of curvilinear abscissa s through the relation:

d/ds (n dr/ds) = ∇n

Once we are positioned along the ray thus defined, our search for a solution of the type:

u = a(r) exp(jk0Ψ(r))

where k0 = 2π/λ, yields non-trivial solutions if Ψ verifies the eikonal equation:

|∇Ψ|² = n² [1.9]
To account for a potential absorption of the incident wave, we can model the source term using integrodifferential operators. In this case, the wave vector may formally have an imaginary component: the wave undergoes a generally significant absorption phenomenon that may even lead to a quasi-total lack of propagation (perfect conductor case).
In order to reach the ground, the electromagnetic radiation emitted by the radar has to travel across the ionosphere, then the neutral atmosphere.
The ionosphere is the region of the atmosphere traditionally extending from 50 to 1,000 km in height, where there are enough free electrons to modify wave propagation. It is made up of three distinct layers, every one of which has a different electronic density ρ expressed in electrons per cubic meter. Some of the diurnal and nocturnal characteristics of these layers are summarized in Table 1.1.
Table 1.1. The different layers of the ionosphere: diurnal and nocturnal rough estimates of the electronic density ρ expressed in electrons per m³
In the ionosphere, the refractive index depends on the electronic density and is expressed, for a frequency f, in the form:

n = √(1 − f0²/f²) [1.10]

where f0 is the plasma frequency, which depends on the electronic density and can be approximated by the relation f0 ≈ 9×10⁻⁶ √ρ (f0 in MHz, ρ in electrons per m³). Given the ionospheric ρ rough estimates, we see that, in the different layers, this phenomenon has a very weak or even negligible effect on the centimetric waves used by imaging radars.
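The claim that the effect is negligible at radar frequencies can be checked numerically. The sketch below assumes the standard approximation f0 ≈ 8.98 √ρ (f0 in Hz, ρ in electrons per m³); the function names and sample values are illustrative:

```python
import math

def plasma_frequency(rho):
    """Plasma frequency f0 in Hz for an electron density rho (electrons/m^3)."""
    return 8.98 * math.sqrt(rho)

def ionospheric_index(f, rho):
    """Refractive index n = sqrt(1 - (f0/f)^2) of equation [1.10]."""
    return math.sqrt(1.0 - (plasma_frequency(rho) / f) ** 2)

# Daytime F layer (rho ~ 1e12 e/m^3) seen at C-band (5.3 GHz):
n_c_band = ionospheric_index(5.3e9, 1e12)
deviation = 1.0 - n_c_band   # of order 1e-6: negligible, as stated above
```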
In the neutral atmosphere, the refractive index is very close to 1 and the co-index N = (n − 1)×10⁶ is used instead; it is classically modeled by:

N = 77.6 P/T + 3.73×10⁵ e/T² [1.11]

where T is the temperature in Kelvin, P is the atmospheric pressure and e is the partial water-vapor pressure, both expressed in hPa.
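A quick evaluation of this co-index model for typical surface conditions (the sample values below are illustrative assumptions, not from the book):

```python
# Sketch of the refractivity (co-index) model of equation [1.11],
# in its classical Smith-Weintraub form.
def refractivity(T, P, e):
    """Co-index N = (n - 1) * 1e6 with T in Kelvin, P and e in hPa."""
    return 77.6 * P / T + 3.73e5 * e / T ** 2

N = refractivity(288.0, 1013.0, 10.0)   # typical mid-latitude surface values
n = 1.0 + N * 1e-6                      # refractive index barely above 1
```

The resulting index exceeds 1 by only a few hundred parts per million, which is why its effects are mainly visible in time-of-flight measurements rather than in ray bending.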
Systematic soundings of the entire surface of the Earth have provided an (at least statistically) accurate knowledge of the index for the stratosphere and mesosphere. In particular, Bean's atlas [BEA 66] provides, for most of the Earth, the co-index N(h) as a function of altitude h, in the form of a 5-parameter model whose coefficients are monthly averages calculated over a period of 5 years. By contrast, major index fluctuations are found in the troposphere, mostly as a result of air moisture and the related clouds.
Index variations are low for both the ionosphere and neutral atmosphere: the hypotheses required by the eikonal equation are fully justified and the effects linked to index variation are only perceptible in time-of-flight measurement between the radar and the ground. In the neutral atmosphere, some gaseous components may exhibit resonances within the range of frequencies that is of interest to us, as peripheral electrons of their atoms and molecules are excited. This is the case with water vapor in particular (lines at 22.2 GHz, 183.3 GHz and 325.4 GHz) and oxygen (lines from 50 to 70 GHz and one isolated line at 118.74 GHz). The signal is almost entirely absorbed at these frequencies.
Other phenomena, such as hydrometeors, may also influence propagation, both through wave delay and wave absorption. Hail, snow and lightning have a considerable impact on radar signals but are difficult to model.
The source term in equation [1.4] shows that the propagation of an electromagnetic wave is scattered if the medium is not homogenous. A scattered wave then appears, which is not necessarily isotropic and its radiation pattern depends on the source term. This approach does not easily cover phenomena related to the discontinuities of the propagation medium (e.g., surfaces between two media with different indices) or those related to reflecting targets.
Any phenomenological approach must take into account the radiation wavelength λ and the characteristic length L of the discontinuities. Even though the general case eludes analytical treatment – except for the homogeneous spherical scatterer treated by the exact Mie model – we can still analytically handle two essentially opposite cases:
– L ≫ λ: this is the case of the flat interface, which can be considered as unbounded, so that it is possible to use the Snell-Descartes equations;
– L ≪ λ: this is the case of a point target, which we will refer to as a Rayleigh target.
However, this approach fails to provide a reasonable account of reality, where in rare cases there may be only one perfectly smooth surface or only one quasi-point target. A pragmatic view of reality will thus prompt us to study in more detail two cases of high practical relevance, namely rough surfaces and point target distributions.
Flat interfaces have been studied since the time of Descartes and Snell. The relations obtained in visible optics (Fermat's principle) are derived from the continuity relations imposed on the solutions of Maxwell's equations. In the case of a flat interface between two media defined by their indices n and n′, if an incident wave impinges on this interface at an angle θ with respect to the normal of the interface, we will have a reflected wave at an angle θ and, in the second medium, a wave refracted at an angle θ′, so that:

n sin θ = n′ sin θ′
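A minimal sketch of this refraction relation (function name and sample indices are illustrative), including the case where no refracted ray exists:

```python
import math

def refraction_angle(theta_deg, n1, n2):
    """Snell-Descartes: n1*sin(theta) = n2*sin(theta').
    Returns theta' in degrees, or None when there is no refracted ray
    (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

theta_t = refraction_angle(30.0, 1.0, 1.5)   # into the denser medium
```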
The only situation in which the Snell-Descartes formalism may be altered is when the second medium is more or less conductive. The wave vector will then include an imaginary component specific to attenuation through the second medium. In the case of a perfect conductor, we will only have an evanescent wave in this second medium, as the energy of the incident wave is entirely conveyed to the reflected wave.
In reality, this is obviously an ideal situation, since interfaces are neither unbounded nor rigorously flat. We may nevertheless consider that an interface can locally be replaced by its tangent plane: the dimensions over which this approximation is valid correspond to the dimensions of an antenna whose directivity pattern is directly related to these dimensions. The incident wave will therefore be backscattered mainly along the refracted and reflected rays of the Snell-Descartes laws, as well as along the other directions according to a radiation pattern, in keeping with the Huygens principle and diffraction theory. This radiation pattern is that of an antenna whose dimensions are those of the approximation area. Despite its simplistic appearance, this analysis gives us the order of magnitude of the field backscattered in directions other than those of the Snell-Descartes angles.
Figure 1.1. Descartes laws for an unbounded plane (left) and, with the same original conditions, reflection on a plane sector (right)
Note that this approach only covers the kinematic aspects. To go into finer details and include the dynamic aspects, wave polarization will need to be considered (as seen in section 1.3) which may require adding a 180° phase rotation in some cases.
This case, based on a target much smaller than the wavelength λ, is the opposite of the previous one. Considering a spherical homogeneous target with an electric permittivity ε, we can resort either to exact calculations using the Mie model, which makes no assumption as to sphere size, or to approximate calculations (the Rayleigh model, in which a sphere much smaller than the wavelength is implied).
The behavior of the Rayleigh model can be deduced from equation [1.8]. Indeed, if the target is homogeneous and small enough compared to the wavelength, the source term varies little inside the target. Assuming an incident plane wave u0, it can be written:

s ∝ V µ δε ∂²u0/∂t²

where the proportionality factor involves V, i.e. the target volume.
In this way, we have here a secondary source that radiates like a dipole, proportionally to the square of the frequency, to the local permittivity variation inside the target, and to the target volume.
While the physical models described above provide a better understanding of how electromagnetic waves propagate, they do not make it possible to cover situations found in radar imaging. A pragmatic approach will lead us to consider three more realistic cases that will turn out to be very important for imaging: the rough interface, the case of a generic target and the scattering by a set of (point or not) targets.
The Snell-Descartes laws assume the interface to be flat. Such an assumption can be called into question in radar imaging, since deviations from planarity must not exceed a fraction of the wavelength (typically λ/8): for example, a roughcast wall may no longer be considered a smooth surface at some radar wavelengths.
An interface is said to be rough for an incident ray impinging on it at an angle θ if the mean quadratic deviation of surface irregularities, Δh, verifies the Rayleigh criterion:

Δh > λ / (8 cos θ) [1.12]

meaning a mean quadratic phase shift higher than π/2.
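The criterion is easy to apply numerically. In the sketch below (the wavelengths, incidence angle and irregularity height are illustrative assumptions), the same surface is smooth at one radar wavelength and rough at a shorter one:

```python
import math

def is_rough(dh, wavelength, theta_deg):
    """Rayleigh criterion [1.12]: rough if dh > lambda / (8*cos(theta))."""
    return dh > wavelength / (8.0 * math.cos(math.radians(theta_deg)))

# The same 5 mm surface irregularities, seen at a 23 degree incidence angle:
rough_c = is_rough(0.005, 0.0566, 23.0)   # 5.66 cm wavelength: still "smooth"
rough_x = is_rough(0.005, 0.031, 23.0)    # 3.1 cm wavelength: now "rough"
```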
The higher the roughness, the more the backscattering diagram differs from that of a flat interface. Moreover, the roughness effect depends on the angle of incidence θ: for a given surface, the closer the incidence is to the normal, the more significant the roughness and the more perturbed the radiation diagram.
The limit case is a surface whose roughness effects completely offset the flat-interface behavior. Such a surface scatters the incident radiation isotropically into a half-space. The backscattering is then characterized by the albedo, which represents the fraction of the received energy backscattered by this surface.
A target that does not satisfy the Rayleigh criterion can still be characterized by its directivity pattern and by its radar cross-section (RCS). To define the RCS, we consider that the target behaves at reception like an antenna with an area σ, and as if the entire intercepted power were backscattered isotropically (unit-gain antenna); the value of σ is the RCS.
The major drawback of this model lies in the fact that RCS is often strongly dependent on the configuration under which the target is illuminated by the incident wave. Even a minor change in this configuration may cause a major change in σ.
Let us consider a set of Rayleigh point targets (they can be seen as isotropic targets). These targets may be distributed on a plane (we then refer to their area density) or in a volume (we then refer to their volume density).
The backscattered wave is the sum of the elementary waves backscattered by every target. Assuming that the target density is not too high, we can neglect multiple reflections, in which backscattered waves are in turn backscattered by other targets. This often justified assumption satisfies the hypotheses of the Born approximation.
Generally, the emitted radar wave train is far longer than the wavelength: we thus speak of coherent illumination. In this case, the echoes backscattered by each target are summed coherently, i.e. amplitudes are summed rather than energies. The received signal therefore has a specific appearance induced by speckle, a phenomenon well known to opticians. This issue, a major one for radar image processing, will be discussed in more depth in Chapter 5.
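A toy simulation makes the coherent-sum effect concrete. The sketch below (scatterer counts and the random model are illustrative assumptions) sums unit-amplitude echoes with random phases, as for one resolution cell:

```python
import cmath
import random

random.seed(0)   # fixed seed for reproducibility

def coherent_echo(n_targets):
    """Amplitude of the coherent sum of n_targets unit echoes with
    uniformly random phases (one resolution cell)."""
    total = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * cmath.pi))
                for _ in range(n_targets))
    return abs(total)

# Two cells with identical scatterer statistics return very different
# amplitudes: this fluctuation is the speckle discussed in Chapter 5.
a1 = coherent_echo(100)
a2 = coherent_echo(100)
```

Summing energies instead (incoherent illumination) would give the same value, 100, for both cells; it is the amplitude summation that produces the fluctuation.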
When a plane divides space into two semi-unbounded, isotropic, homogeneous media, the plane of incidence of an electromagnetic wave characterized by its wave vector k is defined as the plane containing both k and the normal to the boundary plane dividing the two media.
The polarization of an electromagnetic wave is conventionally defined by the direction of the field E: we say that the polarization is perpendicular if E is perpendicular to the plane of incidence (TE polarization, E⊥), and that the polarization is parallel if E lies in the plane of incidence (TM polarization, E∥).
Starting from the Descartes laws and energy conservation, we can calculate the transmission coefficient t and the reflection coefficient r of the flat interface. The reflected field Er and the transmitted field Et are related to the incident field Ei by the following relations: Er = r·Ei and Et = t·Ei.
Figure 1.2. Fresnel laws for parallel polarization, i.e. where E is parallel to the incidence plane (left), and for perpendicular polarization, i.e. where E is perpendicular to the incidence plane (right)
where, for the perpendicular (TE) and parallel (TM) polarizations, the coefficients are given by [FRA 70]:

r⊥ = (n cos θ − n′ cos θ′) / (n cos θ + n′ cos θ′),  t⊥ = 2n cos θ / (n cos θ + n′ cos θ′)

r∥ = (n′ cos θ − n cos θ′) / (n′ cos θ + n cos θ′),  t∥ = 2n cos θ / (n′ cos θ + n cos θ′)
These relations highlight the different behaviors of parallel and perpendicular polarizations. In particular, for θ + θ′ = 90°, the parallel-polarized wave is no longer reflected; θ is then known as the Brewster angle (tan θ = n′/n).
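The vanishing of the parallel reflection at the Brewster angle can be verified numerically. The sketch below (function name and sample indices are illustrative assumptions) evaluates the parallel-polarization reflection coefficient:

```python
import math

def fresnel_r_parallel(theta_deg, n1, n2):
    """Amplitude reflection coefficient for parallel (TM) polarization."""
    ti = math.radians(theta_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)   # Snell-Descartes refraction angle
    return ((n2 * math.cos(ti) - n1 * math.cos(tt)) /
            (n2 * math.cos(ti) + n1 * math.cos(tt)))

brewster = math.degrees(math.atan(1.5 / 1.0))   # tan(theta_B) = n'/n
r_b = fresnel_r_parallel(brewster, 1.0, 1.5)    # ~0: no reflected TM wave
r_0 = fresnel_r_parallel(0.0, 1.0, 1.5)         # normal incidence
```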
In the general backscattering case, the components of the backscattered field are linearly related to the components of the incident field. This is usually written in matrix form as follows:

(E∥ᵇ, E⊥ᵇ)ᵀ = S (E∥ⁱ, E⊥ⁱ)ᵀ, where S is the 2×2 scattering matrix of components S∥∥, S∥⊥, S⊥∥, S⊥⊥ [1.13]
This formulation can be used to describe both the reflection by a plane and the scattering by a target, even though in the latter case parallel and perpendicular polarizations are entirely arbitrary notions.
The polarization of a plane wave describes the locus traced over time by the tip of the electric field vector E(t) in a plane orthogonal to the wave vector k. In general, this locus is an ellipse (the wave is said to be elliptically polarized), which in some cases may degenerate into a straight line segment (linear polarization) or a circle (circular polarization). An elliptically polarized wave is shown in Figure 1.3 [ULA 90].
For an observer, the orientation angle Ψ of the ellipse is the angle between the horizontal and the major axis of the ellipse described by the polarized wave; it ranges between 0° and 180°. The ellipticity angle χ is defined such that tan χ is the ratio of the ellipse's minor and major axes. It ranges between -45° and +45°, and its sign conventionally determines the direction of polarization: right if χ < 0, left if χ > 0. Note that opticians refer to a polarization as positive when an observer looking at the wave propagating towards him sees the ellipse described in the direct (counterclockwise) sense, i.e., to the left.
Figure 1.3.Polarization of a wave: conventions and notations
The polarization of a wave is then defined using the couple (Ψ, χ), deduced from the variations of the components Eh and Ev of the field E(t) along the axes h and v. These axes are defined in the plane orthogonal to the wave vector k and are conventionally related to the observer's reference frame (rather than to an incidence plane related to an interface, as in the previous section):
[1.14]
[1.15]
In the case of remote sensing radars, the observer's reference frame is related to the Earth and the vector h is horizontal. Particular cases are horizontal linear polarization (Ψ = 0°, χ = 0°), vertical linear polarization (Ψ = 90°, χ = 0°) and circular polarization (χ = ±45°).
The polarization of a wave can also be described using a real Stokes vector g = (g0, g1, g2, g3)T, defined as follows:
From equations [1.15], g can be expressed as a function of the orientation and ellipticity angles (Ψ, χ):
[1.16]  g = g0 (1, cos 2Ψ cos 2χ, sin 2Ψ cos 2χ, sin 2χ)T
The first component g0 = |Eh|² + |Ev|² is the total power carried by the wave. When the wave is fully polarized, i.e. the parameters |Eh|, |Ev|, δh and δv are constant over time, the wave satisfies the equality g0² = g1² + g2² + g3² (derived from equation [1.16]). This is generally the case for the transmitted wave. Conversely, a backscattered wave is the coherent sum of the waves backscattered by the elementary targets (assumed randomly distributed) within a resolution cell, and it is represented by a random time variable. It verifies the inequality g0² ≥ g1² + g2² + g3², where the gi are time averages; the wave is then said to be partially polarized. The degree of polarization of a wave, defined as p = √(g1² + g2² + g3²)/g0, is therefore 1 for a completely polarized wave, less than 1 for a partially polarized wave and 0 for a completely depolarized wave.
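These definitions translate directly into a few lines of Python. The sketch below computes the Stokes vector from complex h/v field components and the resulting degree of polarization; note that the sign convention for g3 varies between authors, so the one used here is only one common choice:

```python
import cmath, math

def stokes(Eh, Ev):
    """Stokes vector (g0, g1, g2, g3) from the complex h/v field components."""
    g0 = abs(Eh)**2 + abs(Ev)**2
    g1 = abs(Eh)**2 - abs(Ev)**2
    g2 = 2 * (Eh * Ev.conjugate()).real
    g3 = -2 * (Eh * Ev.conjugate()).imag  # sign convention varies between authors
    return (g0, g1, g2, g3)

def degree_of_polarization(g):
    g0, g1, g2, g3 = g
    return math.sqrt(g1**2 + g2**2 + g3**2) / g0

# a fully polarized wave satisfies g0**2 == g1**2 + g2**2 + g3**2
g = stokes(1.0 + 0j, cmath.exp(1j * math.pi / 4))
print(round(degree_of_polarization(g), 6))    # 1.0

# averaging the Stokes vectors of uncorrelated h- and v-polarized waves
# models a depolarized return: the degree of polarization drops to 0
gH, gV = stokes(1.0 + 0j, 0j), stokes(0j, 1.0 + 0j)
mix = tuple((a + b) / 2 for a, b in zip(gH, gV))
print(round(degree_of_polarization(mix), 6))  # 0.0
```

The second example mimics what averaging over a resolution cell does to a backscattered wave: the individual contributions are fully polarized, but their incoherent mixture is not.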
When an electromagnetic wave is scattered by a target, the fields are expressed in local coordinate systems related to the transmitting antenna and to the receiving antenna, while the global system is that of the observed target, as shown in Figure 1.4. In the monostatic case, i.e. when the transmission and reception locations are the same, the two local systems coincide according to the backscattering alignment (BSA) convention.
Figure 1.4.Local coordinate systems and geometry of the BSA convention, describing the incident wave and target scattered wave
In the case of backscattering by a target, the reflected wave and the incident wave are related to each other by the scattering matrix S (equation [1.13]):
[1.17]  Er = (e^(jkr)/r) S Ei
where r defines the observer’s location.
The elements Sij of matrix S depend on the target's characteristics, particularly on its geometric (roughness) and dielectric (moisture) features, but also on acquisition characteristics, in particular wave frequency, incidence, etc. In addition, the reciprocity principle [TSA 85] implies that Shv = Svh (rigorously, this is true only when the polarized waves H and V are transmitted simultaneously, which is actually not the case in radars alternating V and H transmissions; however, even in the latter case, data are calibrated so as to verify the relationship Shv = Svh).
In the following, we represent the complex backscattering matrix either using matrix form S or vector form
[1.18]  k = (Shh, Shv, Svh, Svv)T
For calibrated data (Shv = Svh), k reduces to three components: k = (Shh, √2 Shv, Svv)T, where the factor √2 preserves the total power.
In numerous applications, our interest focuses on distributed (or spread) targets and their average properties rather than on point targets. This is the case for studies of farming crops, sea currents and iceberg drift. For such studies, rather than S itself, we use one of the two matrices given below:
– the complex Hermitian covariance matrix C (monostatic case):
[1.19]  C = E[k k†]  († denoting conjugate transposition)
Assuming reciprocity, C reduces to a 3 x 3 matrix:
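The empirical estimate of C is simply an average of outer products of scattering vectors. The sketch below uses the 3-component form k = (Shh, √2 Shv, Svv) under reciprocity; the numerical "looks" are hypothetical values, chosen only to illustrate the computation:

```python
import math

def covariance(ks):
    """Empirical Hermitian covariance C = <k k^H> over scattering vectors k
    (3-component vectors k = (Shh, sqrt(2)*Shv, Svv), assuming reciprocity)."""
    n = len(ks)
    C = [[0j] * 3 for _ in range(3)]
    for k in ks:
        for i in range(3):
            for j in range(3):
                C[i][j] += k[i] * k[j].conjugate() / n
    return C

r2 = math.sqrt(2)
# two looks over a hypothetical distributed target (illustrative numbers)
looks = [(1 + 1j, r2 * (0.1 + 0j), 0.5 - 0.2j),
         (1 - 0.5j, r2 * (0.05 + 0.1j), 0.4 + 0.3j)]
C = covariance(looks)

# C is Hermitian (C[i][j] == conj(C[j][i])) and its diagonal carries the
# average channel powers, e.g. C[0][0] = <|Shh|^2>
print(C[0][0].real)   # 1.625
```

Averaging over several looks in this way is exactly what gives access to the mean properties of a distributed target, which a single scattering matrix S cannot provide.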
– the Stokes or Mueller matrix M.
The Mueller (or Stokes) matrix has been defined so that polarimetric synthesis can be expressed using either the fields E or the Stokes vectors g:
[1.20]
Thus, by analogy with equation [1.17], M is defined as the matrix connecting Stokes vectors with incident and reflected waves:
[1.21]
where gt is the transmitted (or incident) Stokes vector, gr is the received (or scattered) Stokes vector, and:
M is a real 4×4 square matrix. In the monostatic case, it is symmetric (according to the reciprocity principle) and related to S by [ULA 90]:
[1.22]
In the case of point targets and a monostatic radar, five relationships exist connecting M terms, namely [VAN 87]:
[1.23]
These relationships are a necessary and sufficient condition for a given Mueller matrix to have a single backscattering matrix associated with it. Therefore, a Mueller matrix only corresponds to an actual "physical" target when [1.23] is verified. Now, the mean Stokes parameters of the waves backscattered by an object varying in either time or space are related to the Stokes parameters of the transmitted wave by an average Mueller matrix E[M]. However, as relations [1.23] are generally lost by averaging the M matrices, there is no complex backscattering matrix corresponding to E[M], and only the first of the five relations in [1.23] is verified.
The most widespread definition of M is the one given above, but other definitions exist. In particular, the Mueller matrix is sometimes defined from the modified Stokes vector:
The corresponding “modified” Mueller matrix Mm is then written as follows:
[1.24]
There are other representations of polarimetric information. Some, such as the Poincaré sphere, the Jones representation, etc., are less widespread because they are more specific to some problems or interpretations. Some, such as the coherence matrices, will be dealt with in section 7.4.2.
1 Chapter written by Jean-Marie NICOLAS and Sylvie LE HÉGARAT-MASCLE.
1 The presence of free electrons in the mesosphere explains some overlapping with the ionosphere.
2 This partial pressure also depends on P and T.
3 This concept deserves a more elaborate definition, especially one that includes polarimetry (see, for example, [LEC 89]).
4 We have a very different situation where optical wavelengths are concerned, since on the one hand photons differ in frequency and, on the other hand, have very short coherence lengths. To most receivers, they will appear incoherent, which makes it possible to sum up their intensity contributions resulting in the speckle-free images we are familiar with.
Formulated as early as 1911 by the American writer Hugo Gernsback, the radar principle ("RAdio Detection And Ranging") is based on the principles of electromagnetic propagation: an electromagnetic wave emitted by a source is backscattered by targets. Analysis of the received signal makes it possible to detect and locate these targets, under the assumption that the propagation velocity of the wave remains essentially constant.
The first experiments in aircraft detection by radar date back to 1934, which explains why the British, Americans and Germans all possessed such systems, both on the ground and in the air, during World War II. Nowadays, radars are well-known systems with numerous applications, including detection, location, surveillance, telemetry and imaging.
In this chapter, we will outline the principles of Real Aperture Radar (RAR) and Synthetic Aperture Radar (SAR)1. Our approach is primarily based on traditional signal processing methods (matched filtering); other approaches such as the holographic approach (also called Fourier optics) yield similar results. In order to provide a better understanding of radar formalism, we will begin by analyzing the principles of surveillance and early imaging radars, paving the way for a simple overview of concepts such as antenna, ground swath and resolution. We will then continue with a discussion on the mathematical principles of SAR and finally geometric properties of images acquired by such a system.
Standard surveillance radars are typically found in airports or on ships. They make it possible to detect the presence of passive objects (called targets) by means of the echoes these send back in response to the emission of an electromagnetic pulse. They also make it possible to determine the positions of these objects (more precisely, their distance from the radar). Strictly speaking, they are not imaging radars, whose role would be to supply an image of the observed scene at a given moment. However, it is useful to review how they operate, if only to better understand the imaging radars derived from them. We will not go into detail here beyond their basic principle, which has not changed in over 50 years, even though modern radars are quite sophisticated (color displays, moving-target tracking, map overlays and so forth).
Figure 2.1.Surveillance radar: beam geometry depends on the geometric features of the antenna
A surveillance radar consists of an antenna that can rotate along the vertical axis Oz (see Figure 2.1) and a transmitter/receiver system of quasi-monochromatic microwave signals of wavelength λ. The antenna is generally long (dimension L along Oy) and not very tall (dimension l along Oz), so that L > l. By assimilating it to an evenly illuminated rectangular aperture, we can deduce its directivity [GOO 72]. Using the Fraunhofer approximation2, largely valid in this case, we can write the field U observed at a point of a plane lying at range D from the aperture and orthogonal to the boresight axis as:
The properties and shape of a sinc function are well known: the shape of the main lobe and the sidelobe locations are provided in Table 2.1 and illustrated in Figure 2.2. The sidelobes of a sinc function, with the first one at –13 dB, are always a problem in image formation, as they create artifacts on the image. In practice, however, an apodization can be performed on the radiating elements of an antenna so that the sidelobes are lowered at the expense of a broadening of the main lobe. This especially applies to slotted waveguide antennae (such as those of ERS radars and RADARSAT). In the case of active modular antennae, known as MMIC (Monolithic Microwave Integrated Circuit), energy balance constraints are such that every radiating element emits maximum energy (i.e., without apodization), even though double weighting will then be necessary on reception. Higher energy can be emitted in this way, while the antenna pattern (resulting from the transmitting and receiving patterns) remains satisfactory.
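The -13 dB first sidelobe quoted above can be checked numerically. The sketch below assumes a uniformly illuminated aperture; the antenna length and wavelength are illustrative, not tied to any particular system:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def pattern_db(L, wavelength, theta):
    """One-way power pattern (dB) of a uniformly illuminated aperture of length L."""
    a = sinc(L / wavelength * math.sin(theta))
    return 20 * math.log10(abs(a))

# the first sidelobe maximum of sinc sits near x = 1.4303
L, lam = 10.0, 0.056           # e.g. a 10 m antenna at C band (illustrative)
theta = math.asin(1.4303 * lam / L)
print(round(pattern_db(L, lam, theta), 1))   # -13.3
```

Apodization, as mentioned above, trades this sidelobe level against a broader main lobe; with uniform illumination the -13.3 dB figure is unavoidable.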
Figure 2.2.Diffraction by a rectangular antenna: energy distribution is the product of two sinc functions – a narrow one in azimuth and a wider one in range
The radiation pattern is also conventionally3 characterized by the aperture angle at -3 dB: θ3dB ~ λ/L
Even as the antenna rotates along the vertical axis, it also emits pulses in short regular succession, each time illuminating an angular sector as narrow as possible. Following the transmission of each pulse, the radar switches into receiving mode. Any target (airplane, ship or any obstacle) located within the illuminated sector returns part of the energy it has received, and the radar detects this echo. The time tAR it takes the wave to travel to and from the target at the speed of light c gives the distance separating the antenna from the target: D = c tAR / 2.
A radar scans the horizon (azimuth) surrounding itself and returns an image of the targets on an image scope. This image has two independent dimensions: range and azimuth, as shown in Figure 2.3.
A radar’s resolution is linked to its ability to distinguish clearly in both range and azimuth between two adjacent targets.
Figure 2.3.Surveillance radar: image scope
Figure 2.4.Range resolution of a radar
Range resolution rdist depends on the duration τ of the transmitted pulse (see Figure 2.4). If the pulse is very short, the radar will receive two distinct echoes from two neighboring targets (at a distance d from each other), i.e., it will be able to distinguish between them. On the contrary, if τ > 2d/c, the echoes of the two targets will be mixed up. We will thus have: rdist = cτ/2.
Note that the wave travels the distance twice (to and from the target), which explains the factor 2 in these formulae.
Figure 2.5.Range and azimuth resolutions of a detection radar
Azimuth resolution raz (see Figure 2.5) is determined by the antenna pattern (conventionally, by the angular aperture at -3 dB) and is proportional to range: raz ≈ θ3dB D ≈ λD/L.
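These two relations can be summarized in a few lines of Python. The numerical values are illustrative; raz here is the real-aperture resolution, before any synthetic-aperture processing:

```python
C = 3.0e8  # speed of light, m/s

def range_resolution(tau):
    """r_dist = c*tau/2: two targets closer than this merge into one echo."""
    return C * tau / 2

def azimuth_resolution(wavelength, L, D):
    """r_az ~ theta_3dB * D ~ (lambda/L) * D: grows linearly with range."""
    return wavelength / L * D

# illustrative: 0.1 us pulse, 5.6 cm wavelength, 10 m antenna, 100 km range
print(range_resolution(1e-7))                 # 15.0 (m)
print(azimuth_resolution(0.056, 10.0, 1e5))   # 560.0 (m)
```

The contrast between the two figures (meters in range, hundreds of meters in azimuth at long range) is precisely what motivates the synthetic aperture principle introduced below.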
Unlike surveillance radars, in which image formation depends on the rotation of the system, the image obtained by imaging radar is associated with the movement of a platform bearing the side-looking antenna. Each pulse illuminates a strip of land as narrow as possible (Figure 2.6).
This is the operating principle of the first imaging radars, known as side-looking airborne radars (SLAR). This type of radar was widely used for many years, largely for military cartography purposes.
The ground swath that determines the image width depends on the ranges d1 and d2 (Figure 2.6), which in turn depend on the times t1, when recording of the echo begins, and t2, when the recording ends. The distance corresponding to the nearest edge of the image is called the near range, whereas the distance corresponding to the opposite edge is called the far range.
It is important to note the following points:
– As a radar deals with range information, side-looking is required. Indeed, if we illuminated the ground vertically, we would always have two points located at the same distance, one on each side of the track (Figure 2.7). As a result, the image would fold onto itself, with points located right and left of the track mixing together.
Figure 2.6. Airborne side-looking imaging radar: ranges d1 and d2
Figure 2.7.Airborne imaging radar: vertical looking and side looking. Vertical looking leads to image folding
– Although imaging radars produce a rectangular image (rather than a circular one, as supplied by surveillance radars), French radar operators still use the familiar notions of range and azimuth (see Figure 2.8). English-speaking radar operators, by contrast, prefer the terms cross track and along track, which are more appropriate.
Figure 2.8.Left: range and azimuth axes of an airborne imaging radar; right: side-looking radar resolution
– Range resolution, just as above, will be proportional to the emitted pulse width τ.
– Pulse repetition frequency (PRF) has to be adapted to this resolution and to the platform speed v; it is such that the radar travels a distance v/PRF along its trajectory between two pulse transmissions.
To improve resolution, as seen in Chapter 1, the antenna has to be lengthened. Since this cannot be done physically, a virtual solution is needed. The American Carl Wiley first had the idea, in 1951, of using platform movement and signal coherence to reconstruct a large antenna by calculation. As the radar moves between two pulse transmissions, it is indeed possible to combine all of the echoes in phase and synthesize a very large antenna array. This is the principle of synthetic aperture radar, the mathematics of which will be discussed at length in section 2.2.3.
Such signal reconstruction depends on the precise knowledge of the platform trajectory. In the case of airborne radars, it is necessary to take into account every possible movement of the aircraft, the path of which is hardly straight and smooth, but quite influenced by yaw, roll and pitch.
Figure 2.9.The synthetic aperture principle: all along its path, the radar acquires a series of images that are combined by post processing. The final image looks like an image acquired by an antenna that is the sum of all the basic antennae
A radar can be modeled by the so-called radar equation, which links the received power to the transmitted power in the presence of a target characterized by its RCS σ° (see section 1.2.2.2). In the monostatic case, i.e., where the emitting and receiving antennae are the same (as they usually are in radars), we have:
[2.1]  Pr = Pe G² λ² σ° a / ((4π)³ D⁴)
where:
Pr : received power
Pe : transmitted power
G : antenna gain
λ : wavelength
a : losses related to absorption in the propagation medium
D : range between antenna and target
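The dominant feature of this equation is the 1/D⁴ spreading loss, which the following sketch makes explicit (all numerical values are illustrative):

```python
import math

def received_power(Pe, G, lam, sigma, D, a=1.0):
    """Monostatic radar equation: received power falls off as 1/D**4."""
    return Pe * G**2 * lam**2 * sigma * a / ((4 * math.pi)**3 * D**4)

# doubling the range divides the received power by 2**4 = 16
p1 = received_power(Pe=1e3, G=1e4, lam=0.056, sigma=1.0, D=1e5)
p2 = received_power(Pe=1e3, G=1e4, lam=0.056, sigma=1.0, D=2e5)
print(round(p1 / p2))   # 16
```

This steep range dependence is why spaceborne radars, operating at several hundred kilometers, need both high transmitted power and long integration.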
Table 2.2. The various frequency bands, including their notations and corresponding wavelength ranges (on which, however, there is no consensus, so that considerably different values are sometimes cited in other works)

Band   Frequencies          Wavelengths
P      0.225–0.390 GHz      133–76.9 cm
L      0.39–1.55 GHz        76.9–19.3 cm
S      1.55–4.20 GHz        19.3–7.1 cm
C      4.20–5.75 GHz        7.1–5.2 cm
X      5.75–10.90 GHz       5.2–2.7 cm
Ku     10.90–22.0 GHz       2.7–1.36 cm
Ka     22.0–36 GHz          1.36–0.83 cm
The D⁴ term corresponds to the geometric attenuation (see section 1.1.1.2) over the path traveled by the pulse from antenna to target (D²) and back (D²). The RCS is a rather complex function taking into account the dimensions (area) and the dielectric constants of the scatterer material4, and depends on the frequency and polarization of the incident wave.
While the radar equation relates transmitted and received wave powers, any noise, whether external (radiation, Sun, etc.) or internal (circuit thermal noise, etc.), must also be taken into account when the received signals are analyzed and processed. Consequently, to assess the performance of a radar, we need to estimate the signal-to-noise ratio (SNR), which takes more factors into consideration than those mentioned above. These are generally grouped into two variables: the noise temperature T, which represents all noise inputs as a single entity measured in kelvin, and the bandwidth B. It can be shown that:
This is a very important relationship and it shows that the SNR is better for a narrow bandwidth signal than for a broad bandwidth signal.
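Assuming the usual thermal-noise form SNR = Pr / (kTB), with k Boltzmann's constant, this bandwidth dependence is easy to verify numerically (the power and temperature values are illustrative):

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K

def snr(Pr, T, B):
    """SNR = Pr / (k*T*B): the noise power is proportional to the bandwidth B."""
    return Pr / (K_B * T * B)

# for a fixed received power, halving the bandwidth doubles the SNR
ratio = snr(1e-12, 290.0, 1e6) / snr(1e-12, 290.0, 2e6)
print(round(ratio, 6))   # 2.0
```

This is the tension pulse compression resolves: fine range resolution demands a broad bandwidth, while SNR favors a narrow one.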
Since the traveling wave tubes (TWT) commonly used on board satellites do not allow very short transmissions, separation capability cannot be improved by acting directly on this parameter. However, pulse compression makes it possible to work with longer pulses and still obtain fine resolution, through a linear frequency modulation of the signal, as will be shown in section 2.2.1.
Radar-used wavelengths cover various bands corresponding to the frequency ranges and wavelengths shown in Table 2.2.
In space-based remote sensing, we have to take into account a wave’s capability to travel across the atmosphere. The shortest wavelengths (Ka, Ku) are strongly attenuated in the lower layers of neutral atmosphere (troposphere). Long wavelengths (P) in turn are subject to strong scattering while passing through the ionosphere (layer F). The intermediate bands (X, C, S, and L) are therefore the most widely used. Selecting one of them essentially depends on the main mission (L-band: scientific mission, biomass estimation and bio-geophysical parameters, ground penetration, polarimetry – X band: high resolution, cartography, detection) and technological limitations, including the radar’s size and antenna length (which are proportional to wavelength) active module efficiency, etc. The C-band (ERS, RADARSAT) offers an acceptable tradeoff for all applications. Future missions will most likely tend to use two-frequency systems, with the two bands being as far apart as possible (X+L).
We present in this section the principle of pulse compression, related to range resolution, and the principle of SAR synthesis, which is linked to azimuth resolution. For the sake of clarity, simplified assumptions have been made. The obtained results (such as the relations providing the radial and azimuth resolutions) should therefore be viewed as indicating the order of magnitude of the problem rather than as exact results.
In order to better understand the principles of SAR, let us first consider a non-moving radar at a point P that emits an almost sinusoidal, frequency-modulated signal (such signals are widely known as “chirps”), centered on frequency fc and represented by the complex quantity A(t):
[2.2]  A(t) = exp[2jπ(fc t + (K/2) t²)],  |t| ≤ τ/2
We assume that at range Dc there is a target backscattering with RCS σ°. We then receive on the radar a signal vr(t):
where tc = 2Dc/c. To be able to analyze the received signal, we filter it using a filter matched to the emitted signal A(t). The result is a detected signal gtc(t):
where t' verifies the double constraint that:
We address here the case where t > tc (the other case is treated analogously), so we can write:
Considering that Kτ ≫ 1 and setting u = 2πK(t − tc), we obtain:
The general expression of the detected signal, i.e., for t∈[tc - τ,tc + τ] is written:
[2.3]
where
The argument of the sinc function depends on t through u and Ts(t). In fact, for the common values of K and τ, in an area Vtc around tc we need only take u into account, since Ts varies very slowly and can be reasonably well approximated by the constant value Ts(tc). In this case equation [2.3] becomes:
[2.4]
Resolution, i.e., the ability to distinguish between two targets, is obtained from the width of the main lobe of this sinc function.
Table 2.3.Range resolution: for each space resolution criterion, the necessary time resolution is provided either for the case of a frequency modulated wave (chirp) or for the case of a purely sinusoidal wave limited by the triangular window Ts(t) alone
We also indicate the width of the triangular window that would lead to this same resolution if the transmitted signal were a pure frequency. As can be seen, for equal pulse widths, the pulse compression principle (here applied using a chirp) improves the resolution by a factor of the order of the time–bandwidth product Kτ².
We will use the lobe width at 3.92 dB as a reference hereafter, which results in a range resolution: rdist = c/(2Kτ).
It is important to note that range resolution is independent of the frequency fc and depends solely on the bandwidth Kτ of the transmitted signal.
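A small numerical experiment illustrates pulse compression. This is only a sketch (baseband chirp, brute-force correlation, arbitrary K and τ), not the processing chain of any real system:

```python
import math, cmath

def chirp(t, K, tau):
    """Baseband chirp of duration tau and modulation rate K (bandwidth B = K*tau)."""
    return cmath.exp(1j * math.pi * K * t * t) if abs(t) <= tau / 2 else 0j

def matched_output(t, K, tau, n=2000):
    """Magnitude of the correlation of the chirp with its own replica at lag t."""
    dt = tau / n
    acc = 0j
    for i in range(n + 1):
        u = -tau / 2 + i * dt
        acc += chirp(u + t, K, tau) * chirp(u, K, tau).conjugate() * dt
    return abs(acc)

K, tau = 4.0e11, 1.0e-5          # illustrative values: bandwidth B = K*tau = 4 MHz
peak = matched_output(0.0, K, tau)              # full pulse energy at zero lag
null = matched_output(1.0 / (K * tau), K, tau)  # one compressed cell (1/B) away
print(null / peak < 0.1)   # True
```

Although the transmitted pulse lasts τ = 10 µs, the matched-filter output collapses into a peak about 1/(Kτ) = 0.25 µs wide, a compression by Kτ² = 40 in this example.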
In order to better analyze the resolution characteristics of a SAR, we substitute the variable in equation [2.2]. We then have:
where and
For a target at range Dc, equation [2.4] is written as:
Application to the ERS-1 satellite
In the case of ERS-1, we have the following numerical values.
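For orientation, the figures commonly quoted in the literature for the ERS-1 AMI instrument can be used (treat them as indicative; the exact values retained in the text may differ slightly):

```python
C_LIGHT = 2.998e8   # speed of light, m/s

# parameter values commonly quoted for the ERS-1 AMI radar (indicative only)
f_c = 5.3e9         # carrier frequency: C band
B   = 15.55e6       # chirp bandwidth K*tau, Hz
tau = 37.1e-6       # transmitted pulse duration, s

slant_res    = C_LIGHT / (2 * B)   # range resolution after pulse compression
uncompressed = C_LIGHT * tau / 2   # what the raw 37.1 us pulse alone would give
tb_product   = B * tau             # time-bandwidth (compression) gain

print(round(slant_res, 1))             # 9.6  (m)
print(round(uncompressed / 1000, 1))   # 5.6  (km)
print(round(tb_product))               # 577
```

The comparison is striking: without compression the pulse would smear each target over several kilometers in range, while the chirp brings the slant-range resolution down to about ten meters.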
In the radar geometry reference frame (distance reference frame), where:
