The classic text covering practical image processing methods and theory for image texture analysis, now in an updated second edition
The revised second edition of Image Processing: Dealing with Textures updates the theory and methods of texture analysis without abandoning the foundational essentials of this landmark work. Like the first edition, it offers an analysis of texture in digital images that is essential to a diverse range of applications, such as robotics, defense, medicine and the geosciences.
Designed so that information on specific problems is easy to locate, the text is structured around a series of helpful questions and answers. Updated to include the most recent developments in the field, many chapters have been completely revised, including: Fractals and Multifractals, Image Statistics, Texture Repair, Local Phase Features, the Dual Tree Complex Wavelet Transform, Ridgelets and Curvelets, and Deep Texture Features. The book takes a two-level mathematical approach: the lighter mathematics is covered in the main text, while the harder mathematics is set apart in separate boxes. This important text:
Contains an update of the classic advanced text that reviews practical image processing methods and theory for image texture analysis
Puts the focus exclusively on an in-depth exploration of texture
Contains a companion website with exercises and algorithms
Includes fully worked examples to enhance the learning experience
Written for students and researchers of image processing, this second edition of Image Processing has been revised and updated to combine the foundational material of the topic with coverage of the latest advances.
Page count: 1127
Publication year: 2021
Cover
Title Page
Copyright
Preface to the Second Edition
Preface to the First Edition
Acknowledgements
About the Companion Website
1 Introduction
2 Binary Textures
2.1 Shape Grammars
2.2 Boolean Models
2.3 Mathematical Morphology
3 Stationary Grey Texture Images
3.1 Image Binarisation
3.2 Grey Scale Mathematical Morphology
3.3 Fractals and Multifractals
3.4 Image Statistics
3.5 Texture Features from the Fourier Transform
3.6 Markov Random Fields
3.7 Gibbs Distributions
3.8 Texture Repair
4 Non‐stationary Grey Texture Images
4.1 The Uncertainty Principle and its Implications in Signal and Image Processing
4.2 Gabor Functions
4.3 Prolate Spheroidal Sequence Functions
4.4 Local Phase Features
4.5 Wavelets
4.6 The Dual Tree Complex Wavelet Transform
4.7 Ridgelets and Curvelets
4.8 Where Image Processing and Pattern Recognition Meet
4.9 Laws' Masks and the “What Looks Like Where” Space
4.10 Local Binary Patterns
4.11 The Wigner Distribution
4.12 Convolutional Neural Networks for Texture Feature Extraction
Bibliographical Notes
References
Index
End User License Agreement
Chapter 2
Table 2.1 Pairs … when … is the distribution function of a Gaussian probabilit...
Table 2.2 The numbers on the left were drawn from a uniform probability densi...
Table 2.3 Perimeter and area of digital and continuous circles. The perimeter...
Table 2.4 Numerical values measured from the image for the aggregate paramete...
Table 2.5 Estimation of the aggregate and individual parameters of the images...
Table 2.6 Estimated parameters for the image in Figure 2.24b, assuming a 2D B...
Table 2.7 Aggregate parameters … and … and individual parameter … estimated fr...
Table 2.8 Aggregate parameters … and … and individual parameter … estimated fr...
Chapter 3
Table 3.1 Pixels with grey values in the ranges on the right are flagged in t...
Table 3.2 The last row shows the values of the various generalised fractal di...
Table 3.3 The numbers of each column have been fitted with the least squares ...
Table 3.4 The local connected fractal dimension of the pixels of Figure 3.41.
Table 3.5 Values of … and … estimated from the log–log plots of the average sq...
Table 3.6 The fractal dimension computed for the textures of Figure 3.1 using...
Table 3.7 The fractal dimensions and correlation coefficients with each ….
Table 3.8 The fractal dimensions for four different directions in the spectra...
Table 3.9 The multifractal spectra for the images using as measure the contra...
Table 3.10 The multifractal spectra for the images using as measure the sum o...
Table 3.11 The five point sets from the matrix of alpha values.
Table 3.12 The five point sets from the matrix of alpha values.
Table 3.13 The number of pixels with each one of the grey values and the rank...
Table 3.14 Number of pairs of samples with certain absolute difference in val...
Table 3.15 The range and sill of the variograms of images 3.1, as well as the...
Table 3.16 Features derived from the profiles of the autocorrelation function...
Table 3.17 Fitting the fractal model to the variograms of the texture images ...
Table 3.18 Best fitting of the fractal model to the variograms of the texture...
Table 3.19 The coordinates of the pixels that make up the digital circles in ...
Table 3.20 The coordinates of the pixels that will appear after the centre of...
Table 3.21 Contrast value for the four main directions for the image ...
Table 3.22 Contrast values for … different directions for the image in Figure ...
Table 3.23 The five strongest peaks in the signatures of the spectra of the i...
Table 3.24 The five strongest peaks in the signatures of the spectra of the i...
Table 3.25 Second column: values of function …. Third column: a Gaussian windo...
Table 3.26 Values of the power spectra of …, …, and …. … is a Gaussian mask th...
Table 3.27 The results of the fractal dimension … by calculating ….
Table 3.28 The data used for the estimation of the best set of Markov paramet...
Table 3.29 The data used for the estimation of the best set of Markov paramet...
Table 3.30 Parameters estimated for the textures in Figure 3.1 using the MLE ...
Table 3.31 Parameters estimated for the textures in Figure 3.1 using the maxi...
Table 3.32 Estimated MRF parameters using the LSE method for the images in Fi...
Table 3.33 The number of … horizontal pairs in the configurations of Figure 3....
Table 3.34 The number of … vertical pairs in the configurations of Figure 3.14...
Table 3.35 The number of … horizontal pairs in the configurations of Figure 3....
Table 3.36 The number of … vertical pairs in the configurations of Figure 3.14...
Table 3.37 The number of … horizontal pairs in the configurations of Figure 3....
Table 3.38 The number of … vertical pairs in the configurations of Figure 3.14...
Table 3.39 The number of … horizontal pairs in the configurations of Figure 3....
Table 3.40 The number of … vertical pairs in the configurations of Figure 3.14...
Table 3.41 The number of single pixels in each configuration of Figure 3.148 ...
Table 3.42 The number of single pixels with value … in each configuration prod...
Table 3.43 The number of single pixels with value … in each configuration prod...
Table 3.44 The number of single pixels with value … in each configuration prod...
Table 3.45 Most probable configurations for different sets of clique potentia...
Table 3.46 Values of probability … for the most probable configurations of exa...
Table 3.47 Number of iterations required for the greedy algorithm to converge...
Table 3.48 Value of … for number of grey levels ….
Table 3.49 Average grey value of each configuration of Figure 3.148.
Table 3.50 Average grey value of each configuration of Figure 3.148 if the tw...
Table 3.51 Average grey value of each configuration of Figure 3.148 if the tw...
Table 3.52 Average grey value of each configuration of Figure 3.148 if the tw...
Table 3.53 Probability of each configuration in Figure 3.148 arising, when ex...
Table 3.54 Probability of each configuration in Figure 3.148 arising if the t...
Table 3.55 Probability of each configuration in Figure 3.148 arising if the t...
Table 3.56 Probability of each configuration in Figure 3.148 arising if the t...
Table 3.57 Probability of having an image with average grey level … for differ...
Table 3.58 Free energy … for different structures of the configuration space....
Table 3.59 All possible grey values … of a pixel, their corresponding probabil...
Chapter 4
Table 4.2 Orientations of the angular bands of the Gaussian masks.
Table 4.1 Values of …, …, …, and … used to segment the image in Figure 4.33.
Table 4.3 Reconstruction errors for the three studied cases: complete Gaussia...
Table 4.4 Normalised values obtained for the low, band and high pass filters.
Table 4.5 If we use these ranges of indices in Equation 4.248 with … and the v...
Table 4.6 Some commonly used scaling filters. These filters are appropriate f...
Table 4.7 Two sets of wavelet filters that can be used in the dual tree compl...
Table 4.8 Classification results at ILSVRC (http://www.image-net.org/).
Table 4.9 AND gate in two dimensional data.
Table 4.10 Perceptron training process (AND gate).
Table 4.11 OR gate in two‐dimensional data.
Table 4.12 OR training process.
Table 4.13 The step by step calculation of the gradient method.
Table 4.14 XOR problem.
Table 4.15 The step by step calculation of three‐layer NN(1).
Table 4.16 The step by step calculation of three‐layer NN(2).
Table 4.17 Traditional methods for the development of texture analysis.
Table 4.18 Time‐line of deep‐learning based methods.
Table 4.19 The coordinates of the pixels that make up the digital circle acco...
Table 4.20 Results compared with other methods.
Chapter 1
Figure 1.1 Costas in bloom. Source: Maria Petrou.
Figure 1.2 (a) An original image. Source: Maria Petrou. (b) Manually extract...
Figure 1.3 (a) Blewbury from an aeroplane (size …). Source: Maria Petrou. (b...
Figure 1.4 (a) When a scanning window of size … was placed in Figure 1.3b wi...
Figure 1.5 A surface imaged from two different distances may create two very...
Figure 1.6 A surface imaged under different illumination directions may give...
Figure 1.7 (a) Original image of size … showing a town. Source: Maria Petrou...
Figure 1.8 Each tile in this figure represents a pixel. The black tiles repr...
Figure 1.9 (a) An original image of size …, showing the cross section of the...
Chapter 2
Figure 2.1 We can easily recognise the depicted objects, even from a binary ...
Figure 2.2 Four binary textures: which one is different from the other three...
Figure 2.3 (a) A random texture. (b) A texture with a regular primitive patt...
Figure 2.4 An example of a regular pattern.
Figure 2.5 There may be more than one primitive pattern that may be used to ...
Figure 2.6 The rules of a grammar that may be used to produce the texture of...
Figure 2.7 Successive application of the rules of Figure 2.6 allows us to re...
Figure 2.8 The rules of a grammar that may be used to characterise the textu...
Figure 2.9 Successive application of the rules of Figure 2.8 allows us to re...
Figure 2.10 The rules of a grammar that may be used to characterise the text...
Figure 2.11 Successive application of the rules of Figure 2.10 allows us to ...
Figure 2.12 An alternative pattern that may be produced by the successive ap...
Figure 2.13 Two semi‐stochastic textures.
Figure 2.14 A pattern that was produced by the application of the rules of F...
Figure 2.15 A one-to-one relationship between … and ….
Figure 2.16 A Gaussian probability density function with … and … and the his...
Figure 2.17 2D Boolean patterns created for different values of parameter ...
Figure 2.18 2D Boolean patterns created for different values of parameter ...
Figure 2.19 (a) A … binary image. (b) The boundary pixels of the image. These...
Figure 2.20 (a) Some digital circles. (b) Boundary pixels that share a commo...
Figure 2.21 A grain at distance … from the origin of the axes will have radi...
Figure 2.22 A circular image of radius … with two grains of radii ….
Figure 2.23 Estimating the aggregate parameters of a 2D Boolean model. (a) A...
Figure 2.24 Two semi‐stochastic textures.
Figure 2.25 The original image and two textures created using the 2D Boolean...
Figure 2.26 (a) An original binary image. The individual strings we can crea...
Figure 2.27 The first few steps of drawing a Hilbert curve.
Figure 2.28 At the top, an original binary image. In the middle, the individ...
Figure 2.29 (a) A raster scanning line that reads the rows of the image sequ...
Figure 2.30 An original image and the strings one can create from it by foll...
Figure 2.31 Function … estimated from image 2.24b using the scanning methods...
Figure 2.32 Function … estimated from image 2.24b using the image rows (a), ...
Figure 2.33 An … binary image and the string created from its pixels by read...
Figure 2.34 Some example structuring elements. The crossed lines mark the ce...
Figure 2.35 The amorous hippopotamus.
Figure 2.36 (a) The amorous hippopotamus about to be dilated with the struct...
Figure 2.37 (a) The amorous hippopotamus about to be eroded with the structu...
Figure 2.38 (a) The amorous hippopotamus about to be opened. All white pixel...
Figure 2.39 (a) The amorous hippopotamus about to be closed. All white pixel...
Figure 2.40 (a) The amorous hippopotamus about to be dilated with the struct...
Figure 2.41 (a) The amorous hippopotamus about to be eroded with the structu...
Figure 2.42 (a) The amorous hippopotamus about to be opened with the structu...
Figure 2.43 (a) The amorous hippopotamus about to be closed with the structu...
Figure 2.44 The object of Figure 2.34a dilated with the structuring element ...
Figure 2.45 (a) The amorous hippopotamus after his dilation with structuring...
Figure 2.46 (a) The amorous hippopotamus after his erosion with structuring ...
Figure 2.47 The image in (a) is dilated with structuring element 2.34c, to p...
Figure 2.48 (a) The amorous hippopotamus about to be dilated with structurin...
Figure 2.49 The dilated amorous hippopotamus of Figure 2.48b about to be dil...
Figure 2.50 (a) The amorous hippopotamus about to be eroded with structuring...
Figure 2.51 The eroded amorous hippopotamus of Figure 2.50b about to be erod...
Figure 2.52 Some openings of image 2.1 with structuring elements of size (a)...
Figure 2.53 The pattern spectrum for image 2.1.
Figure 2.54 (a) The original image of an object. (b) The … structuring eleme...
Figure 2.55 If the … neighbourhood of a pixel looks like any one of these co...
Figure 2.56 (a) The original image. (b) The delineated pixels will be remove...
Figure 2.57 Pairs of elements that should be used to produce the skeleton of...
Figure 2.58 Panels (e1) and (e2) are the hit‐or‐miss transforms of images (a...
Figure 2.59 Images (a1) and (a2) eroded with structuring elements (f1) and (...
Figure 2.60 Panels (b1) and (b2) are the complements of (a1) and (a2) and th...
Figure 2.61 Panels (e1) and (e2) are the hit‐or‐miss transforms that mark th...
Figure 2.62 The successive thinning stages of the second cycle of applying t...
Figure 2.63 Applying successively the elements of Figure 2.57 in turn to int...
Chapter 3
Figure 3.1 Some stationary grey texture images. Images (a), (c), (e), (f) an...
Figure 3.2 Splitting a grey image into a set of binary images by thresholdin...
Figure 3.3 Splitting an image into a set of binary images all of which have ...
Figure 3.4 Splitting an image into a set of binary images after histogram eq...
Figure 3.5 Splitting an image into a set of binary images by bit‐slicing. (a...
Figure 3.6 A binary sequence A may be dilated with the asymmetric structurin...
Figure 3.7 (a) A grey image. The highlighted … frame indicates the structuri...
Figure 3.8 (a) The original signal. (b) The structuring element. (c) The top...
Figure 3.9 (a) The original signal. (b) The flat structuring element of size...
Figure 3.10 (a) The original image. (b) The image eroded by subtracting at e...
Figure 3.11 Underneath each panel one can see which panels of Figure 3.10 we...
Figure 3.12 Underneath each panel one can see which panels of Figure 3.10 we...
Figure 3.13 Underneath each panel one can see which panels of Figure 3.10 we...
Figure 3.14 (a) Structuring element 3.10d with the bias removed. (b) Erosion...
Figure 3.15 (a) The signal of Figure 3.8a eroded with a flat structuring ele...
Figure 3.16 (a) The signal of Figure 3.8a dilated by a flat structuring elem...
Figure 3.17 (a) The signal of Figure 3.8a dilated and eroded by a flat struc...
Figure 3.18 Enlargement of the image in order to deal with boundary effects ...
Figure 3.19 Morphological operations applied to the image of Figure 3.1a wit...
Figure 3.20 Pattern spectra for some images of Figure 3.1. They may be used ...
Figure 3.21 To create a von Koch snowflake curve, start with a straight line...
Figure 3.22 To create a fractal surface from a Sierpinski triangle, start wi...
Figure 3.23 The line segments in (b) are assumed to be rigid measuring rods ...
Figure 3.24 The box‐counting method for the computation of the fractal dimen...
Figure 3.25 Calculating the fractal dimension of a circle by successive divi...
Figure 3.26 (a) The first stage of constructing the fractal surface of figur...
Figure 3.27 (a) The regular tetrahedron with which we replace triangle … of ...
Figure 3.28 The cross‐section of the surface created at the first stage of t...
Figure 3.29 The cross‐section of the surface created at the first four stage...
Figure 3.30 The first four stages of creating the curve of Example 3.21 and ...
Figure 3.31 The first bit of the indices of the pixels in each quadrant of a...
Figure 3.32 The largest font is used to indicate the first bit of the index ...
Figure 3.33 A … 2-bit image.
Figure 3.34 Treating the image as a landscape, we imagine that it exists in ...
Figure 3.35 A ganglion cell.
Figure 3.36 We cover the image with boxes of decreasing size and count the n...
Figure 3.37 The number of object pixels inside each box.
Figure 3.38 The multifractal spectrum of image 3.35.
Figure 3.39 A … area of the image in Figure 2.1. Source: Maria Petrou.
Figure 3.40 The multifractal spectrum of image 3.39.
Figure 3.41 (a) An original binary image. (b)–(f) The thick black frames ind...
Figure 3.42 The histogram of the local connected fractal dimensions, compute...
Figure 3.43 The histogram of the local connected fractal dimensions for imag...
Figure 3.44 Fractal dimension and self‐affinity.
Figure 3.45 Average square differences versus … for the textures of Figure 3...
Figure 3.46 The value of …, which appears in 3.114, is constant along each d...
Figure 3.47 Synthesising 1D fractals from a sequence of white noise with ....
Figure 3.48 Synthesising 2D fractals of size …. Source: Maria Petrou.
Figure 3.49 The pairs of points … fitted with a straight line for each of th...
Figure 3.50 A …-bit image.
Figure 3.51 Lacunarity values using … windows displayed as images. (a) Food....
Figure 3.52 Histograms of the lacunarity values. (a) Food. (b) Plastic. Both...
Figure 3.53 Synthetic 1D multifractal
Figure 3.54 Synthetic 1D multifractal using a different random sequence at e...
Figure 3.55 Synthetic 2D multifractals. Source: Maria Petrou.
Figure 3.56 Synthetic 2D multifractals using a different random sequence at ...
Figure 3.57 The matrix of alpha values.
Figure 3.58 The five point sets from the matrix of alpha values.
Figure 3.59 The matrix of alpha values. Source: Maria Petrou.
Figure 3.60 The five point sets from the matrix of alpha values. Source: Mar...
Figure 3.61 Construction of a binomial multifractal distribution with … bins...
Figure 3.62 The first two stages of creating a 2D binomial multifractal.
Figure 3.63 Creating a 2D binomial multifractal image.
Figure 3.64 Raw (on the left) and smoothed (on the right) probability densit...
Figure 3.65 Histograms of the gradient magnitude for the textures of Figure ...
Figure 3.66 The orientations of the gradient vectors computed for each pixel...
Figure 3.67 (a) The orientation histogram of image 3.66. (b) The orientation...
Figure 3.68 The unit sphere and an infinitesimal surface element on it.
Figure 3.69 The generalisation of the Sobel filter to 3D. These are the thre...
Figure 3.70 The accumulator array we create in order to compute the 3D orien...
Figure 3.71 (a) A …-bit grey image. (b) Its rank-frequency plot.
Figure 3.72 A …-bit … image.
Figure 3.73 Each matrix is …, as there are four different grey values and th...
Figure 3.74 Variograms for the images of Figure 3.1. (a) Using the whole ima...
Figure 3.75 The black dots represent data values. The solid line represents ...
Figure 3.76 The variograms of the two images in Figure 3.1.
Figure 3.77 Distorted versions of the 2D fractal in Figure 3.48 for …. The o...
Figure 3.78 Normalised auto‐covariance function (the autocorrelation functio...
Figure 3.79 Normalised auto‐covariance (top) and autocorrelation functions (...
Figure 3.80 Marginals of the auto‐covariance function for images food and pl...
Figure 3.81 Marginals of the auto‐covariance function for images cloth 2 and...
Figure 3.82 Marginals of the auto‐covariance function for images fabric 1 an...
Figure 3.83 Plots of the Weibull distribution for several values of paramete...
Figure 3.84 Fitting the histograms of the gradient magnitude for the texture...
Figure 3.85 An image with three grey levels.
Figure 3.86 Top row, co‐occurrence matrices. Bottom row their normalised ver...
Figure 3.87 Digital circles with radii …, from top to bottom and left to rig...
Figure 3.88 Rotationally invariant co‐occurrence matrices for the image bean...
Figure 3.89 Directional co‐occurrence matrices for the image fabric 1. Sourc...
Figure 3.90 Power spectra for four of the textures of Figure 3.1. The spectr...
Figure 3.91 Power spectral signatures for fabric 1, fabric 2, beans and sand...
Figure 3.92 The phase spectra of the first three images of Figure 3.1 with t...
Figure 3.93 Computing the power spectrum of an image. (a) Original image. (b...
Figure 3.94 (a) A 1D digital signal. (b) The discrete Fourier transform of th...
Figure 3.95 The contour along which we are going to integrate.
Figure 3.96 The black dots indicate poles of the integrand. In (a) the sum o...
Figure 3.97 Continuous and discrete Fourier transforms of a Gaussian. (a) ....
Figure 3.98 Two digital signals and their corresponding power spectra. (a)
Figure 3.99 (a) A digital signal. (b) The power spectrum of the digital sign...
Figure 3.100 (a) Digital signal 3.322 repeated three times. (b) The same sig...
Figure 3.101 Computing the power spectrum of an image using a Gaussian windo...
Figure 3.102 Computing the fractal dimension of an image from its power spec...
Figure 3.103 (a) The shaded region indicates the range of indices … and … ov...
Figure 3.104 Real and imaginary parts of the inverse Fourier transform after...
Figure 3.105 Image reconstruction using a fixed value for its Fourier transf...
Figure 3.106 Image reconstruction using a fixed value for its Fourier transf...
Figure 3.107 Inverse Fourier transform using … and … for different values of...
Figure 3.108 (a) Imagine that you start from point … and you move along the ...
Figure 3.109 At the top the true phase of a signal. Below it, the phase we c...
Figure 3.110 Phase‐unwrapped sequences obtained for two horizontal lines of ...
Figure 3.111 Four of the possible paths that lead from pixel A to pixel Q.
Figure 3.112 All paths that lead from A to Q when we can only move from left...
Figure 3.113 A …-bit image may be written as the linear superposition of its...
Figure 3.114 The approximate representation of the original image.
Figure 3.115 The approximate representation of the original image. Source: M...
Figure 3.116 Some Markov neighbourhoods and the conditions that have to be s...
Figure 3.117 Some more exotic Markov neighbourhoods, where the neighbours th...
Figure 3.118 An empty grid. We have to choose grey values for these pixels t...
Figure 3.119 Four directional textures.
Figure 3.120 The parameters and pixels that influence the value of the centr...
Figure 3.121 Context allows us to determine the missing values.
Figure 3.122 Some Markov neighbourhoods. The numbers indicate the Markov par...
Figure 3.123 A random field created using a binomial distribution with … and...
Figure 3.124 Codings of a … image when the Markov neighbourhoods of figures ...
Figure 3.125 Creating textures using the Markov model with a binomial distri...
Figure 3.126 Creating textures using the Markov model with a binomial distri...
Figure 3.127 A random field created using the approximation of a binomial di...
Figure 3.128 Creating textures using the Markov model and approximating the ...
Figure 3.129 Two neighbourhoods with Markov parameters that retain the balan...
Figure 3.130 Creating textures using the Markov model and approximating the ...
Figure 3.131 Creating textures using the Markov model and approximating the ...
Figure 3.132 Creating textures using the Markov model with a normal distribu...
Figure 3.133 We have a different probability density function for each value...
Figure 3.134 An original texture and texture created using the parameters es...
Figure 3.135 (a) A …-bit … image. (b) The two codings keeping only the pixel...
Figure 3.136 Synthesised textures using the parameters estimated by the leas...
Figure 3.137 Synthesised textures using the parameters estimated by least sq...
Figure 3.138 The way we construct Gaussian and Laplacian pyramids. … is the ...
Figure 3.139 The frequency domain of an image, with … and … the frequencies ...
Figure 3.140 (a) A …-bit … image. (b) A smoothing Gaussian mask.
Figure 3.141 The smoothed and sub‐sampled images.
Figure 3.142 The smoothed image and sub‐sampled images.
Figure 3.143 The smoothed and sub‐sampled images.
Figure 3.144 The Laplacian pyramid.
Figure 3.145 The cliques that correspond to the Markov neighbourhoods shown ...
Figure 3.146 We mark with a different symbol the neighbourhood of each pixel...
Figure 3.147 The cliques that correspond to the Markov neighbourhoods shown ...
Figure 3.148 All possible binary … images we can have when the grey pixels c...
Figure 3.149 Results of the greedy algorithm applied to a starting configura...
Figure 3.150 Results of the greedy algorithm applied to random images with t...
Figure 3.151 Results of the greedy algorithm applied to starting configurati...
Figure 3.152 Results of the modified greedy algorithm applied to...
Figure 3.153 Results of the modified greedy algorithm applied to a random im...
Figure 3.154 Probability density function for having an image with average g...
Figure 3.155 Free energy … as a function of the mean grey value ….
Figure 3.156 (a) A texture binary image. The circles indicate pixels with gr...
Figure 3.157 All islands of perimeter … one can draw around a pixel.
Figure 3.158 On the left some configurations … with the same island of 0s in...
Figure 3.159 The damaged image to be reconstructed. Source: Maria Petrou.
Figure 3.160 Each position, encoded here with a letter, in this … neighbourh...
Figure 3.161 The feature map of co-occurrence matrix …. Source: Maria Petrou...
Figure 3.162 (a) An image patch that is to be repaired, by giving values to ...
Figure 3.163 A 2‐bit image that needs repair.
Figure 3.164 The first iteration of the in‐painting algorithm.
Figure 3.165 The second iteration of the in‐painting algorithm.
Figure 3.166 The third iteration of the in-painting algorithm.
Figure 3.167 Two low pass filters that may be used in normalised convolution...
Figure 3.168 (a) The image that is to be repaired with … values ...
Figure 3.169 The incomplete neighbourhood in (a) has to be compared with the...
Figure 3.170 (a) The distances from the centre of all positions of the neigh...
Chapter 4
Figure 4.1 A continuous function (a), and the real part (b), imaginary part ...
Figure 4.2 The effects of a rectangular window: windowed parts of the signal...
Figure 4.3 The effects of a rectangular window: windowed parts of the signal...
Figure 4.4 The effects of a Gaussian window: windowed parts of the signal (f...
Figure 4.5 The effects of a Gaussian window: windowed parts of the signal (f...
Figure 4.6 At the top an infinitely long signal with parts of it seen throug...
Figure 4.7 At the bottom an original signal with three different positions o...
Figure 4.8 Feature sequences obtained from the real (a) and the imaginary (b...
Figure 4.9 Feature sequences obtained from the magnitude of the short time F...
Figure 4.10 This figure shows a digital signal scanned by a window … samples...
Figure 4.11 Top: the signal of Equation 4.65. Bottom: an ideal feature that ...
Figure 4.12 Feature sequences obtained from the real (a) and imaginary (b) p...
Figure 4.13 (a) Feature sequences obtained from the magnitude of the short t...
Figure 4.14 Feature sequences obtained from the magnitude of the short Fouri...
Figure 4.15 Averaged feature sequences obtained from the magnitude of the sh...
Figure 4.16 The original signal of Figure 4.6 scanned by a narrow window. If...
Figure 4.17 Suppose that we are seeing a signal with fundamental frequency
Figure 4.18 Gabor function … may be used to create elementary signals in ter...
Figure 4.19 Gabor function … may be used to create elementary signals in ter...
Figure 4.20 (a) At the top the real and imaginary parts of …, the Fourier tr...
Figure 4.21 Inverse Fourier transform of all windowed Fourier transforms of ...
Figure 4.22 Gabor features produced by squaring and convolving the functions...
Figure 4.23 Gabor features produced by squaring and convolving the functions...
Figure 4.24 A schematic tessellation of the 2D frequency domain. The crosses...
Figure 4.25 A schematic representation of the analysis of an image into cont...
Figure 4.26 A polar coordinates‐based tessellation of the frequency domain. ...
Figure 4.27 A tessellation of the frequency domain where higher frequencies ...
Figure 4.28 The coordinates of point P in the large coordinate system are …,...
Figure 4.29 An octave is the interval between a frequency and its double. Th...
Figure 4.30 The ellipse represents an isocontour of a Gaussian window in the...
Figure 4.31 (a) When the number of azimuthal bands is even (e.g. …), pairing...
Figure 4.32 Function … for …, …, and …, from bottom to top respectively, whe...
Figure 4.33 (a) A Girona pavement to be segmented and (b) the magnitude of i...
Figure 4.34 Top row: magnitude of the Fourier transform of image 4.33a multi...
Figure 4.35 Top row: magnitude of the Fourier transform of image 4.33a multi...
Figure 4.36 Top row: magnitude of the Fourier transform of image 4.33a multi...
Figure 4.37 Top row: magnitude of the Fourier transform of image 4.33a multi...
Figure 4.38 Top row: magnitude of the Fourier transform of image 4.33a multi...
Figure 4.39 Energies of the 20 bands used in the Gabor expansion of the imag...
Figure 4.40 The central band containing the dc component of the image and th...
Figure 4.41 For each pair of panels: on the left the Gaussian filters that c...
Figure 4.42 For each pair of panels: on the left the Gaussian filters that c...
Figure 4.43 Reconstruction error as a function of the channels used for the ...
Figure 4.44 Energies of the … bands created by truncating the Gaussian masks...
Figure 4.45 For each pair of panels: on the left the truncated Gaussian filt...
Figure 4.46 For each pair of panels: on the left the truncated Gaussian filt...
Figure 4.47 Truncated Gaussian windows. The … and the corresponding local en...
Figure 4.48 Energies of the … bands into which the Fourier transform of the ...
Figure 4.49 (a) Reconstruction error as a function of the channels used for ...
Figure 4.50 For each pair of panels: on the left the flat filters that corre...
Figure 4.51 For each pair of panels: on the left the flat filters that corre...
Figure 4.52 Flat windows: The … and the corresponding local energy maps extr...
Figure 4.53 Top three lines: real part, imaginary part and energy of sequenc...
Figure 4.54 Top line: original signal. Next three lines: Fourier transform o...
Figure 4.55 We wish to have a filter with non‐zero values inside the two mar...
Figure 4.56 (a) Results after applying the filter obtained in Example 4.46 (...
Figure 4.57 (a) Results after applying the filter obtained in Example 4.51 (...
Figure 4.58 (a) Results after applying the filter obtained in Example 4.51 (...
Figure 4.59 Sequence … for the low pass convolution filter.
Figure 4.60 Sequence … for the band pass convolution filter.
Figure 4.61 The … sequence that corresponds to the high pass convolution fil...
Figure 4.62 (a) Results obtained for the segmentation of the signal defined ...
Figure 4.63 As Figure 4.62, but this figure concerns the band pass filter de...
Figure 4.64 (a) Results obtained for the segmentation of the signal defined ...
Figure 4.65 The squared values of …, shown as a grey image. Source: Maria Pe...
Figure 4.66 A tessellation of the 2D frequency domain of a … image into freq...
Figure 4.67 We shift the non‐zero values of array 4.269 in the grey area of ...
Figure 4.68 The tessellation of the 2D frequency domain of a … image shown i...
Figure 4.69 Left: tessellation of the 2D … image into sub-images of size …. ...
Figure 4.70 (a) Basis functions on which the signal shown in (b) can be proj...
Figure 4.71 An image consisting of two different textures. (a) The pixel val...
Figure 4.72 Magnitude of the outputs of the nine convolution filters designe...
Figure 4.73 Real part of the outputs of the nine convolution filters designe...
Figure 4.74 Imaginary part of the outputs of the nine convolution...
Figure 4.75 Averaged energies inside a window of size … of the outputs of th...
Figure 4.76 The numbers on the top and left of the grids identify the indice...
Figure 4.77 Real part of the nine filters designed. Each filter is of size
Figure 4.78 Imaginary part of the nine filters designed. Each filter is of s...
Figure 4.79 Real part of the … filters designed. Each filter is of size … an...
Figure 4.80 Imaginary part of the … filters designed. Each filter is of size...
Figure 4.81 The Girona pavement in size …. We wish to construct features tha...
Figure 4.82 Magnitude of the outputs of the nine‐filter bank. Source: Maria ...
Figure 4.83 Real part of the outputs of the nine‐filter bank. Source: Maria ...
Figure 4.84 Imaginary part of the outputs of the nine‐filter bank. Source: M...
Figure 4.85 Real part of the outputs of the 25‐filter bank. Source: Maria Pe...
Figure 4.86 Imaginary part of the outputs of the 25‐filter bank. Source: Mar...
Figure 4.87 Magnitude of the outputs of the 25‐filter bank. The two empty pa...
Figure 4.88 The last two panels of Figure 4.87. They correspond to low frequ...
Figure 4.89 (a) The Fourier transform of a symmetric band limited filter. (b...
Figure 4.90 A local window may isolate part of a signal. If the isolated seg...
Figure 4.91 An image that contains a symmetric and an antisymmetric feature ...
Figure 4.92 The negative of the second derivative of a Gaussian with …. In E...
Figure 4.93 The “what happens when” space. Consider a function that contains...
Figure 4.94 The “what happens when” space. As the sub‐space spanned by the s...
Figure 4.95 A wavelet vector for a 12‐sample long signal, for shifting param...
Figure 4.96 Wavelet and scaling vectors constructed from a six‐tap long filt...
Figure 4.97 A filter of odd length used to produce wavelet vectors for the d...
Figure 4.98 Schematic representation of the multi‐resolution analysis of a s...
Figure 4.99 Schematic representation of the multiresolution analysis of a si...
Figure 4.100 The tree wavelet analysis of a digital signal.
Figure 4.101 The packet wavelet analysis of a digital signal.
Figure 4.102 (a) Original signal. (b) Reconstructed signal after setting the...
Figure 4.103 The signal of Figure 4.102a reconstructed from only the stronge...
Figure 4.104 The packet wavelet representation of an image.
Figure 4.105 The tessellation of the frequency space that corresponds to the...
Figure 4.106 Zooming into a frequency band with the packet wavelet transform...
Figure 4.107 Tree wavelet representation of the image shown in Figure 4.81....
Figure 4.108 Packet wavelet representation of the image shown in Figure 4.81...
Figure 4.109 The structure tree for the texture image fabric 2.
Figure 4.110 The tessellation of the frequency space that corresponds to the...
Figure 4.111 Three levels of analysis with the maximum overlap algorithm. Th...
Figure 4.112 At the top, a band that has to be analysed. It is sub‐sampled i...
Figure 4.113 Maximum overlap structure tree of the image shown in Figure 4.8...
Figure 4.114 Averaged energies of the coefficients of the maximum overlap an...
Figure 4.115 The maximum overlap structure tree for the texture image food. ...
Figure 4.116 The tessellation of the frequency space that corresponds to the...
Figure 4.117 Time–frequency space showing schematically the resolution cells...
Figure 4.118 Gabor basis functions in the time and frequency domains for …,
Figure 4.119 Time–frequency space showing schematically the resolution cells...
Figure 4.120 Some wavelet basis functions in the time and frequency domains....
Figure 4.121 (a) A signal containing a pulse and its shifted version by one ...
Figure 4.122 In both panels, the black dots and the solid line corresponds t...
Figure 4.123 The magnitude plots of these Fourier transforms.
Figure 4.124 Filter … plotted against the real distance of its elements from...
Figure 4.125 An eight‐tap symmetric low pass filter, when sub‐sampled by kee...
Figure 4.126 The signal may be fully reconstructed from the scaling and wave...
Figure 4.127 The different grey tones identify the bands to which the 2D fre...
Figure 4.128 The application of the dual tree complex wavelet transform to a...
Figure 4.129 The feature maps for the image 4.71 using the dual tree complex...
Figure 4.130 The length … of the normal from the centre of the axes to the l...
Figure 4.131 At the top left, an image containing a line‐like structure. The...
Figure 4.132 For fixed …, Radon transform considers a batch of parallel line...
Figure 4.133 All lines that can be formed in a … image. Each panel shows the...
Figure 4.134 All lines of a certain fixed slope, made up from pixels, that c...
Figure 4.135 Lines of a certain slope created in an image with size not a pr...
Figure 4.136 A line and the normal to the line from the centre of the axes....
Figure 4.137 The numbers that have to multiply exponent … to produce the com...
Figure 4.138 These numbers have to multiply … to form the exponent of the fa...
Figure 4.139 As Figure 4.138, but for components 13–24 of the Fourier transf...
Figure 4.140 As Figure 4.138, but for components 25–36 of the Fourier transf...
Figure 4.141 As Figure 4.138, but for components 37–49 of the Fourier transf...
Figure 4.142 The thick black frame identifies the 2D DFT of the image. The d...
Figure 4.143 The numbers below the grid are the values of frequency index ....
Figure 4.144 The original Fourier frame of Figure 4.143 is repeated in all d...
Figure 4.145 The two slices one may consider as being cut in the 2D Fourier ...
Figure 4.146 (a) All lines with the same slope (…), as in Exampl...
Figure 4.147 The … vectors of the digital Radon lines as origina...
Figure 4.148 The lines with slope …, all marked with different symbols. Sort...
Figure 4.149 At the top, the digital lines we shall use for the Radon transf...
Figure 4.150 The nine finite ridgelet transforms of the corresponding … wind...
Figure 4.151 The numbers in each panel identify the pixels that make up the ...
Figure 4.152 Histograms of the absolute values of the wavelet coefficients f...
Figure 4.153 At the top, data points along the axis of a feature. In the mid...
Figure 4.154 If the features we are using are not good for class discriminat...
Figure 4.155 Histograms of the relative distances between the absolute value...
Figure 4.156 Histogram of the relative distances in the 10D feature space of...
Figure 4.157 Segmentation results for … using a Gaussian window to compute t...
Figure 4.158 (a) The reference hand segmentation of an image. (b) The result...
Figure 4.159 A region (A+B) in the reference segmentation partly overlaps wi...
Figure 4.160 An image in which we search to identify the … pattern shown on ...
Figure 4.161 The reference image and the three sub‐images with which it has ...
Figure 4.162 We must compute the mutual information of these two images by u...
Figure 4.163 Laws' masks of size …. On the left their names and on the right...
Figure 4.164 Laws' masks of size …. On the left their names and on the right...
Figure 4.165 Laws' masks of size …. On the left their names and on the right...
Figure 4.166 Outputs of the Laws' masks of size …. The … extreme values of e...
Figure 4.167 Segmentation results. Source: Maria Petrou.
Figure 4.168 Outputs of the Laws' masks of size …. The … extreme values of e...
Figure 4.169 Segmentation results obtained using the determinist...
Figure 4.170 Segmentation results obtained using the deterministic annealing...
Figure 4.171 (a) The first six eigenvalues of matrix …. (b) All eigenvalues ...
Figure 4.172 First six feature maps obtained using PCA. Each principal compo...
Figure 4.173 Segmentation results obtained using the first three features co...
Figure 4.174 (a) If we process a … image with the inverse of the Walsh matri...
Figure 4.175 If we process a sub-image of size … with the inverse Walsh matr...
Figure 4.176 (a) The nine elementary neighbourhoods in terms of which the ...
Figure 4.177 (a) The … elementary neighbourhoods in terms of which the … Wal...
Figure 4.178 (a) Convolution of an image along the vertical and horizontal d...
Figure 4.179 (a) Laws' masks of size … may be combined in all possible ways ...
Figure 4.180 The expansion of each … neighbourhood of the image in Figure 4....
Figure 4.181 Energies of the outputs of Figure 4.180 computed using a Gaussi...
Figure 4.182 Segmentation results obtained using the deterministic annealing...
Figure 4.183 Expanding the … neighbourhoods of the image in Figure 4.81 in t...
Figure 4.184 Segmentation results obtained using the deterministic annealing...
Figure 4.185 Expanding the … neighbourhoods of the image of Figure 4.81 in t...
Figure 4.186 Energies of the outputs of Figure 4.185 computed using a Gaussi...
Figure 4.187 Panels LI3–SI3 (left) and SI3–SI3 (right) of Figure 4.185, enla...
Figure 4.188 Segmentation results obtained using the deterministic annealing...
Figure 4.189 The pixels around the central pixel in (a) are given value … if...
Figure 4.190 Two circular neighbourhoods with radii … and … are considered a...
Figure 4.191 Local binary pattern maps for the image of Figure 4.81. Source:...
Figure 4.192 Segmentation of image 4.81 using the LBP histograms for radius
Figure 4.193 The stable patterns in a … neighbourhood. All rotational variat...
Figure 4.194 (a) Segmentation result using the LBP histograms of only the ro...
Figure 4.195 Segmentation results using the contrast values only for radius
Figure 4.196 (a) Segmentation result using the LBP histograms of the stable ...
Figure 4.197 Kaiser window for …, from top to bottom, respectively.
Figure 4.198 A train of delta functions … distance apart may be thought of a...
Figure 4.199 (a) A continuous band‐limited signal and its Fourier spectrum s...
Figure 4.200 If the teeth of the sampling comb in the real domain are closer...
Figure 4.201 (a) A continuous band-limited signal … and its Fourier spectrum...
Figure 4.202 In (a) the 2D Kaiser window created by multiplying two 1D Kaise...
Figure 4.203 Wigner feature maps obtained for the image of Figure 4.81 using...
Figure 4.204 Segmentation results obtained using the deterministic annealing...
Figure 4.205 First six feature maps constructed using principal component an...
Figure 4.206 Segmentation results obtained using the deterministic annealing...
Figure 4.207 Segmentation results obtained using the deterministic annealing...
Figure 4.208 Segmentation results obtained using the deterministic annealing...
Figure 4.209 Energy features constructed from the components of the Wigner s...
Figure 4.210 Segmentation results obtained using the deterministic annealing...
Figure 4.211 The first six eigen-feature maps produced by PCA for the … Wign...
Figure 4.212 Segmentation results obtained using the deterministic annealing...
Figure 4.213 Segmentation results obtained using the deterministic annealing...
Figure 4.214 Segmentation results obtained using the deterministic annealing...
Figure 4.215 A neuron model: ….
Figure 4.216 A simple Perceptron model [83].
Figure 4.217 A two ADALINE model (three‐layer MADALINE) [98].
Figure 4.218 Pre-attentively distinguishable texture pairs by Julesz [41].
Figure 4.219 The principal component vectors. For each PCA result with size
Figure 4.220 Similarity definition of edges.
Figure 4.221 Similarity definition of lines.
Figure 4.222 A sigmoid function and the first derivative.
Figure 4.223 Three‐layer neural networks.
Figure 4.224 The three‐layer NN.
Figure 4.225 Error in the three‐layer NN.
Figure 4.226 Very large intra‐class variations caused by changes in illumina...
Figure 4.227 Inter‐class images may also appear to have a similar pattern du...
Figure 4.228 The data for Fisher discriminant analysis.
Figure 4.229 The result for Fisher discriminant analysis.
Figure 4.230 The framework of TCNN and FV‐CNN.
Figure 4.231 The calculation process of the co‐occurrence matrix.
Figure 4.232 Example of different angle … features for a texture image of di...
Figure 4.233 The visualization of GLCM features with parameter …, horizontal...
Figure 4.234 The framework and feature map dimension of the VGG-16 network [90]....
Figure 4.235 Examples of the DTD dataset.
Figure 4.236 Examples of the FMD dataset.
Figure 4.237 ResNet flowchart.
Figure 4.238 ResNet convergence.
Second Edition
(the late) Maria Petrou, formerly Imperial College, London, UK
Revising Author: Sei-ichiro Kamata, Waseda University, Tokyo/Kitakyushu, Japan
This second edition first published 2021
© 2021 John Wiley & Sons, Ltd.
Edition History
John Wiley & Sons, Ltd (1e 2006)
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Maria Petrou and Sei‐ichiro Kamata to be identified as the authors of this work has been asserted in accordance with law.
Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
Editorial Office
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty
