Scaling, Fractals and Wavelets

Description

Scaling is a mathematical transformation that enlarges or diminishes objects. The technique is used in a variety of areas, including finance and image processing. This book is organized around the notions of scaling phenomena and scale invariance. The various stochastic models commonly used to describe scaling -- self-similarity, long-range dependence and multi-fractals -- are introduced. These models are compared and related to one another. Next, fractional integration, a mathematical tool closely related to the notion of scale invariance, is discussed, and stochastic processes with prescribed scaling properties (self-similar processes, locally self-similar processes, fractionally filtered processes, iterated function systems) are defined. A number of applications where the scaling paradigm proved fruitful are detailed: image processing, financial and stock market fluctuations, geophysics, scale relativity, and fractal time-space.




Table of Contents

Preface

Chapter 1. Fractal and Multifractal Analysis in Signal Processing

1.1. Introduction

1.2. Dimensions of sets

1.3. Hölder exponents

1.4. Multifractal analysis

1.5. Bibliography

Chapter 2. Scale Invariance and Wavelets

2.1. Introduction

2.2. Models for scale invariance

2.3. Wavelet transform

2.4. Wavelet analysis of scale invariant processes

2.5. Implementation: analysis, detection and estimation

2.6. Conclusion

2.7. Bibliography

Chapter 3. Wavelet Methods for Multifractal Analysis of Functions

3.1. Introduction

3.2. General points regarding multifractal functions

3.3. Random multifractal processes

3.4. Multifractal formalisms

3.5. Bounds of the spectrum

3.6. The grand-canonical multifractal formalism

3.7. Bibliography

Chapter 4. Multifractal Scaling: General Theory and Approach by Wavelets

4.1. Introduction and summary

4.2. Singularity exponents

4.3. Multifractal analysis

4.4. Multifractal formalism

4.5. Binomial multifractals

4.6. Wavelet based analysis

4.7. Self-similarity and LRD

4.8. Multifractal processes

4.9. Bibliography

Chapter 5. Self-similar Processes

5.1. Introduction

5.2. The Gaussian case

5.3. Non-Gaussian case

5.4. Regularity and long-range dependence

5.5. Bibliography

Chapter 6. Locally Self-similar Fields

6.1. Introduction

6.2. Recap of two representations of fractional Brownian motion

6.3. Two examples of locally self-similar fields

6.4. Multifractional fields and trajectorial regularity

6.5. Estimate of regularity

6.6. Bibliography

Chapter 7. An Introduction to Fractional Calculus

7.1. Introduction

7.2. Definitions

7.3. Fractional differential equations

7.4. Diffusive structure of fractional differential systems

7.5. Example of a fractional partial differential equation

7.6. Conclusion

7.7. Bibliography

Chapter 8. Fractional Synthesis, Fractional Filters

8.1. Traditional and less traditional questions about fractionals

8.2. Fractional filters

8.3. Discrete time fractional processes

8.4. Continuous time fractional processes

8.5. Distribution processes

8.6. Bibliography

Chapter 9. Iterated Function Systems and Some Generalizations: Local Regularity Analysis and Multifractal Modeling of Signals

9.1. Introduction

9.2. Definition of the Hölder exponent

9.3. Iterated function systems (IFS)

9.4. Generalization of iterated function systems

9.5. Estimation of pointwise Hölder exponent by GIFS

9.6. Weak self-similar functions and multifractal formalism

9.7. Signal representation by WSA functions

9.8. Segmentation of signals by weak self-similar functions

9.9. Estimation of the multifractal spectrum

9.10. Experiments

9.11. Bibliography

Chapter 10. Iterated Function Systems and Applications in Image Processing

10.1. Introduction

10.2. Iterated transformation systems

10.3. Application to natural image processing: image coding

10.4. Bibliography

Chapter 11. Local Regularity and Multifractal Methods for Image and Signal Analysis

11.1. Introduction

11.2. Basic tools

11.3. Hölderian regularity estimation

11.4. Denoising

11.5. Hölderian regularity based interpolation

11.6. Biomedical signal analysis

11.7. Texture segmentation

11.8. Edge detection

11.9. Change detection in image sequences using multifractal analysis

11.10. Image reconstruction

11.11. Bibliography

Chapter 12. Scale Invariance in Computer Network Traffic

12.1. Teletraffic – a new natural phenomenon

12.2. From a wealth of scales arise scaling laws

12.3. Sources as the source of the laws

12.4. New models, new behaviors

12.5. Perspectives

12.6. Bibliography

Chapter 13. Research of Scaling Law on Stock Market Variations

13.1. Introduction: fractals in finance

13.2. Presence of scales in the study of stock market variations

13.3. Modeling postulating independence on stock market returns

13.4. Research of dependency and memory of markets

13.5. Towards a rediscovery of scaling laws in finance

13.6. Bibliography

Chapter 14. Scale Relativity, Non-differentiability and Fractal Space-time

14.1. Introduction

14.2. Abandonment of the hypothesis of space-time differentiability

14.3. Towards a fractal space-time

14.4. Relativity and scale covariance

14.5. Scale differential equations

14.6. Quantum-like induced dynamics

14.7. Conclusion

14.8. Bibliography

List of Authors

Index

First published in France in 2 volumes in 2002 by Hermes Science/Lavoisier under the title Lois d’échelle, fractales et ondelettes © LAVOISIER, 2002.

First published in Great Britain and the United States in 2009 by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd

27-37 St George’s Road

London SW19 4EU

UK

www.iste.co.uk

John Wiley & Sons, Inc.

111 River Street

Hoboken, NJ 07030

USA

www.wiley.com

© ISTE Ltd 2009

The rights of Patrice Abry, Paulo Gonçalves and Jacques Lévy Véhel to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data

Lois d’échelle, fractales et ondelettes. English

Scaling, fractals and wavelets/edited by Patrice Abry, Paulo Gonçalves, Jacques Lévy Véhel.

p. cm.

Includes bibliographical references.

ISBN 978-1-84821-072-1

1. Signal processing--Mathematics. 2. Fractals. 3. Wavelets (Mathematics) I. Abry, Patrice. II. Gonçalves, Paulo. III. Lévy Véhel, Jacques, 1960- IV. Title.

TK5102.9.L65 2007

621.382′20151--dc22

2007025119

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN: 978-1-84821-072-1

Preface

It is a common scheme in many sciences to study systems or signals by looking for characteristic scales in time or space. These are then used as references for expressing all measured quantities. Physicists may for instance employ the size of a structure, while signal processors are often interested in correlation lengths: (blocks of) samples whose distance is several times the correlation lengths are considered statistically independent. The concept of scale invariance may be considered to be the converse of this approach: it means that there is no characteristic scale in the system. In other words, all scales contribute to the observed phenomenon. This “non-property” is also loosely referred to as scaling law or scaling behavior. Note that we may reverse the perspective and consider scale invariance as the signature of a strong organization in the system. Indeed, it is well known in physics that invariance laws are associated with fundamental properties. It is remarkable that phenomena where scaling laws have been observed cover a wide range of fields, both in natural and artificial systems. In the first category, these include for instance hydrology, in relation to the variability of water levels, hydrodynamics and the study of turbulence, statistical physics with the study of long-range interactions, electronics with the so-called 1/f noise in semiconductors, geophysics with the distribution of faults, biology, physiology and the variability of human body rhythms such as the heart rate. In the second category, we may mention geography with the distribution of population in cities or in continents, Internet traffic and financial markets.

From a signal processing perspective, the aim is then to study transfer mechanisms between scales (also called “cascades”) rather than to identify relevant scales. We are thus led to forget about scale-based models (such as Markov models), and to focus on models allowing us to study correspondences between many scales. The central notion behind scaling laws is that of self-similarity. Loosely speaking, this means that each part is (statistically) the same as the whole object. In particular, information gathered from observing the data should be independent of the scale of observation.

There is considerable variety in observed self-similar behaviors. They may for instance appear through scaling laws in the Fourier domain, either at all frequencies or in a finite but large range of frequencies, or even in the limit of high or low frequencies. In many cases, studying second-order quantities such as spectra will prove insufficient for describing scaling laws. Higher-order moments are then necessary. More generally, the fundamental model of self-similarity has to be adapted in many settings, and to be generalized in various directions, so that it becomes useful in real-world situations. These include self-similar stochastic processes, 1/f processes, long memory processes, multifractal and multifractional processes, locally self-similar processes and more. Multifractal analysis, in particular, has developed as a method allowing us to study complex objects which are not necessarily “fractal”, by describing the variations of local regularity. The recent change of paradigm consisting of using fractal methods rather than studying fractal objects is one of the reasons for the success of the domain in applications.

We are delighted to invite our reader for a promenade in the realm of scaling laws, its mathematical models and its real-world manifestations. The 14 chapters have all been written by experts. The first four chapters deal with the general mathematical tools allowing us to measure fractional dimensions, local regularity and scaling in its various disguises. Wavelets play a particular role for this purpose, and their role is emphasized. Chapters 5 and 6 describe advanced stochastic models relevant in our area. Chapter 7 deals with fractional calculus, and Chapter 8 explains how to synthesize certain fractal models. Chapter 9 gives a general introduction to IFS, a powerful tool for building and describing fractals and other complex objects, while Chapter 10, of applied nature, considers the application of IFS to image compression. The four remaining chapters also deal with applications: various signal and image processing tasks are considered in Chapter 11. Chapter 12 deals with Internet traffic, and Chapter 13 with financial data analysis. Finally, Chapter 14 describes a fractal space-time in the frame of cosmology.

It is a great pleasure for us to thank all the authors of this volume for the quality of their contribution. We believe they have succeeded in exposing advanced concepts with great pedagogy.

Chapter 1

Fractal and Multifractal Analysis in Signal Processing

1.1. Introduction

The aim of this chapter is to describe some of the fundamental concepts of fractal analysis in view of their application. We will thus present a simple introduction to the concepts of fractional dimension, regularity exponents and multifractal analysis, and show how they are used in signal and image processing.

Since we are interested in applications, most theoretical results are given without proofs. These are available in the references mentioned where appropriate. In contrast, we will pay special attention to the practical aspects. In particular, almost all the notions explained below are implemented in the FracLab toolbox. This toolbox is freely available from the following site: http://complex.futurs.inria.fr/FracLab/, so that interested readers may perform hands-on experiments.

Before we start, we wish to emphasize the following point: recent successes of fractal analysis in signal and image processing do not generally stem from the fact that it is applied to fractal objects (in a more or less strict sense). Indeed, most real-world signals neither are self-similar nor display the characteristics usually associated with fractals (except for irregularity at each scale). The relevance of fractal analysis instead results from the progress made in the development of fractal methods. Such methods have lately become more general and reliable, and they now allow us to describe precisely the singular structure of complex signals, without any assumption of “fractality”: as a rule, performing a fractal analysis will be useful as soon as the considered signal is irregular and this irregularity contains meaningful information. There are numerous examples of such situations, ranging from image segmentation (where, for instance, contours are made of singular points; see section 1.4.7 and Chapter 11) to vocal synthesis [DAO 02] or financial analysis.

This chapter roughly follows the chronological order in which the various tools have been introduced. We first describe several notions of fractional dimensions. These provide a global characterization of a signal. We then introduce Hölder exponents, which supply local measures of irregularity. The last part of the chapter is devoted to multifractal analysis, a most refined tool that describes the local as well as the overall singular structure of signals. All the concepts presented here are more fully developed in [TRI 99, LEV 02].

1.2. Dimensions of sets

The concept of dimension applies to objects more general than signals. To simplify, we shall consider sets in a metric space, although the notion of dimension makes sense for more complex entities such as measures or classes of functions [KOL 61]. Several interesting notions of dimension exist. This might look like a drawback for the mathematical analysis of fractal sets. However, it is actually an advantage, since each dimension emphasizes a different aspect of an object. It is thus worthwhile to determine the specificity of each dimension. As a general rule, none of these tools outperforms the others.

Let us first give a general definition of the notion of dimension.

DEFINITION 1.1.– We call dimension an application d defined on the family of bounded sets of ℝⁿ and ranging in [0, n], such that:

1) d({x}) = 0 for any point x;

2) E1 ⊂ E2 implies d(E1) ≤ d(E2) (monotonicity).

A dimension d is said to be σ-stable when d(∪i Ei) = supi d(Ei) for every countable family (Ei) of bounded sets. σ-stable dimensions may be extended in a natural way to characterize unbounded sets of ℝⁿ.

1.2.1. Minkowski-Bouligand dimension

The Minkowski-Bouligand dimension was invented by Bouligand [BOU 28], who named it the Cantor-Minkowski order. It is now commonly referred to as the box dimension. Let us cover a bounded set E of ℝⁿ with cubes of side ε and disjoint interiors. Let Nε(E) be the number of these cubes. When E contains an infinite number of points (i.e. if it is a curve, a surface, etc.), Nε(E) tends to +∞ when ε tends to 0. The box dimension Δ characterizes the rate of this growth. Roughly speaking, Δ is the real number such that:

Nε(E) ≃ ε^{−Δ} when ε → 0,

assuming this number exists. More generally, we define, for all bounded E, the number:

Δ(E) = lim sup_{ε→0} log Nε(E)/|log ε|   (1.1)

A lower limit may also be used:

δ(E) = lim inf_{ε→0} log Nε(E)/|log ε|   (1.2)

Note that some authors refer to the box dimension only when both indices coincide, that is, when the limit exists.

Both indices Δ and δ are dimensions in the sense previously defined. However, Δ is stable, contrary to δ, so that Δ is more commonly used. Let us mention an important property: if Ē denotes the closure of E (the set of all limit points of sequences in E), then:

Δ(Ē) = Δ(E) and δ(Ē) = δ(E).

Equivalent definitions

It is not mandatory to use cubes to calculate Δ. The original definition of Bouligand is as follows:

– in ℝⁿ, let us consider the Minkowski sausage:

E(ε) = ∪_{x∈E} B(x, ε),

which is the union of all the balls of radius ε centered on E. Denote its volume by Voln(E(ε)). This volume is approximately of the order of Nε(E) εⁿ. This allows us to give the equivalent definition:

Δ(E) = n − lim inf_{ε→0} log Voln(E(ε))/log ε   (1.3)

– we may also define N′ε(E), which is the smallest number of balls of radius ε covering E; or N″ε(E), the largest number of disjoint balls of radius ε centered on E. Replacing Nε(E) by any of these values in equation (1.1) still gives Δ(E).

Discrete values of ε

The limits in equations (1.1) and (1.2) may be computed along a discrete sequence (εn) decreasing to 0, provided that log εn+1/log εn tends to 1; the values of Δ and δ are unchanged. This remark is important, as it allows us to perform numerical estimations of Δ.

Let us now give some well-known examples of calculating dimensions.

Let us consider a particular case. When all the interval extremities of En are also interval extremities of En+1, E is called a perfect symmetric set [KAH 63] or sometimes, more loosely, a Cantor set. Assume that the ratio log an/log an+1 tends to 1. According to the previous comment on discrete sequences, we obtain the following values:

with and . This set is the attractor of the iterated function system (see Chapters 9 and 10). It is also called a perfect symmetric set with constant ratio.

In other words, Γ is the attractor of the IFS . When Γ is simple, the dimensions δ and Δ assume a common value, which is also the similarity dimension, i.e. for an IFS of N similarities with ratios r1, …, rN, the unique solution d of the equation Σi ri^d = 1.
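The similarity dimension — the unique solution d of Σi ri^d = 1 for an IFS of similarities with ratios ri — is easy to compute numerically. Below is a minimal sketch (illustrative, not FracLab code; the function name and the bisection approach are our choices), relying on the fact that d ↦ Σi ri^d is strictly decreasing when all ratios lie in (0, 1):

```python
import math

def similarity_dimension(ratios, tol=1e-12):
    """Solve sum(r_i ** d) == 1 for d by bisection.

    The map d -> sum(r_i ** d) is strictly decreasing for 0 < r_i < 1,
    so the root is unique."""
    f = lambda d: sum(r ** d for r in ratios) - 1.0
    lo, hi = 0.0, 1.0
    # grow the bracket until f changes sign
    while f(hi) > 0:
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Triadic Cantor set: two similarities of ratio 1/3
print(similarity_dimension([1/3, 1/3]))   # ≈ log 2 / log 3 ≈ 0.6309
# Von Koch curve: four similarities of ratio 1/3
print(similarity_dimension([1/3] * 4))    # ≈ log 4 / log 3 ≈ 1.2619
```

For sets satisfying the open set condition, this value coincides with the box and Hausdorff dimensions, which is why it serves as a useful sanity check for the estimators discussed later in the chapter.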

Figure 1.1. Von Koch curve, the attractor of a system of four similarities with common ratio 1/3

Function scales

The previous definitions all involve ratios of logarithms. This is an immediate consequence of the fact that a dimension is defined as an order of growth related to the scale of functions {tα, α > 0}. In general, a scale of functions in the neighborhood of 0 is a family of functions which are all comparable in the Hardy sense, that is, for any f and g in the scale, the ratio f(x)/g(x) tends to a limit (possibly +∞ or −∞) when x tends to 0. Function scales are defined in a similar way in the neighborhood of +∞. Scales other than {tα} will yield other types of dimensions. A dimension must be considered as a Dedekind cut in a given scale of functions. The following expressions will make this clearer:

Δ(E) = inf{α > 0 : lim sup_{ε→0} Nε(E) ε^α = 0}   (1.4)

δ(E) = inf{α > 0 : lim inf_{ε→0} Nε(E) ε^α = 0}   (1.5)

these are equivalent to equations (1.1) and (1.2) (see [TRI 99]).

Complementary intervals on the line

In the particular case where the compact E lies in an interval J of the line, the complementary set of E in J is a union of disjoint open intervals, whose lengths will be denoted by cn. Let |E| be the Lebesgue measure of E (which means, for an interval, its length). If |E| = 0, the dimension of E may be written as:

Δ(E) = lim sup_{n→∞} log n/|log cn|   (1.6)

Proof. This result may be obtained by calculating an approximation of the length of the Minkowski sausage E(ε). Let us assume that the complementary intervals are ranked in decreasing lengths:

c1 ≥ c2 ≥ … ≥ cn ≥ …,

thus cn tends to 0. It may be shown that both values

lim sup_{n→∞} log n/|log cn| and inf{α : Σn cn^α < +∞}

are equal to the convergence exponent. It is therefore equal to Δ(E).

EXERCISE 1.1.– Verify formula (1.6) for the perfect symmetric sets of Example 1.1.

If |E| ≠ 0, then the convergence exponent of Σcn still makes sense. It characterizes a degree of proximity of the exterior with the set E. More precisely, we obtain

(1.7)

where the set E(ε) – E refers to the Minkowski sausage of E deprived of the points of E.

How can we generalize the study of the complementary set in ℝⁿ with n ≥ 2? The open intervals must be replaced with an appropriate paving. The results connecting the elements of this paving to the dimension depend both on the geometry of the tiles and on their respective positions. The topology of the complementary set must be investigated more deeply [TRI 87]. The index that generalizes (1.7) (replacing the 1 of the space dimension by n) is the fat fractal exponent, studied in [GRE 85, TRI 86b]. In the case of a zero area curve in ℝ², this also leads to the notion of lateral dimension. Note that the dimensions corresponding to each side of the curve are not necessarily equal [TRI 99].

1.2.2. Packing dimension

The packing dimension is, to some extent, a regularization of the box dimension [TRI 82]. Indeed, Δ is not σ-stable, but we may derive a σ-stable dimension from any index thanks to the operation described below.

PROPOSITION 1.1.– Let B be the family of all bounded sets of ℝⁿ and let d be a monotonous index defined on B. Then, the function d* defined for any subset E of ℝⁿ as:

d*(E) = inf{supi d(Ei) : E ⊂ ∪i Ei, Ei ∈ B},

the infimum being taken over all countable coverings of E, is monotonous and σ-stable.

Thus, the inequality d*(∪i Ei) ≤ supi d*(Ei) holds. The converse inequality stems from monotonicity.

The packing dimension is the result of this operation on Δ. We set

Dim E = Δ*(E).

The term packing will be explained later. The new index Dim is indeed a dimension, and it is σ-stable. Therefore, contrary to Δ, it vanishes for countable sets. The inequality:

Dim E ≤ Δ(E)

is true for any bounded set. This becomes an equality when E presents a homogeneous structure in the following sense: for every open set U intersecting E, Δ(E ∩ U) = Δ(E).

Proof. Let (Ei) be a decomposition of E. Since E is compact, Baire’s theorem entails that the Ei are not all nowhere dense in E. Therefore, there exist an index i0 and an open set U intersecting E such that E ∩ U ⊂ Ēi0, which yields:

Δ(E) = Δ(E ∩ U) ≤ Δ(Ēi0) = Δ(Ei0).

As a result, supi Δ(Ei) ≥ Δ(E), and thus Dim E ≥ Δ(E). The converse inequality is always true.

EXAMPLE 1.3.– All self-similar sets are of this type, including those presented above: Cantor sets and curves. For these sets, the packing dimension has the same value as Δ(E).

This result will be derived in section 1.3.2.

1.2.3. Covering dimension

The covering dimension was introduced by Hausdorff [HAU 19]. Here we adopt the traditional approach through Hausdorff measures; a direct approach, using Vitali’s covering convergence exponent, may be used to calculate the dimension without using measures [TRI 99].

Covering measures

Originally, the covering measures were defined to generalize and, most of all, to precisely define the concepts of length, surface, volume, etc. They constitute an important tool in geometric measure theory.

Firstly, let us consider a determining function h, which is increasing and continuous in the neighborhood of 0, and such that h(0) = 0. Let E be a set in a metric space (that is, a space where a distance has been defined). For every ε > 0, we consider all the coverings of E by bounded sets Ui of diameter at most ε. Let

Hε^h(E) = inf Σi h(diam Ui),

the infimum being taken over all such coverings.

When ε tends to 0, this quantity (possibly infinite) cannot decrease. The limit corresponds to the h-Hausdorff measure:

H^h(E) = lim_{ε→0} Hε^h(E).

In this definition, the covering sets Ui can be taken in a more restricted family. If we suppose that the Ui are open, or convex, the result remains unchanged. The main properties are those of any Borel measure:

– if (Ei) is a countable collection of sets, then H^h(∪i Ei) ≤ Σi H^h(Ei);

– if E1 and E2 are at non-zero distance from each other, any ε-covering of E1 is disjoint from any ε-covering of E2 when ε is sufficiently small. Then H^h(E1 ∪ E2) = H^h(E1) + H^h(E2). This implies that H^h is a metric measure. The Borel sets are H^h-measurable and for any collection (Ei) of disjoint Borel sets, H^h(∪i Ei) = Σi H^h(Ei).

The scale of functions tα

In the case where h(t) = t^α with α > 0, we use the simple notation Hα.

More generally, when α is an integer, Hα is proportional to the α-dimensional volume.

However, α can also take non-integer values, which makes it possible to define the dimension of any set. The use of the term dimension is justified by the following property: if aE is the image of E by a homothety of ratio a, then

Hα(aE) = a^α Hα(E).

Measures estimated using boxes

If we want to restrict the class of sets from which coverings are taken even more, one option would be to cover E with centered balls or dyadic boxes. In each case, the result is a measure H*α which is generally not equal to Hα(E); nevertheless, it is an equivalent measure in the sense that we can find two non-zero constants, c1 and c2, such that for any E:

c1 Hα(E) ≤ H*α(E) ≤ c2 Hα(E).

Clearly the H*α measures give rise to the same dimension.

Dimension

For every E, there exists a unique critical value α0 such that:

Hα(E) = +∞ for all α < α0 and Hα(E) = 0 for all α > α0.

The dimension is defined as

dim E = α0 = inf{α : Hα(E) = 0} = sup{α : Hα(E) = +∞}   (1.8)

NOTE 1.1.– This approach is not very different from the one which leads to the box dimension. Compare equation (1.8) with equations (1.4) and (1.5). Once again, it may be generalized by using other function scales than tα.

Properties

The properties of dim directly stem from those of Hα measures. It is a σ-stable dimension, like Dim.

To compare all these dimensions, let us observe that δ can be defined in the same manner as the covering dimension, by using coverings made up of sets of equal diameter. This implies the inequality dim E ≤ δ(E) for any E. The σ-stability property then implies the following:

dim E ≤ Dim E ≤ Δ(E).

Packing measures

By considering packings of E, that is, families of disjoint sets at zero distance from E, and by switching inf and sup in the definitions, it is possible to define packing measures which are symmetric to the covering measures and whose critical index is precisely equal to Dim. This explains why Dim is called a packing dimension.

1.2.4. Methods for calculating dimensions

Since a dimension is an index of irregularity, it may be used for the classification of irregular sets and for characterizing phenomena showing erratic behaviors. Here we focus on signals. In practice, we may assume that the signal Γ is given in axes Oxy by discrete data (xi, yi), with xi < xi+1: for example, a time series. Notice that other sets, such as geographical curves obtained by aerial photography, are not of this type, so that the analysis tools can be different.

The algorithms used for estimating a dimension rely on the theoretical formula which defines the dimension, with the usual limitation of a minimal scale, roughly equal to the distances xi+1 − xi. Indeed, it is impossible to go further into the dataset structure without a reconstruction at very small scales, which leads to adding up arbitrarily new data. The evaluation of a dimension, whose theoretical definition requires the finest coverings possible, is therefore difficult to justify. This is why we do not propose algorithms for the estimation of σ-stable dimensions such as Hausdorff or packing dimensions.

Calculations may be performed to estimate Δ or related dimensions that are naturally adapted to signal analysis. They are usually carried out using logarithmic diagrams. A quantity Q(ε) is estimated, which characterizes the irregularity for a certain number of values of the resolution ε between two limit values εmax and εmin. If Q(ε) follows a power law of the type cε^Δ, then log Q(ε) is an affine function of log ε, with slope Δ. The idea is to seek quantities which provide appropriate logarithmic diagrams, i.e. allow us to estimate the slopes precisely. Here are some examples.

Boxes and disks

After counting the number Nε of squares of a network of sides ε, we draw the diagram (|log ε|, log Nε). Although it is very easy to program, the method presents an obvious disadvantage: the quantities Nε are integers, which makes the diagram chaotic for large values of ε. We could try a method using the Minkowski sausage of Γ: measuring its area and drawing the diagram (|log ε|, log(Vol2(Γ(ε))/ε²)), in the spirit of equation (1.3).

However, this method is more difficult to program than the previous method and also lacks precision: the diagram shows, in general, a strong concavity – even for a curve as simple as a straight line segment!

These methods are very popular, in spite of numerical difficulties. Unfortunately there is a major drawback for signals. The coordinates are not, in general, of the same nature. If they refer, for example, to stock exchange data, xi is a time value and yi an exchange value. In this case, it makes no sense to use the same units on the axes Ox and Oy. The covering of Γ by squares or disks is therefore meaningless. It is preferable to use algorithms which provide a slope independent of changes of unit. For this purpose, the calculated quantity — call it Qε(f) — should satisfy the following property: for any real a, there exists c(a), such that, for any ε:

Qε(af) = c(a) Qε(f)   (1.9)

as is the case for the methods which are described below.
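Before turning to unit-robust quantities, the basic boxes diagram (|log ε|, log Nε) and its least-squares slope can be sketched in a few lines. This is an illustration only, not FracLab code; the Weierstrass-type test signal and the scale range are arbitrary choices, and the units caveat above still applies:

```python
import numpy as np

def box_count(x, y, eps):
    """Number of eps-by-eps grid cells hit by the sampled graph."""
    cells = {(int(xi // eps), int(yi // eps)) for xi, yi in zip(x, y)}
    return len(cells)

def box_dimension(x, y, scales):
    """Least-squares slope of the (|log eps|, log N_eps) diagram."""
    logN = np.log([box_count(x, y, e) for e in scales])
    logI = -np.log(scales)  # |log eps| for eps < 1
    slope, _ = np.polyfit(logI, logN, 1)
    return slope

# Weierstrass-type signal: its graph is expected to have dimension 2 - H
H = 0.5
t = np.linspace(0.0, 1.0, 1 << 16)
w = sum(2.0 ** (-H * k) * np.cos(2.0 * np.pi * 2 ** k * t) for k in range(20))
print(box_dimension(t, w, [2.0 ** (-j) for j in range(3, 9)]))  # roughly 1.5
```

As the text notes, the integer counts Nε make the diagram erratic at coarse scales, which is why the regression is restricted to a mid-range of ε.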

Variation method

Here we anticipate section 1.3.4. The oscillation of a function f on any set F is defined as

osc_F(f) = sup_{x∈F} f(x) − inf_{x∈F} f(x).

The ε-variation on an interval J is the arithmetic mean of the oscillation over intervals of length 2ε:

Varε(f, J) = (1/|J|) ∫_J osc_{[x−ε, x+ε]}(f) dx.

The variation method [DUB 89, TRI 88] consists of finding the slope of the diagram:

(|log ε|, log(Varε(f, J)/ε²)).

Since osc_{[x−ε, x+ε]}(af) = a osc_{[x−ε, x+ε]}(f) for all x and ε, we obtain through integration Varε(af, J) = a Varε(f, J), so equation (1.9) is satisfied with c(a) = a. In this case we obtain diagrams which present an almost perfect alignment for a large class of known signals. Furthermore, this method presents the following advantages: ease of programming and speed of execution.
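A minimal numerical sketch of the variation method (our discretization choices, not the chapter's): the oscillation is averaged over sliding windows, and since the ε-variation scales like ε^(2−Δ) for a graph of box dimension Δ, the estimate is 2 minus the slope of log Varε against log ε:

```python
import numpy as np

def eps_variation(y, k):
    """Mean oscillation of the sampled signal over windows of half-width k samples."""
    osc = np.empty(len(y))
    for i in range(len(y)):
        w = y[max(0, i - k): i + k + 1]
        osc[i] = w.max() - w.min()
    return osc.mean()

def variation_dimension(y, ks):
    """Estimate Delta as 2 - slope of log(eps-variation) versus log(eps)."""
    n = len(y)
    eps = np.array(ks) / n           # window half-widths in abscissa units
    V = [eps_variation(y, k) for k in ks]
    slope, _ = np.polyfit(np.log(eps), np.log(V), 1)
    return 2 - slope

# Sample path of Brownian motion: its graph has box dimension 3/2
rng = np.random.default_rng(0)
n = 1 << 14
bm = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)
print(variation_dimension(bm, [2, 4, 8, 16, 32, 64]))  # close to 1.5
```

Note that rescaling the ordinates (y → a·y) shifts log V by a constant without changing the slope, which is exactly the unit-invariance property (1.9) discussed above.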

Lpnorms method

Let us define the Lp norm of a function g on a set F by the relation:

‖g‖p = (∫_F |g(t)|^p dt)^{1/p}.

It is a functional norm when p ≥ 1. When p → +∞, the expression tends to the norm:

‖g‖∞ = sup_{t∈F} |g(t)|.

Given a signal f defined on [a, b], the values f(x1), …, f(xN) and ε > 0, we apply this tool at any x to the local function difference defined by gx(t) = f(t) − f(x), where t ∈ [x − ε, x + ε]. Using the norm L∞, this gives ‖gx‖∞ = sup_{|t−x|≤ε} |f(t) − f(x)|. This quantity is equivalent to the ε-oscillation, since

‖gx‖∞ ≤ osc_{[x−ε, x+ε]}(f) ≤ 2 ‖gx‖∞.

It is therefore possible to replace the ε-variation of f by the integral over J of ‖gx‖∞, without altering the theoretical result for Δ. However, it is also possible to use Lp norms. Indeed, the oscillation (or the local norm L∞) only takes into account the peaks of the function. In practice, it can happen that these peaks are measured with a significant error, or even destroyed in the process of acquisition of data (profiles of rough surfaces, for example). It is preferable to use all the intermediate values and replace the ε-variation with the quantity:

(1/|J|) ∫_J ‖gx‖p dx.
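The substitution of local Lp means for the oscillation can be sketched as follows (an illustrative fragment; the discrete normalization and window handling are our assumptions, not the chapter's):

```python
import numpy as np

def lp_local_deviation(y, i, k, p):
    """Local L^p mean of |f(t) - f(x)| over a window of half-width k samples."""
    w = y[max(0, i - k): i + k + 1]
    return np.mean(np.abs(w - y[i]) ** p) ** (1.0 / p)

def lp_variation(y, k, p=2):
    """Average of the local L^p deviations over the whole signal;
    a robust substitute for the mean oscillation."""
    return float(np.mean([lp_local_deviation(y, i, k, p) for i in range(len(y))]))
```

As p → +∞ the local deviation tends to the sup-based quantity ‖gx‖∞; a moderate p (e.g. p = 2) down-weights isolated noisy peaks, which is the robustness motivation given in the text.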

Let us develop an example of the index just referred to. Let K be a kernel belonging to the Schwartz class, with integral equal to 1. Let Ka(t) = (1/a) K(t/a) for a > 0. For a function f defined in a compact, let fa = f * Ka be the convolution of f with Ka. Since fa is infinitely differentiable, the length Λa of its graph is finite. We define the regularization dimension of the graph of f as:

dimR f = 1 + lim sup_{a→0} log Λa/(−log a)   (1.10)

This dimension measures the speed at which the length of less and less regularized versions of the graph of f tends to infinity. It is easily proved that if f is continuous, the inequality dimR f ≤ Δ(Γf) is always true. An interesting aspect of the regularization dimension is that it is a well-adapted estimation tool. Results obtained on usual signals (Weierstrass function, iterated function systems and fractional Brownian motion) are generally satisfactory, even for small-sized samples (a few hundred points). Moreover, the simple analytical form of dimR allows us to easily obtain an estimator for data corrupted by an additive noise, which is particularly useful in signal processing (see [ROU 98] and the FracLab manual for more details).
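The regularization dimension lends itself to a short estimation sketch. The following is illustrative only, not the [ROU 98]/FracLab estimator: we smooth the signal with Gaussian kernels of decreasing width a (a kernel and scale range of our choosing), measure the polygonal length Λa of each smoothed graph, and regress log Λa against −log a as in equation (1.10):

```python
import numpy as np

def graph_length(t, y):
    """Polygonal length of the sampled graph."""
    return np.sum(np.hypot(np.diff(t), np.diff(y)))

def regularization_dimension(t, y, scales):
    """Estimate dim_R as 1 + slope of log(Lambda_a) versus -log(a)."""
    dt = t[1] - t[0]
    lengths = []
    for a in scales:
        # discrete Gaussian kernel of width a, normalized to unit mass
        k = int(4 * a / dt) + 1
        u = np.arange(-k, k + 1) * dt
        K = np.exp(-0.5 * (u / a) ** 2)
        K /= K.sum()
        lengths.append(graph_length(t, np.convolve(y, K, mode="same")))
    slope, _ = np.polyfit(-np.log(scales), np.log(lengths), 1)
    return 1 + slope

# Brownian motion: graph dimension 3/2, so the estimate should be near 1.5
rng = np.random.default_rng(1)
n = 1 << 14
t = np.linspace(0.0, 1.0, n)
bm = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)
print(regularization_dimension(t, bm, [0.002, 0.004, 0.008, 0.016]))
```

The smoothed graphs shorten as a grows, and the rate at which Λa blows up when a → 0 carries the dimensional information, exactly as described above.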

1.3. Hölder exponents

1.3.1. Hölder exponents related to a measure

The dimensional analysis of a set is related to its local properties. To go further into this study, it is convenient to use a measure µ supported by the set. In many cases (self-similar sets, for example), E is defined at the same time as µ. If E is a curve, constructing a measure on E is called a parameterization. Without a parameterization it is impossible to analyze the curve. However, a given set can support very different measures. Particularly interesting ones are the well-balanced measures, in a sense we will explain.

Given a measure µ of , let us first define the Hölder exponent of µ over any measurable set F by

Given a set E, we use this notion in a local manner, i.e. on arbitrarily small intervals or cubes intersecting E. A pointwise Hölder exponent is then defined using centered balls Bε(x) whose radius tends to 0: \( \alpha(x) = \liminf_{\varepsilon \to 0} \log \mu(B_\varepsilon(x)) / \log \varepsilon \).

The symmetric exponent, defined with an upper limit, can also be useful: \( \overline{\alpha}(x) = \limsup_{\varepsilon \to 0} \log \mu(B_\varepsilon(x)) / \log \varepsilon \).

In addition, the geometric context sometimes induces a specific analysis framework. If a measure is defined by its values on the dyadic cubes, it is easier to use the following Hölder exponents:

\( \underline{\alpha}(x) = \liminf_{n \to \infty} \alpha(u_n(x)), \qquad \overline{\alpha}(x) = \limsup_{n \to \infty} \alpha(u_n(x)), \qquad \text{with } \alpha(u_n(x)) = \frac{\log \mu(u_n(x))}{\log 2^{-n}}, \)

where un(x) is the dyadic cube of rank n (side 2−n) that contains x. Other covering nets can obviously be used, but dyadic cubes are well suited for calculations.
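As a concrete check of these dyadic exponents, the sketch below (our own illustration, not taken from the text) evaluates α(un(x)) for the canonical measure of the triadic Cantor set, computed through its CDF, the devil's staircase. For constant ratio a = 1/3, the exponent should approach log 2/|log a| ≈ 0.6309 (see Example 1.5 below); all function names are ours.

```python
import math

def cantor_cdf(x, depth=40):
    """CDF ("devil's staircase") of the canonical measure of the triadic
    Cantor set, computed from the base-3 digits of x in [0, 1]."""
    value, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = min(int(x), 2)     # guard the endpoint x = 1
        x -= digit
        if digit == 1:
            return value + scale   # x fell into a removed middle-third gap
        value += scale * (digit // 2)
        scale /= 2
    return value

def alpha_dyadic(x, n):
    """Hölder exponent alpha(u_n(x)) = log mu(u_n(x)) / log 2^-n over the
    dyadic interval of rank n containing x."""
    k = math.floor(x * 2**n)
    mu = cantor_cdf((k + 1) / 2**n) - cantor_cdf(k / 2**n)
    return math.log(mu) / (n * math.log(0.5))

x = 0.25   # 1/4 = 0.0202..._3 belongs to the triadic Cantor set
for n in (10, 20, 30):
    print(n, alpha_dyadic(x, n))
print("log 2 / log 3 =", math.log(2) / math.log(3))
```

The convergence is slow and oscillating, which is typical: the dyadic grid is not aligned with the triadic construction of the set.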

1.3.2. Theorems on set dimensions

The first theorem can be used as a basis for understanding the subsequent more technical results.

THEOREM 1.2.– Let µ be a finite measure such that µ(E) > 0. Assume that there exists a real α such that, for any x ∈ E, \( \lim_{n \to \infty} \alpha(u_n(x)) = \alpha \). Then dim(E) = Dim(E) = α.

Proof. Let ε > 0. Let En be the subset of E consisting of all points x such that, for all m ≥ n: \( 2^{-m(\alpha+\varepsilon)} \le \mu(u_m(x)) \le 2^{-m(\alpha-\varepsilon)} \).

If ρ < 2−n, any ρ-covering {ui} of En by dyadic cubes of rank ≥ n is such that \( \mu(u_i) \le |u_i|^{\alpha-\varepsilon} \) for all i. Therefore \( \sum_i |u_i|^{\alpha-\varepsilon} \ge \sum_i \mu(u_i) \ge \mu(E_n) \).

First, we deduce that \( H^{\alpha-\varepsilon}(E_n) \ge \mu(E_n) \). If n is large enough, then µ(En) > 0. This gives \( \dim(E_n) \ge \alpha - \varepsilon \). Since En ⊂ E, then \( \dim(E) \ge \alpha - \varepsilon \).

Secondly, by using the covering of En formed by the dyadic cubes of rank m ≥ n that meet En, whose number Nm satisfies \( N_m \, 2^{-m(\alpha+\varepsilon)} \le \mu(\mathbb{R}^d) \), we obtain \( \Delta(E_n) \le \alpha + \varepsilon \), which implies \( \operatorname{Dim}(E_n) \le \alpha + \varepsilon \). As a consequence, \( \operatorname{Dim}(E) \le \alpha + \varepsilon \) by σ-stability, since E is the union of the En. By making ε tend to 0, we obtain the desired result.         

An analogous theorem can be stated with balls Bε(x) centered at E. We can develop the arguments of the preceding proof to obtain more general results as follows.

THEOREM 1.3.– Assume that µ is a finite measure such that µ(E) > 0. Then:

(1.11)

(1.12)

(1.13)

The same results hold if, for example, we replace the network of balls centered at E with that of dyadic cubes.

EXAMPLE 1.5.– A perfect symmetric set (see Example 1.1) is the support of a natural or canonical measure: each of the 2n covering intervals of rank n is associated with the weight 2−n. In the case where the set has constant ratio a, these intervals have length an and their Hölder exponent assumes the value log 2/|log a| uniformly. This grid of intervals allows the computation of dimensions: indeed, the uniform convergence of the exponents then gives \( \dim(E) = \operatorname{Dim}(E) = \log 2 / |\log a| \).

By making the successive ratios a_n vary, it is also possible to construct a set such that:

EXAMPLE 1.6.– The set Ep of p-normal numbers (see Example 1.4) supports a measure which makes it possible to estimate its dimension and which is known as the Besicovitch measure. It is defined on the dyadic intervals of [0, 1].

Set \( \mu([0, 1/2]) = p \) and \( \mu([1/2, 1]) = 1 - p \). The weights p and 1 − p are then distributed similarly at the following stages: since each dyadic interval un of rank n is the union of the intervals vn (on the left) and v′n (on the right) of rank n + 1, we put \( \mu(v_n) = p\,\mu(u_n) \) and \( \mu(v'_n) = (1 - p)\,\mu(u_n) \).

It is easy to calculate the exact measure of each dyadic interval un(x) containing the point x by using the base 2 expansion of x. Denote \( x = \sum_{i \ge 1} x_i 2^{-i} \) with xi ∈ {0, 1}, and let \( s_n = x_1 + \dots + x_n \). Then:

(1.14) \( \mu(u_n(x)) = p^{\,n - s_n} (1 - p)^{s_n} \)

Secondly, to compute the dimension, we first need to determine the value of the Hölder exponent. Equation (1.14) implies the following result:

\( \alpha(u_n(x)) = \frac{(n - s_n) \log p + s_n \log(1 - p)}{-n \log 2} \)

for any x of [0, 1]. If the frequency (n − sn)/n of the digit 0 tends to p, then α(un(x)) tends to the value:

(1.15) \( \alpha = \frac{p \log p + (1 - p) \log(1 - p)}{-\log 2} \)
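The convergence of α(un(x)) to the entropy value (1.15) can be checked numerically. The sketch below (our own illustration; it assumes, as above, that the weight p is attached to the digit 0, i.e. to the left half-interval) simulates a typical point of Ep:

```python
import math
import random

def holder_exponent_dyadic(digits, p):
    """alpha(u_n(x)) = log mu(u_n(x)) / log 2^-n for the Besicovitch
    measure giving weight p to the left half (digit 0) and 1 - p to the
    right half (digit 1) of every dyadic interval."""
    log_mu = sum(math.log(p) if d == 0 else math.log(1 - p) for d in digits)
    return log_mu / (len(digits) * math.log(0.5))

random.seed(1)
p, n = 0.3, 100_000
# a typical point of Ep: i.i.d. binary digits, digit 0 with probability p
digits = [0 if random.random() < p else 1 for _ in range(n)]
alpha = holder_exponent_dyadic(digits, p)
entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # limit value (1.15)
print(alpha, entropy)
```

By the law of large numbers, the two printed values agree up to statistical fluctuations of order n^(-1/2); (1.15) is the binary entropy of p.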

1.3.3. Hölder exponent related to a function

The Hölder exponents of a function give much more information than those of measures. Firstly, let us generalize the notion of the measure of a set F by using the oscillation of the function f on F:

\( \operatorname{osc}_f(F) = \sup_{t \in F} f(t) - \inf_{t \in F} f(t) \)

This allows us to define a Hölder exponent: \( \alpha(F) = \log \operatorname{osc}_f(F) / \log \operatorname{diam} F \).

Given an interval J and a function f defined on J, we may use this notion locally, in arbitrarily small neighborhoods of a point t of J. The pointwise Hölder exponent of f in t is obtained as \( \alpha(t) = \liminf_{\varepsilon \to 0} \log \operatorname{osc}_f(B_\varepsilon(t)) / \log \varepsilon \).

According to this definition, the exponent of an increasing function f is the same as that of the measure µ defined on any [c, d] ⊂ J by \( \mu([c, d]) = f(d) - f(c) \). Indeed, f(d) − f(c) is also the oscillation of f on [c, d]. However, in general, f is not monotonic, and it is therefore necessary to carry out a more accurate analysis, as we will see below.

As in the case of measures, we may also consider the “symmetric” exponent defined with an upper limit, and also the exponents obtained as lower and upper limits by using particular grids of intervals, like the dyadic intervals.

Oscillation, considered as a measurement of the local variability of a function, possesses many advantages. In particular, it is closely related to the box dimension. However, there are some drawbacks: it is not simple to use in a theoretical context, it is sometimes difficult to estimate precisely under experimental conditions and, finally, it is sensitive to various disturbances (discretization, noise, etc.). It is possible to replace the oscillation with other set functions showing more robustness. However, most alternatives no longer verify the important triangle inequality:

\( \operatorname{osc}_f(F_1 \cup F_2) \le \operatorname{osc}_f(F_1) + \operatorname{osc}_f(F_2) \)

for all F1, F2 ⊂ J with F1 ∩ F2 ≠ ∅ (see [TRI 99]).

We can simplify the analysis by restricting the general F to the class of intervals and by setting:

We may even consider only dyadic intervals, and take:

where cj,k is the wavelet coefficient of f at scale j and position k (see also Chapters 2, 8 and 9).
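The precise dyadic quantity used at this point is not reproduced above, but the underlying idea, reading regularity off the decay of wavelet coefficients across dyadic scales, can be sketched with a Haar transform. The decay model |cj,k| ≈ 2^(−j(α+1/2)) for orthonormal wavelets near an isolated singularity is an assumption of this illustration, as are all names; it is not the book's formula.

```python
import numpy as np

def holder_from_haar(y, x0_frac, jmin, jmax):
    """Estimate a Hölder exponent at relative position x0_frac from the
    decay of orthonormal Haar detail coefficients across scales, assuming
    |c_{j,k}| ~ 2^{-j(alpha + 1/2)} near the singularity."""
    a = np.asarray(y, dtype=float) / np.sqrt(len(y))  # crude fine-scale scaling coefficients
    details = {}
    j = int(np.log2(len(y)))
    while j > 0:
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        j -= 1
        details[j] = d                    # 2^j detail coefficients at scale 2^-j
    logs = []
    for j in range(jmin, jmax + 1):
        k = int(x0_frac * 2**j)           # dyadic interval of rank j containing x0
        nbhd = np.abs(details[j][max(0, k - 1): k + 2])
        logs.append(np.log2(nbhd.max())) # strongest coefficient around x0
    slope, _ = np.polyfit(np.arange(jmin, jmax + 1), logs, 1)
    return -slope - 0.5

N = 2**14
t = (np.arange(N) + 0.5) / N
y = np.abs(t - 0.5) ** 0.7               # cusp with pointwise exponent 0.7 at t = 0.5
print(holder_from_haar(y, 0.5, jmin=4, jmax=10))
```

On this cusp the estimate comes out near 0.7; for oscillating singularities such estimators must be refined (wavelet leaders, cones of influence), as discussed in Chapters 2, 8 and 9.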

Let us now give an alternate and useful definition of the Hölder exponent.

DEFINITION 1.2.– Let \( f: \mathbb{R} \to \mathbb{R} \) be a function, s > 0 and x0 a real number. Then \( f \in C^s(x_0) \) if and only if there exist a real number η > 0, a polynomial P of degree ≤ s and a constant C such that, for |x − x0| ≤ η:

(1.16) \( |f(x) - P(x - x_0)| \le C |x - x_0|^s \)

The pointwise Hölder exponent of f at x0, denoted \( \alpha_p(f, x_0) \) or αp(x0), is given by \( \alpha_p(x_0) = \sup\{ s : f \in C^s(x_0) \} \).

In many applications, the regularity of a signal is as important as, or more important than, its amplitude. For example, the edges of an image do not move when an affine transformation of the gray levels is carried out: such a transformation modifies the amplitude, but not the exponents. The pointwise Hölder exponent will thus be one of the main ingredients of the fractal analysis of a signal or image. Generally speaking, measuring and studying the pointwise exponent is particularly useful in the processing of strongly irregular signals whose irregularity is expected to carry relevant information. Examples include biomedical signals and images (ECG, EEG, echography, scintigraphy), Internet traffic logs, radar images, etc.

The pointwise exponent, however, presents some drawbacks. For example, it is not stable under integro-differentiation nor, more generally, under the action of pseudo-differential operators. This means that it is not possible to predict the exponent at x of a primitive F of f knowing αp(x); it is only guaranteed that the exponent of F at x is at least αp(x) + 1. In the same way, the exponent of the analytic signal associated with f is not necessarily the same as that of f. This is a problem in signal processing, since this type of operator is often used. The second disadvantage, related to the first, is that αp(x) does not provide all the information on the regularity of f at x. A common example is that of the chirp |x|γ sin(1/|x|δ), with γ and δ positive. One verifies that the pointwise exponent at 0 is equal to γ. In particular, it is independent of δ: αp(0) is not sensitive to “oscillating singularities”, i.e. to situations where the local frequency of the signal tends to infinity in the neighborhood of 0.
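The insensitivity of αp(0) to δ can be observed numerically. The sketch below (our own crude oscillation regression, not a production estimator) estimates the exponent of the chirp at 0 for several values of δ:

```python
import numpy as np

def pointwise_exponent_at_zero(f, eps_list, samples=200_000):
    """Crude estimate of alpha_p(0): bilogarithmic regression of the
    oscillation of f on [-eps, eps] against eps. f is assumed even here,
    so only the positive half-axis is sampled."""
    osc = []
    for eps in eps_list:
        x = np.geomspace(eps * 1e-3, eps, samples)   # dense grid on (0, eps]
        y = f(x)
        osc.append(y.max() - y.min())
    slope, _ = np.polyfit(np.log(eps_list), np.log(osc), 1)
    return slope

gamma = 0.5
for delta in (0.5, 1.0, 2.0):
    chirp = lambda x, d=delta: np.abs(x) ** gamma * np.sin(np.abs(x) ** -d)
    est = pointwise_exponent_at_zero(chirp, np.geomspace(1e-4, 1e-2, 12))
    print(delta, est)   # the estimate remains close to gamma for each delta
```

Whatever the value of δ, the oscillation on [−ε, ε] behaves like a constant times ε^γ, so the regression slope stays near γ: the pointwise exponent simply does not see the oscillation rate δ.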

It is therefore necessary to introduce at least one more exponent to fully describe the local irregularity. During the last few years, several quantities have been proposed, including the chirp exponent, the oscillation exponent, etc. Here we will focus on the local Hölder exponent, which we now define (for more details on the properties of this exponent, see [LEV 98a, SEU 02]). Let us first recall that, given a function \( f: \Omega \to \mathbb{R} \), where Ω is an open set, we say that f belongs to the global Hölder space \( C^s(\Omega) \), with 0 < s < 1, if there is a constant C such that, for any x, y in Ω:

(1.17) \( |f(x) - f(y)| \le C |x - y|^s \)

If s > 1 is not an integer, with m the integer part of s, then \( f \in C^s(\Omega) \) means that there exists a constant C such that, for any x, y in Ω:

(1.18) \( |f^{(m)}(x) - f^{(m)}(y)| \le C |x - y|^{s - m} \)

Now let \( \alpha_l(f, \Omega) = \sup\{ s : f \in C^s(\Omega) \} \). Notice that if Ω′ ⊂ Ω, then \( \alpha_l(f, \Omega') \ge \alpha_l(f, \Omega) \). To define the local Hölder exponent, we will use the following lemma.

LEMMA 1.1.– Let (Oi)i∈I be a family of decreasing open sets (i.e. Oi ⊂ Oj if i > j), such that:

(1.19) \( \bigcap_{i \in I} O_i = \{x_0\} \)

Let:

(1.20) \( \alpha_l(x_0) = \sup_{i \in I} \alpha_l(f, O_i) \)

Then, αl(x0) does not depend on the choice of the family (Oi)i∈I.

This result makes it possible to define the local exponent by using any family of intervals containing x0.

DEFINITION 1.3.– Let f be a function defined in a neighborhood of x0, and let (On)n∈ℕ be a decreasing sequence of open intervals converging to x0. By definition, the local Hölder exponent of f at x0, noted αl(x0), is:

(1.21) \( \alpha_l(x_0) = \lim_{n \to \infty} \alpha_l(f, O_n) \)

Let us briefly note that the local exponent is related to a notion of critical exponent of fractional derivation [KOL 01].

The local exponent possesses an advantage over αp: it is stable under the action of pseudo-differential operators. However, like αp, αl cannot by itself completely characterize the irregularity around a point. Moreover, αl is, in a certain sense, less “precise” than αp. The results presented below support this assertion.

PROPOSITION 1.2.– For any continuous f and for all x: \( \alpha_l(x) \le \alpha_p(x) \).

The following two theorems describe the structure of the Hölder functions, i.e. the functions which associate with any x the exponents of f at x.

THEOREM 1.4.– Let g be a function from ℝ to [0, +∞]. The two assertions below are equivalent:

– g is the lower limit of a sequence of continuous functions;

– there exists a continuous function f such that \( \alpha_p(x) = g(x) \) for all x.

THEOREM 1.5.– Let g be a function from ℝ to [0, +∞]. The following two assertions are equivalent:

– g is a lower semi-continuous (LSC) function;

– there exists a continuous function f such that \( \alpha_l(x) = g(x) \) for all x.

NOTE 1.2.– Let us recall that a function g defined on D is LSC if, for any x ∈ D and for any sequence (xn) in D tending to x:

(1.22) \( \liminf_{n \to \infty} g(x_n) \ge g(x) \)

Since the class of lower limits of continuous functions is much larger than that of lower semi-continuous functions, we see that αp generally supplies more information than αl. For example, αp can vary much “faster” than αl. In particular, it is possible to construct a continuous function whose pointwise Hölder function coincides with the indicator function of the set of rational numbers; this Hölder function is everywhere discontinuous, while the local Hölder function of the same signal is constantly equal to 0. The following results describe more precisely the relations between αl and αp.

PROPOSITION 1.3.– Let \( f: I \to \mathbb{R} \) be a continuous function, and assume that there exists γ > 0 such that f belongs to \( C^\gamma(I) \). Then, there exists a subset D of I such that:

– D is dense, uncountable and has Hausdorff dimension zero;

– \( \alpha_p(x) = \alpha_l(x) \) for all x in D.

Moreover, this result is optimal, i.e. there exists a function of global regularity γ > 0 such that αp(x) ≠ αl(x) for all x outside a set of zero Hausdorff dimension.

THEOREM 1.6.– Let 0 < γ < 1, let h: [0, 1] → [γ, 1] be a lower limit of continuous functions and let g: [0, 1] → [γ, 1] be a lower semi-continuous function. Assume that g(x) ≤ h(x) for all x. Then, there exists a continuous function f such that:

– \( \alpha_l(x) = g(x) \) for all x;

– \( \alpha_p(x) = h(x) \) for all x outside a set of zero Hausdorff dimension.

This theorem shows that, when the “compatibility condition” is satisfied, we can simultaneously and independently prescribe the local and pointwise regularity of a function outside a “small” set. These two measures of irregularity are thus to some extent independent and provide complementary information.

1.3.4. Signal dimension theorem

Let us investigate the relationships between the dimension of a signal and its Hölder exponents. There is no general result concerning the Hausdorff dimension, apart from the obvious bounds resulting from the inequalities \( \dim(\Gamma) \le \operatorname{Dim}(\Gamma) \le \Delta(\Gamma) \). Here is a result for Dim(Γ) [TRI 86a].

THEOREM 1.7.– If Γ is the graph of a continuous function f, then:

(1.23)

The same inequalities are true if we use the grid of the dyadic intervals:

(1.24)

We do not provide the proof of these results, which requires an evaluation of the packing measure of the graph. In the same context, one could show that if the local Hölder exponents α(un(x)) tend uniformly to a real α, then this number is also equal to Δ(Γ). However, a much more interesting equality, both simple and general, may be given for the Minkowski–Bouligand dimension of the graph.

THEOREM 1.8.– Let f be a continuous function defined on an interval J, and non-constant on J. For any ε > 0, let us call the ε-variation of f on J the arithmetic mean of the ε-oscillations:

\( \operatorname{var}_\varepsilon(f) = \frac{1}{|J|} \int_J \operatorname{osc}_\varepsilon f(t)\, dt, \qquad \operatorname{osc}_\varepsilon f(t) = \sup_{|u - t| \le \varepsilon} f(u) - \inf_{|u - t| \le \varepsilon} f(u); \)

then:

(1.25) \( \Delta(\Gamma) = \lim_{\varepsilon \to 0} \left( 2 - \frac{\log \operatorname{var}_\varepsilon(f)}{\log \varepsilon} \right) \)

The assumption that f is non-constant is necessary, as otherwise the oscillations are all zero and the quotient in (1.25) is undefined. In this case, the graph is a horizontal segment and the value of its dimension is 1.

Proof. A proof [TRI 99] using geometric arguments consists of estimating the area of the Minkowski ε-sausage Γ(ε). One shows that this area is equivalent to that of the union of the horizontal segments ∪t∈J[t − ε, t + ε] × {z(t)} centered on the graph, a quantity governed by the variation varε(z).         
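A numerical version of the variation method can be sketched as follows. It assumes the reading Δ(Γ) = 2 − lim log varε(f)/log ε of (1.25), and tests it on a Weierstrass-type graph whose box dimension is expected to be 2 − H; all names are ours.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_dimension_variation(y, dx, eps_list):
    """Variation method: read Delta(Gamma) off the scaling of the mean
    eps-oscillation, assuming Delta = 2 - log var_eps / log eps."""
    log_var = []
    for eps in eps_list:
        w = int(round(eps / dx))
        win = sliding_window_view(y, 2 * w + 1)
        osc = win.max(axis=1) - win.min(axis=1)   # eps-oscillation at each point
        log_var.append(np.log(osc.mean()))        # arithmetic mean of oscillations
    slope, _ = np.polyfit(np.log(eps_list), log_var, 1)
    return 2.0 - slope

# Weierstrass-type function with Hölder exponent H everywhere
H, dx = 0.3, 5e-5
t = np.arange(0, 1, dx)
y = sum(2 ** (-H * n) * np.sin(2**n * t) for n in range(16))
print(box_dimension_variation(y, dx, eps_list=np.geomspace(1e-3, 1e-2, 8)))
# typically close to 2 - H = 1.7
```

In line with the remark opening this section, the mean oscillation is far more robust to noisy or missing peaks than a box count over the raw graph.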

EXAMPLE 1.7.– Self-affine graphs, such as the one of Figure 1.3, are obtained from points A1, …, AN+1 of the plane with increasing abscissae, by specifying:

– an integer N ≥ 2;

– N triangular affine maps of the plane T1, …, TN such that, for each Ti, the image of the segment A1AN+1 is the segment AiAi+1. These may be written as:

EXAMPLE 1.8.– Lacunary series provide other examples of such signals; the best known is the Weierstrass function \( W(t) = \sum_{n \ge 0} \lambda^{-nH} \sin(\lambda^n t) \), with λ > 1 and 0 < H < 1.

Figure 1.3. Graph of a nowhere differentiable function, attractor of a system of four affine maps. The dimensions Δ and Dim are equal; the Hausdorff dimension lies between 1 and this common value

1.3.5. 2-microlocal analysis

In this section, we briefly present an analysis of local regularity which is much finer than that provided by the Hölder exponents described above. As observed previously, neither αp nor αl allows us to completely describe the local irregularity of a function. To obtain an exhaustive description, we need to consider an infinite number of exponents. This is the purpose of 2-microlocal analysis. This powerful tool was introduced in [BON 86], in the framework of PDEs, through Littlewood–Paley analysis. A definition based on wavelets is developed in [JAF 96]. We present here a time-domain characterization [KOL 02].

DEFINITION 1.4.– A function f belongs to the 2-microlocal space \( C^{s,s'}_{x_0} \) if there exist δ > 0 and C > 0 such that, for all (x, y) satisfying |x − x0| < δ and |y − x0| < δ:

(1.26)

To conclude this brief presentation, let us mention that algorithms exist which make it possible to numerically estimate the 2-microlocal frontier. They often allow us to calculate the values of αp and αl more precisely than a direct method (various estimation methods for the exponents and the 2-microlocal frontier are proposed in FracLab). Furthermore, it is possible to develop a 2-microlocal formalism [LEV 04a] which presents strong analogies with the multifractal formalism (see below).

1.3.6. An example: analysis of stock market price

As an illustration of some of the notions introduced above, we use this section to detail a simplified example of the fractal analysis of a signal based on Hölder exponents. Our purpose is not to develop a complete application (this would require a whole chapter), but to demonstrate how the information provided by local regularity analysis is computed and used in a practical scenario.

The signal is a stock market log. Financial analysis offers an interesting area of application for fractal methods (see Chapter 13 for a detailed study). We will consider the evolution of the Nikkei index between January 1, 1980 and November 5, 2000. This signal comprises 5,313 values and is presented in Figure 1.5. Like many stock market logs, it is extremely irregular. We calculate the local Hölder exponent of the logarithm of this signal, which is the quantity on which financial analysts work. The exponent is calculated by a direct application of Definition 1.3: at each point x, we find, for increasing values of ε, a couple (y, z) at which the oscillation of the signal in the ball of radius ε centered at x is attained. A bilogarithmic regression between the vector of oscillation values and the distances |y − z| is then performed (see the FracLab manual for more details on the procedure). As Figure 1.6 shows, most local exponents lie between 0 and 1, with some peaks above 1 and up to 3. The exponent values in [0, 1], which imply that the signal is continuous but not differentiable, confirm the visual impression of great irregularity. We can go beyond this qualitative comment by observing that the periods where “something occurs” have a definite signature in terms of the exponents: they are characterized by a dramatic increase of αl followed by very small values, below 0.2. Let us examine some examples. The most remarkable point of the local regularity graph is its maximum at abscissa 2,018, with an amplitude of 3. The most singular points, i.e. those with the smallest exponent, are situated just after this maximum: the exponent is around 0.2 for the abscissae between 2,020 and 2,050, and of the order of 0.05 between points 2,075 and 2,100. These two values are distinctly below the exponent average, which is 0.4 (the variance being 0.036). Calculations show that less than 10% of the log points possess an exponent smaller than 0.2.
This remarkable behavior corresponds to the crash of October 19, 1987, which occurs at abscissa 2,036, in the middle of the first zone of points with low regularity after the maximum: the most “irregular” days of the entire signal are thus, as expected, situated in the weeks which followed the crash. It is worth noting that this fact is much more apparent on the regularity signal than on the original log, where only the crash appears clearly, the subsequent period not displaying remarkable features.

Figure 1.5. The Nikkei index between January 1, 1980 and November 5, 2000

Let us now consider another area which contains many points of low regularity along with some isolated regular points (i.e. points having αl > 1). It corresponds to the zone between abscissae 4,450 and 4,800: this period approximately corresponds to the Asian crisis that took place between January 1997 and June 1998 (analysts do not agree upon the exact dates of the beginning and the end of this crisis: some date its beginning to mid-1997 or its end to late 1999, or much later). On the graph of the log, we can observe that this period seems slightly more irregular than others. In terms of exponents, we notice that it contains two maxima, with values greater than 1, both followed by low regularity points: this area comprises a high proportion of irregular points, since 12% of its points have an exponent lower than 0.15. This proportion is three times higher than that observed over the whole log.

Figure 1.6. Local Hölder function of the Nikkei index

The analysis performed here has remained at an elementary level. However, it has allowed us to show that major events have repercussions on the evolution of the local Hölder exponent, and that the graph of αl highlights features that are not easily visible on the original log.
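A minimal version of the oscillation regression used above can be written in a few lines and checked on a cusp of known exponent. This is our own sketch, far simpler than FracLab's estimators, and all names are ours.

```python
import numpy as np

def local_holder(y, i, dx, radii):
    """Oscillation-based estimate of the Hölder exponent at sample i:
    bilogarithmic regression of the oscillation of y in balls of radius
    eps centered at i against eps."""
    osc = []
    for eps in radii:
        w = max(1, int(round(eps / dx)))
        seg = y[max(0, i - w): i + w + 1]
        osc.append(seg.max() - seg.min())
    slope, _ = np.polyfit(np.log(radii), np.log(osc), 1)
    return slope

dx = 1e-4
t = np.arange(0, 1, dx)
y = np.abs(t - 0.5) ** 0.7          # cusp with Hölder exponent 0.7 at t = 0.5
i0 = int(0.5 / dx)
radii = np.geomspace(1e-3, 5e-2, 10)
print(local_holder(y, i0, dx, radii))   # close to 0.7
```

On real data such as the Nikkei log, the same regression is simply applied at every sample, yielding a regularity signal like the one of Figure 1.6.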

1.4. Multifractal analysis

1.4.1. What is the purpose of multifractal analysis?

In the previous section, we observed that the Hölder functions provide precise information on the regularity at each point of a signal. In applications, this information is often useful as such, but there exist many cases where it is necessary to go further. Here are three examples that highlight this necessity. In image segmentation, we expect edges to correspond to low-regularity points and hence to small exponents. However, the precise value of the Hölder exponents of contour points cannot be universal: non-linear transformations of an image, for instance, might preserve edges while modifying the exponent values. In this first situation, we see that the pointwise regularity does not provide all the relevant information and that it is necessary to supplement it with structural information. Let us now consider the issue of image texture classification. It is clear that classifying a pixel based only on its local exponent would not give satisfactory results. A more relevant approach is to use the statistical distribution of the exponents within zones. The same comment applies to the characterization of Internet traffic according to its degree of sporadicity. In this second situation, the Hölder function provides information that is too rich, which we would like to summarize in a certain sense. The last situation occurs when the exponents are too difficult to calculate: there exist, in particular, continuous signals, easy to synthesize, whose Hölder function is everywhere discontinuous. In this case, the pointwise regularity information is too complex to be used in its original form.