This book is the first comprehensive and methodologically rigorous analysis of earthquake occurrence. Models based on the theory of stochastic multidimensional point processes are employed to approximate the earthquake occurrence pattern and evaluate its parameters. The author shows that most of these parameters have universal values. These results help explain the classical earthquake distributions: Omori's law and the Gutenberg-Richter relation.
The author derives a new negative-binomial distribution for earthquake numbers, as a substitute for the Poisson distribution, and then determines a fractal correlation dimension for spatial distributions of earthquake hypocenters. The book also investigates the disorientation of earthquake focal mechanisms and shows that it follows the rotational Cauchy distribution. These statistical and mathematical advances make it possible to produce quantitative forecasts of earthquake occurrence, evaluating the earthquake rate in time, space, and focal mechanism orientation.
Page count: 517
Year of publication: 2013
Table of Contents
Series Page
Title Page
Copyright
Dedication
Preface
Acknowledgments
List of Abbreviations
List of Mathematical Symbols
Part I: Models
Chapter 1: Motivation: Earthquake science challenges
Chapter 2: Seismological background
2.1 Earthquakes
2.2 Earthquake catalogs
2.3 Description of modern earthquake catalogs
2.4 Earthquake temporal occurrence: quasi-periodic, Poisson, or clustered?
2.5 Earthquake faults: one fault, several faults, or an infinite number of faults?
2.6 Statistical and physical models of seismicity
2.7 Laboratory and theoretical studies of fracture
Chapter 3: Stochastic processes and earthquake occurrence models
3.1 Earthquake clustering and branching processes
3.2 Several problems and challenges
3.3 Critical continuum-state branching model of earthquake rupture
Part II: Statistics
Chapter 4: Statistical distributions of earthquake numbers: Consequence of branching process
4.1 Theoretical considerations
4.2 Observed earthquake numbers distribution
Chapter 5: Earthquake size distribution
5.1 Magnitude versus seismic moment
5.2 Seismic moment distribution
5.3 Is β ≡ 1/2?
5.4 Seismic moment sum distribution
5.5 Length of aftershock zone (earthquake spatial scaling)
5.6 Maximum or corner magnitude: 2004 Sumatra and 2011 Tohoku mega-earthquakes
Chapter 6: Temporal earthquake distribution
6.1 Omori's law
6.2 Seismic moment release in earthquakes and aftershocks
6.3 Random shear stress and Omori's law
6.4 Aftershock temporal distribution, theoretical analysis
6.5 Temporal distribution of aftershocks: Observations
6.6 Example: The New Madrid earthquake sequence of 1811–12
6.7 Conclusion
Chapter 7: Earthquake location distribution
7.1 Multipoint spatial statistical moments
7.2 Sources of error and bias in estimating the correlation dimension
7.3 Correlation dimension for earthquake catalogs
7.4 Conclusion
Chapter 8: Focal mechanism orientation and source complexity
8.1 Random stress tensor and seismic moment tensor
8.2 Geometric complexity of earthquake focal zone and fault systems
8.3 Rotation of double-couple (DC) earthquake moment tensor and quaternions
8.4 Focal mechanism symmetry
8.5 Earthquake focal mechanism and crystallographic texture statistics
8.6 Rotation angle distributions
8.7 Focal mechanisms statistics
8.8 Models for complex earthquake sources
Part III: Testable Forecasts
Chapter 9: Global earthquake patterns
9.1 Earthquake time-space patterns
9.2 Defining global tectonic zones
9.3 Corner magnitudes in the tectonic zones
9.4 Critical branching model (CBM) of earthquake occurrence
9.5 Likelihood analysis of catalogs
9.6 Results of the catalogs' statistical analysis
Chapter 10: Long- and short-term earthquake forecasting
10.1 Phenomenological branching models and earthquake occurrence estimation
10.2 Long-term rate density estimates
10.3 Short-term forecasts
10.4 Example: earthquake forecasts during the Tohoku sequence
10.5 Forecast results and their discussion
10.6 Earthquake fault propagation modeling and earthquake rate estimation
Chapter 11: Testing long-term earthquake forecasts: Likelihood methods and error diagrams
11.1 Preamble
11.2 Log-likelihood and information score
11.3 Error diagram (ED)
11.4 Tests and optimization for global high-resolution forecasts
11.5 Summary of testing results
Chapter 12: Future prospects and problems
12.1 Community efforts for statistical seismicity analysis and earthquake forecast testing
12.2 Results and challenges
12.3 Future developments
References
Index
Book Series: Statistical Physics of Fracture and Breakdown
Bikas K. Chakrabarti and Purusattam Ray
Why does a bridge collapse, an aircraft or a ship break apart? When does a dielectric insulation fail or a circuit fuse, even in microelectronic systems? How does an earthquake occur? Are there precursors to these failures? These remain important questions, even more so as our civilization depends increasingly on structures and services where such failure can be catastrophic. How can we predict and prevent such failures? Can we analyze the precursory signals sufficiently in advance to take appropriate measures, such as the timely evacuation of structures or localities, or the shutdown of facilities such as nuclear power plants?
Whilst these questions have long been the subject of research, the study of fracture and breakdown processes has now gone beyond simply designing safe and reliable machines, vehicles and structures. From the fracture of a wood block or the tearing of a sheet of paper in the laboratory, the breakdown of an electrical network on an engineering scale, to an earthquake on a geological scale, one finds common threads and universal features in failure processes. The ideas and observations of material scientists, engineers, technologists, geologists, chemists and physicists have all played a pivotal role in the development of modern fracture science.
Over the last three decades, considerable progress has been made in modeling and analyzing failure and fracture processes. The physics of nonlinear, dynamic, many-body, non-equilibrium statistical-mechanical systems, the exact solutions of fibre bundle models, solutions of earthquake models, numerical studies of random resistor and random spring networks, and laboratory-scale innovative experimental verifications have all opened up broad vistas of the processes underlying fracture. These have provided a unifying picture of failure over a wide range of length, energy and time scales.
This series of books introduces readers—in particular, graduate students and researchers in mechanical and electrical engineering, earth sciences, material science, and statistical physics—to these exciting recent developments in our understanding of the dynamics of fracture, breakdown and earthquakes.
This edition first published 2014 © 2014 by John Wiley & Sons, Ltd
This work is a co-publication between the American Geophysical Union and Wiley
Registered office: John Wiley & Sons, Ltd, The Atrium, Southern Gate, Chichester, West Sussex,
PO19 8SQ, UK
Editorial offices: 9600 Garsington Road, Oxford, OX4 2DQ, UK
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
111 River Street, Hoboken, NJ 07030-5774, USA
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com/wiley-blackwell.
The right of the author to be identified as the author of this work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author(s) have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication Data
Kagan, Yan Y., author.
Earthquakes : models, statistics, testable forecasts / Yan Y. Kagan.
pages cm — (Statistical physics of fracture and breakdown)
Includes bibliographical references and index.
ISBN 978-1-118-63792-0 (hardback)
1. Earthquake prediction. 2. Earthquake hazard analysis. I. Title.
QE538.8.K32 2014
551.2201′12— dc23
2013033255
A catalogue record for this book is available from the British Library.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Cover image: Earthquake. Men looking at cracks in the ground caused by a magnitude 7.1 earthquake that struck near the city of Van, Turkey, on 23rd October 2011. Photographed near Alakoy, Turkey, on 30th November 2011. Ria Novosti/Science Photo Library.
Structural damage on an apartment building during the earthquake of February 27, 2010 in Chile (Santiago). © iStockphoto.com/erlucho (Front cover).
Location of shallow earthquakes in the Global Centroid Moment Tensor (GCMT) catalog, 1976/1/1–2012/12/31. Courtesy of Göran Ekström and the GCMT project (Back cover).
Cover design by Steve Thompson
Preface
Quantitative prediction is the aim of every science. As Ben-Menahem (1995, p. 1217) puts it:
[T]he ultimate test of every scientific theory worthy of its name, is its ability to predict the behavior of a system governed by the laws of said discipline.
Accordingly, the most important issue in earthquake seismology is earthquake prediction. This term, however, has been the topic of scientific debate for decades. For example, Wood and Gutenberg (1935) write:
To have any useful meaning the prediction of an earthquake must indicate accurately, within narrow limits, the region or district where and the time when it will occur—and, unless otherwise specified, it must refer to a shock of important size and strength, since small shocks are very frequent in all seismic regions.
Because earthquake prediction is complicated by a number of factors, Wood and Gutenberg propose the term earthquake forecast as an alternative, in which, in effect, the earthquake occurrence rate is predicted.
Long-term studies, however, indicate that the prediction of individual earthquakes, as suggested in the first definition by Wood and Gutenberg, is impossible (Geller 1997; Geller et al. 1997; Kagan 1997b). Furthermore, as we show in Chapters 2 and 3, even the notion of individual earthquakes or individual faults cannot be properly defined because of earthquake process fractality. Therefore, below we treat the terms earthquake prediction and earthquake forecast as synonyms.
Available books on seismology primarily discuss the problems of elastic wave propagation and the study of Earth structure. This book takes a different approach, focusing instead on earthquake seismology, defined as the rigorous quantitative study of earthquake occurrence. Even though several books on earthquake seismology and some books on earthquake prediction are available, there are no in-depth monographs considering the stochastic modeling of fractal multidimensional processes and the rigorous statistical analysis of earthquake occurrence. In this book the results of modeling and statistical analysis are applied to evaluate the short- and long-term occurrence rates of future earthquakes, both regionally and globally, and, most importantly, to test these forecasts according to stringent criteria.
The subject of this book could therefore be roughly defined as “Statistical Seismology” (Vere-Jones 2009, 2010). There has been significant interest in the problems of statistical seismology recently: since 1998, the International Workshops on Statistical Seismology (Statsei2–Statsei7) have provided researchers with an opportunity to evaluate recent developments in statistical seismology, as well as define future directions of research (see http://www.gein.noa.gr/statsei7/). Problems explored in these meetings include the statistical behavior of earthquake occurrence and patterns, time-dependent earthquake forecasting, and forecast evaluations. In addition, in this book we investigate geometrical properties of the earthquake fault system and the interrelations of earthquake focal mechanisms.
Thus, this book is a comprehensive and methodologically rigorous analysis of earthquake occurrence. Earthquake processes are inherently multidimensional: in addition to the origin time, 3-D location, and measure of size for each earthquake, the orientation of the rupture surface and its displacement require for their representation either second-rank symmetric tensors or quaternions. Models based on the theory of stochastic multidimensional point processes are employed here to approximate the earthquake occurrence pattern and evaluate its parameters. The terms "moment" and "moment tensor," used in seismology to signify "the seismic moment" and "the seismic moment tensor" (see Section 1.2), will throughout this book be distinguished from the moments used in statistics.
Adequate mathematical and statistical techniques have only recently become available for analyzing fractal temporal, spatial, and tensor patterns of point process data generally and earthquake data in particular. Furthermore, only in the past 20–30 years have the processing power of modern computers and the quality, precision, and completeness of earthquake datasets been sufficient to allow a detailed, full-scale investigation of earthquake occurrence.
Since the early nineteenth century, the Gaussian (normal) distribution has been used almost exclusively for the statistical analysis of data. However, the Gaussian distribution is a special, limiting case of a broad class of stable probability distributions. These distributions, which, with the exception of the Gaussian law, have a power-law (heavy) tail, have recently become an object of intense mathematical investigation. They are now applied in physics, finance, and other disciplines. One can argue that they are more useful in explaining natural phenomena than the Gaussian law. For stable distributions with the power-law tail exponent 1.0 < β < 2.0, the variance is infinite; if β ≤ 1.0, the mean is infinite (see Section 5.4). The application of these distributions to the analysis of seismicity and other geophysical phenomena would significantly increase our quantitative understanding of their fractal patterns.
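As a minimal numerical sketch of this heavy-tail behavior (not from the book; the tail exponent and sample sizes are illustrative), one can draw Pareto-distributed samples, whose tail P(X > x) = (x_min/x)^β matches the power-law tail of the stable laws, and watch the sample variance fail to settle when 1.0 < β < 2.0:

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_samples(beta, size, xmin=1.0):
    """Samples with a power-law tail: P(X > x) = (xmin / x)**beta for x >= xmin."""
    u = rng.random(size)
    return xmin * u ** (-1.0 / beta)

# For beta = 1.5 the mean is finite (xmin * beta / (beta - 1) = 3),
# but the variance is infinite: the sample variance grows with n
# instead of converging.
for n in (10**3, 10**5, 10**7):
    x = pareto_samples(1.5, n)
    print(f"n = {n:>8}: mean = {x.mean():8.3f}, sample variance = {x.var():14.1f}")
```

For β ≤ 1.0 even the sample mean diverges with growing n, mirroring the infinite-mean case mentioned above.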
After careful analysis of systematic and random effects in earthquake registration and interpretation of seismograms, we show that most of these statistical distribution parameters have universal values. These results help explain such classical distributions as Omori's law and the Gutenberg-Richter relation, used in earthquake seismology for many decades. We show that the parameters of these distributions are universal constants defined by simple mathematical models. We derive a negative-binomial distribution for earthquake numbers, as a substitute for the Poisson distribution, and determine the fractal correlation dimension for spatial distributions of earthquake hypocenters. We also investigate the disorientation of earthquake focal mechanisms and show that it follows the rotational Cauchy distribution. We evaluate the parameters of these distributions in various earthquake zones, and estimate their systematic and random errors.
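The overdispersion that motivates the negative-binomial substitute can be sketched numerically (an illustration with made-up parameter values, not the book's fitted estimates): a Poisson count has variance equal to its mean, while a negative-binomial count with the same mean has a strictly larger variance, as clustered earthquake numbers do:

```python
import numpy as np

rng = np.random.default_rng(42)

mean = 10.0
p = 0.25                      # NBD success probability (illustrative value)
r = mean * p / (1.0 - p)      # choose r so both distributions share the same mean

# NumPy's parameterization: mean = r(1-p)/p, variance = r(1-p)/p**2 = mean/p.
pois = rng.poisson(mean, 200_000)
nbd = rng.negative_binomial(r, p, 200_000)

print("Poisson mean/var:", pois.mean(), pois.var())   # variance ~ mean
print("NBD     mean/var:", nbd.mean(), nbd.var())     # variance ~ mean/p > mean
```

The ratio variance/mean (here about 1/p = 4) is one simple diagnostic of clustering in observed earthquake-number distributions.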
These statistical and mathematical advances made it possible to produce quantitative forecasts of earthquake occurrence. The theoretical foundations for such forecasts based on multidimensional stochastic point processes were first proposed by Kagan (1973). Later we showed how the long- and short-term forecasts can be practically computed and how their efficiency can be estimated. Since 1999, daily forecasts have been produced, initially for several seismically active regions and more recently expanded to cover the whole Earth. The recent mega-earthquake in Tohoku, Japan, which caused many deaths and very significant economic losses, demonstrates the importance of forecasts in terms of a possible earthquake size, its recurrence time, and temporal clustering properties.
An important issue in the study of earthquake occurrence and seismic hazard is the verification of seismicity models. Until recently seismic event models and predictions were based exclusively on case histories. It was widely believed that long-term earthquake occurrence, at least for large earthquakes, was quasi-periodic or cyclic (seismic gap and characteristic earthquake hypotheses). The Parkfield earthquake prediction experiment and many other forecasts were therefore based on these models. However, when we tested the seismic gap models against the earthquake record, it turned out that the performance of the gap hypothesis was worse than a similar earthquake forecast (null hypothesis) based on a random choice (temporal Poisson model). Instead of being quasi-periodic, large earthquakes are clustered in time and space (Section 1.4). The Tohoku event consequences underscore that all statistical properties of earthquake occurrence need to be known for correct prediction: the extent of the losses was to a large degree due to the use of faulty models of characteristic earthquakes to evaluate the maximum possible earthquake size (Section 5.6).
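Such a comparison against a Poisson null can be illustrated with a toy computation (hypothetical cell rates and event counts; the actual likelihood machinery is developed in Chapter 11): a forecast beats a spatially uniform reference model when its average log-likelihood gain per observed event is positive:

```python
import numpy as np

# Forecast: fraction of the total earthquake rate assigned to each spatial cell.
rates = np.array([0.5, 0.1, 0.3, 0.1])
rates = rates / rates.sum()

# Null hypothesis: a spatially uniform Poisson model over the 4 cells.
uniform = np.full(4, 0.25)

# Observed event counts per cell (hypothetical).
events = np.array([2, 0, 1, 0])

# Average log-likelihood gain per event, in bits (information score).
I = (events * np.log2(rates / uniform)).sum() / events.sum()
print(f"information score: {I:.3f} bits/event")  # positive: forecast beats the null
```

A forecast that merely reproduces the uniform rate scores zero; one that concentrates rate where events do not occur scores negative.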
Earthquake occurrence models that are too vague to be testable, or that are rejected by rigorous objective statistical tests (see above), are not discussed in detail here. In our opinion, the only models worthy of analysis are those which produce testable earthquake forecasts.
Since this book is an initial attempt to thoroughly and rigorously analyze earthquake occurrence, many unresolved issues still remain. In the final Section (1.3), we list some challenging questions that can now be addressed by thorough theoretical studies and observational statistical analysis. There is, of course, the possibility that some of these problems have been solved in other scientific disciplines; in this case, we will need to find out how to implement these solutions in earthquake science.
This book is the result of my work over many years. I am grateful to the various scientists throughout the world with whom I have collaborated. There have been far too many people for me to list them individually here. There are nevertheless a few individuals I do want to especially thank.
First of all, I would like to mention the seismologists and mathematicians I worked with in the former Soviet Union from 1957–1974: Anatoliy A. Zhigal'tsev, Stanislav S. Andreev, Yuriy N. Godin, Michail S. Antsyferov, Nina G. Antsyferova (Goykhman), Igor M. Lavrov, Zinoviy Gluzberg (Zinik), Yuriy V. Riznichenko, Alexandr A. Gusev, and George M. Molchan. The work of some of these individuals sparked my interest in earthquake seismology and applying mathematical and statistical methods to the solution of seismological problems.
I wish to express deep gratitude to Leon Knopoff, who brought me to UCLA in 1974 and who was my coauthor for many years. I would also like to gratefully acknowledge my long-term collaborator Dave Jackson. About half of my papers in the United States were developed in cooperation with these colleagues, and I learned a lot from them.
I was also greatly influenced by my collaboration with statisticians David Vere-Jones and George Molchan, from whom I learned about many issues in mathematical statistics. Their recommendations have been used throughout my work over the years.
I have also benefited from the advice of and very useful discussions with many coauthors of my papers, including Peter Bird, Frederick Schoenberg, Robert Geller, Heidi Houston, Max Werner, Agnés Helmstetter, Didier Sornette, Zhengkang Shen, Paul Davis, Ilya Zaliapin, Francesco Mulargia, Qi Wang, Silvia Castellaro, and Yufang Rong among others.
Several individuals have, through my reading of their work and through conversations with them, significantly influenced my approach to solving the problems described in this book. Of these I would like to note with special gratitude Benoit Mandelbrot, Per Bak, Akiva Yaglom, George Backus, Vladimir Zolotarev, Adam Morawiec, Yosi Ogata, Cliff Frohlich, Andrei Gabrielov, Fred Schwab, Vlad Pisarenko, Philip Stark, Tokuji Utsu, Göran Ekström, Jiancang Zhuang, Ritsuko Matsu'ura, Jeremy Zechar, Yehuda Ben-Zion, William Newman, David Rhoades, Danijel Schorlemmer, David Harte, and Peiliang Xu.
I am grateful to Kathleen Jackson who helped me to become a better writer by editing many of my published papers.
I also want to offer profound thanks to several computer specialists who helped me in my calculations and in other computer-related tasks: John Gardner, Robert Mehlman, Per Jögi, Oscar Lovera, and Igor Stubailo.
Reviews by an anonymous reviewer and by Peter Bird have been very helpful in revising and improving the book manuscript.
Finally, I would like to thank several people from John Wiley & Sons publishing company for guiding me through the process of the book production. I am grateful to Ian Francis, Delia Sandford, and Kelvin Matthews, all of Oxford, UK. I thank Alison Woodhouse (Holly Cottage, UK) for tracking down permissions for my previously published figures and tables. The copy-editing work by Susan Dunsmore (Glasgow, UK) is appreciated. I am also grateful to Production Editor Audrie Tan (Singapore) and Project Manager Sangeetha Parthasarathy (Chennai, India) for their work in producing and typesetting the book.
AIC
Akaike Information Criterion
ANSS
Advanced National Seismic System (catalog)
CBM
Critical Branching Model
CDF
Cumulative distribution function
CLVD
Compensated Linear Vector Dipole
CSEP
Collaboratory for Study of Earthquake Predictability
DC
Double-couple
ED
Error diagram
ETAS
Epidemic Type Aftershock Sequence
GCMT
Global Centroid Moment Tensor (catalog)
GPS
Global Positioning System
G-R
Gutenberg-Richter (relation)
GSRM
Global Strain Rate Map
IGD
Inverse Gaussian Distribution
i.i.d.
independent identically distributed
INSAR
Interferometric Synthetic Aperture Radar
MLE
Maximum likelihood estimate
NBD
Negative binomial distribution
PDE
Preliminary Determinations of Epicenters (catalog)
PDF
Probability density function
PF
Probability function
RELM
Regional Earthquake Likelihood Models
ROC
Relative Operating Characteristic
SCEC
Southern California Earthquake Center
SOC
Self-Organized Criticality
STF
Source Time Function
TGR
Tapered G-R (distribution)
USGS
U.S. Geological Survey
3-D
three-dimensional
Γ_CLVD
CLVD Γ-index, Eq. 8.15
DC1
double-couple earthquake source with no symmetry, p. 160
DC2
double-couple source with C_2, order 2 cyclic symmetry, p. 160
DC4
double-couple source with D_2, order 2 dihedral symmetry, p. 160
I
Information score, Eqs. 11.11, 11.13
I_0
Forecast information score (specificity), Eq. 11.14
I_1
Information score (success) for cell centers of forecasted events, Eq. 11.17
I_1′
Information score for earthquakes which occurred in the training period
I_2
Information score for forecasted events, Eq. 11.18
I_3
Information score for simulated events, Eq. 11.19
I_4
Information score based on forecasted events curve, Eq. 11.23
m_b
body-wave magnitude
m_L
local magnitude
M_S
surface-wave magnitude
M_t
seismic moment detection threshold of a seismographic network
m_t
magnitude threshold
R^3 or R^2
Euclidean space
S^2
two-dimensional (regular) sphere
S^3
three-dimensional sphere
SO(3)
group of 3-D rotations
Tr(·)
square matrix trace
Our purpose is to analyze the causes of recent failures in earthquake forecasting, as well as the difficulties of earthquake investigation. It is widely accepted that failure has dogged the extensive efforts of the last 30 years to find "reliable" earthquake prediction methods, efforts which culminated in the Parkfield prediction experiment (Roeloffs and Langbein 1994; Bakun et al. 2005 and their references) in the USA and the Tokai experiment in Japan (Mogi 1995). Lomnitz (1994), Evans (1997), Geller et al. (1997), Jordan (1997), Scholz (1997), Snieder and van Eck (1997), and Hough (2009) discuss various aspects of earthquake prediction and its lack of success. Jordan (1997) comments that
The collapse of earthquake prediction as a unifying theme and driving force behind earthquake science has caused a deep crisis.
Why does theoretical physics fail to explain and predict earthquake occurrence? The difficulties of seismic analysis are obvious. Earthquake processes are inherently multidimensional (Kagan and Vere-Jones 1996; Kagan 2006): in addition to the origin time, 3-D locations, and measures of size for each earthquake, the orientation of the rupture surface and its displacement requires for its representation either second-rank tensors or quaternions (see more below). Earthquake occurrence is characterized by extreme randomness; the stochastic nature of seismicity is not reducible by more numerous or more accurate measurements. Even a cursory inspection of seismological datasets suggests that earthquake occurrence as well as earthquake fault geometry are scale-invariant or fractal (Mandelbrot 1983; Kagan and Vere-Jones 1996; Turcotte 1997; Sornette 2003; Kagan 2006; Sornette and Werner 2008). This means that the statistical distributions that control earthquake occurrence are power-law or stable (Lévy-stable) distributions. See also http://www.esi-topics.com/earthquakes/interviews/YanYKagan.html.
After looking at recent publications on earthquake physics (for example, Kostrov and Das 1988; Lee et al. 2002; Scholz 2002; Kanamori and Brodsky 2004; Ben-Zion 2008), one gets the impression that knowledge of the earthquake process is still at a rudimentary level. Why has progress in understanding earthquakes been so slow? Kagan (1992a) compared the seismicity description to another major problem in theoretical physics: the turbulence of fluids. Both phenomena are characterized by multidimensionality and stochasticity. Their major statistical ingredients are scale-invariant, and both have hierarchically organized structures. Moreover, the scale of self-similar structures in seismicity and turbulence extends over many orders of magnitude. The size of the major structures which control deformation patterns in turbulence and brittle fracture is comparable to the maximum size of the region (see more in Kagan 2006).
Yaglom (2001, p. 4) commented that the status of turbulence differs from that of many other complex problems which twentieth-century physics has solved or considered:
[These problems] deal with some very special and complicated objects and processes relating to some extreme conditions which are very far from realities of the ordinary life … However, turbulence theory deals with the most ordinary and simple realities of the everyday life such as, e.g., the jet of water spurting from the kitchen tap. Therefore, the turbulence is well-deservedly often called “the last great unsolved problem of the classical physics.”
Although solving the Navier-Stokes equations, which describe turbulent motion in fluids, is one of the seven millennium mathematical problems for the twenty-first century (see http://www.claymath.org/millennium/), the turbulence problem is not among the ten millennium problems in physics presented by the University of Michigan, Ann Arbor (see http://feynman.physics.lsa.umich.edu/-strings2000/millennium.html), or among the 11 problems posed by the National Research Council's board on physics and astronomy (Haseltine 2002). In his extensive and wide-ranging review of current theoretical physics, Penrose (2005) does not include turbulence or the Navier-Stokes equations in the book index.
Like fluid turbulence, the brittle fracture of solids is commonly encountered in everyday life, but so far there is no real theory explaining its properties or predicting outcomes of the simplest occurrences, such as a glass breaking. Although computer simulations of brittle fracture (for example, see O'Brien and Hodgins 1999) are becoming more realistic, they cannot yet provide a scientifically faithful representation. Brittle fracture is a more difficult scientific problem than turbulence, and while the latter has attracted first-class mathematicians and physicists, no such interest has been shown in the mathematical theory of fracture and large-scale deformation of solids.
In this book we first consider multidimensional stochastic models approximating earthquake occurrence. Then we apply modern statistical methods to investigate distributions of earthquake numbers, size, time, space, and focal mechanisms. Statistical analysis of earthquake catalogs based on stochastic point process theory provides the groundwork for long- and short-term forecasts. These forecasts are rigorously tested against future seismicity records. Therefore, here statistical study of earthquake occurrence results in verifiable earthquake prediction.
The book has 12 chapters. In this chapter, we discuss the fundamental challenges facing earthquake science. In Chapter 2 we review the seismological background information necessary for further discussion, as well as basic models of earthquake occurrence. Chapter 3 describes several multidimensional stochastic models used to approximate earthquake occurrence; they are all based on the theory of branching processes and model the multidimensional structure of earthquake occurrence. Chapter 4 discusses the distribution of earthquake numbers in various temporal-spatial windows. In Chapters 5–8 some evidence for the scale-invariance of the earthquake process is presented; in particular, one-dimensional marginal distributions for the multidimensional earthquake process are considered. Fractal distributions of earthquake size, time intervals, spatial patterns, focal mechanisms, and stress are discussed. Chapter 9 describes the application of stochastic point processes to the statistical analysis of earthquake catalogs and summarizes the results of such analysis. Chapter 10 applies the results of Chapter 9 to long- and short-term prediction of earthquake occurrence. Methods of quantitative testing of earthquake forecasts, and of measuring their effectiveness or skill, are discussed in Chapter 11. The final discussion (Chapter 12) summarizes the results obtained thus far and presents problems and challenges still facing seismologists and statisticians.
The discussion in this chapter mainly follows Kagan (2006). Since this book is intended for seismologists as well as statisticians, physicists, and mathematicians, we briefly describe earthquakes and earthquake catalogs as the primary objects of the statistical study. A more complete discussion can be found in Bullen (1979), Lee et al. (2002), Scholz (2002), Bolt (2003), and Kanamori and Brodsky (2004). As a first approximation, an earthquake may be represented by a sudden shear failure, appearing as a large quasi-planar dislocation loop in rock material (Aki and Richards 2002).
Figure 2.1a shows a fault-plane diagram. Earthquake rupture starts on the fault-plane at a point called the “hypocenter” (the “epicenter” is a projection of the hypocenter on the Earth's surface), and propagates with a velocity close to that of shear waves (2.5–3.5 km/s). The “centroid” is in the center of the ruptured area. Its position is determined by a seismic moment tensor inversion (Ekström et al. 2012, and references therein). As a result of the rupture, two sides of the fault surface are displaced by a slip vector along the fault-plane. For large earthquakes, such displacement is on the order of a few meters.
Fig. 2.1 Schematic diagrams of earthquake focal mechanism. (a) Fault-plane diagram—final rupture area (see text). (b) Double-couple source: equivalent forces yield the same displacement in the far-field as the extended fault rupture of item (a). (c) Equal-area projection on the lower hemisphere (Aki and Richards 2002, p. 110) of quadrupole radiation patterns. The null (b or n) axis is orthogonal to the t- and p-axes; it lies on the intersection of the fault and auxiliary planes, that is, perpendicular to the paper sheet in this display.
Source: Kagan (2006), Fig. 1.
The earthquake rupture excites seismic waves which are registered by seismographic stations. The seismograms are processed by computer programs to obtain a summary of the earthquake's properties. Routinely, these seismogram inversions characterize earthquakes by their origin times, hypocenter (centroid) positions, and second-rank symmetric seismic moment tensors.
Equivalent to the earthquake focus is a quadrupole source of a particular type (Fig. 2.1b), known in seismology as a "double-couple" or DC (Burridge and Knopoff 1964; Aki and Richards 2002; Kagan 2005b; Okal 2013). Figure 2.1c represents a "beachball"—the quadrupolar radiation pattern of earthquakes. The focal plots involve painting on a sphere the sense of the first motion of the far-field primary (P) waves: solid for compressional motion and open for dilatational. The two orthogonal planes separating these areas are the fault and the auxiliary planes. During routine determination of focal mechanisms, it is impossible to distinguish these planes. Their intersection is the null axis (b-axis or n-axis); the p-axis is in the middle of the open lune, and the t-axis in the middle of the closed lune. These three axes are called the "principal axes of an earthquake focal mechanism," and their orientation defines the mechanism.
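The relation between a double-couple source and its principal axes can be sketched numerically (an illustrative fault geometry, not a real event): for a pure DC the symmetric moment tensor is traceless, and its eigenvectors, ordered by eigenvalue, give the p-, null (b)-, and t-axes:

```python
import numpy as np

M0 = 1.0                         # scalar seismic moment (arbitrary units)
fault_normal = np.array([1.0, 0.0, 0.0])   # hypothetical fault-plane normal
slip_dir = np.array([0.0, 1.0, 0.0])       # hypothetical slip direction in the plane

# Double-couple moment tensor: symmetric and traceless (purely deviatoric).
M = M0 * (np.outer(fault_normal, slip_dir) + np.outer(slip_dir, fault_normal))

eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
p_axis = eigvecs[:, 0]   # most negative eigenvalue -> p-axis
b_axis = eigvecs[:, 1]   # zero eigenvalue        -> null (b) axis
t_axis = eigvecs[:, 2]   # most positive eigenvalue -> t-axis

print("eigenvalues:", eigvals)   # (-M0, 0, +M0) for a pure double couple
print("trace:", np.trace(M))     # 0: no volume change
```

The t- and p-axes bisect the angles between the fault and auxiliary planes, which is why routine moment-tensor inversion cannot tell the two planes apart.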
