Under certain CO2 emission scenarios, the atmospheric concentration could triple its pre-industrial level by the end of the century. The very large numerical models intended to anticipate the corresponding climate evolutions are designed and quantified from the laws of physics. However, some of the underlying phenomena remain poorly understood: the genesis of clouds, the details of the greenhouse effect, the role of solar activity, etc. This book approaches the issue of climate modeling differently: using proven techniques for the identification of black-box models. From climate observations reaching back over a millennium, the global models obtained are validated statistically and confirmed by the resulting simulations. The book thus provides constructive elements that can be reproduced by anyone versed in numerical simulation, expert climatologist or not. It is accessible to any reader interested in the issues of climate change.
Contents
1 Introduction
1.1. Context
1.2. Identification
1.3. Expectations and results
1.4. Contents of the work
2 Climatic Data
2.1. Sources
2.2. Global temperature
2.3. Concentration of CO2 in the atmosphere
2.4. Solar activity
2.5. Volcanic activity
3 The War of the Graphs
3.1. History
3.2. Inconsistent controversies
3.3. Usable data
4 Formulating an Energy Balance Model
4.1. State models and transmittance
4.2. Structure of an energy balance model
4.3. Specificity of EBMs
4.4. Dynamic parametrization
5 Presumed Parameters
5.1. Terminology
5.2. Climate sensitivity Sclim
5.3. Coefficient of radiative forcing α1
5.4. The climate feedback coefficient λG
5.5. Sensitivity to irradiance S2
5.6. Sensitivity to volcanic activity S3
5.7. Climate or anthropogenic sensitivity
5.8. Review of uncertainties
6 Identification Method
6.1. The current state of affairs
6.2. Output error method
6.3. Estimating the error variance
6.4. Hypothesis test and confidence regions
6.5. Conditions of application
7 Partial Results
7.1. A selection of data
7.2. Free identification
7.3. Forced identifications
7.4. Statistical analysis
8 Overall Results
8.1. Preliminary comments
8.2. Regions and intervals of confidence
8.3. Hypothesis test
8.4. Comments
9 Historical Simulations
9.1. Overview of IPCC simulations
9.2. Comparative simulations
9.3. Representative concentration pathways (RCPs)
9.4. Comparative radiative forcing
10 Long-term Climate Projections
10.1. IPCC scenarios and projections
10.2. EBM compatible scenarios
10.3. Long-term projections
10.4. A disaster scenario
11 Short-term Predictions
11.1. Decadal time scale predictions by GCM
11.2. The climate’s natural variability
11.3. State estimate and prediction
11.4. Decadal time scale predictions by EBM
11.5. A posteriori predictions
12 Conclusions
12.1. On the identification
12.2. Climate sensitivity
12.3. Solar activity
12.4. Predictive capacity
12.5. The climate change in question
12.6. Prospects
Bibliography
Index
First published 2014 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2014
The rights of Philippe de Larminat to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2014950500
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISSN 2051-2481 (Print)
ISSN 2051-249X (Online)
ISBN 978-1-84821-777-5
The IPCC (Intergovernmental Panel on Climate Change) was created in 1988 under the auspices of the UN. Its aim is the scientific study of the causes of the global warming observed over the course of the 20th Century, of how it is likely to evolve in the future and of its human and environmental consequences, in order, ultimately, to inform appropriate policy decisions.
At the end of September 2013, at a plenary meeting in Stockholm, the IPCC presented the draft version of the Working Group 1 contribution to its fifth assessment report: “Climate Change 2013, The Physical Science Basis”. The Summary for Policymakers, as it is known, was debated and approved there, ahead of the approval of the report as a whole (October 2014, Copenhagen). These two documents, AR5 (Fifth Assessment Report) and SPM (Summary for Policymakers), embody the current expression of consensus in the scientific community. They are available on the IPCC website and are referred to regularly throughout this work.
According to the final press release, taken from the SPM (p. 17): “It is extremely likely that human influence has been the dominant cause of the warming observed since the mid-20th Century”. In the fourth report (AR4, 2007), this statement was qualified only as “very likely”. In the highly standardized language of the IPCC, this means that its confidence in attributing such warming to human influence has increased from 90% to 95%.
This confidence is less evident in the texts themselves. Of all the quantified evaluations in the SPM, the most significant is without doubt what is known as the planet’s climate sensitivity. It quantifies the equilibrium temperature change that would be caused by a doubling of the concentration of CO2 in the atmosphere. According to the SPM (p. 14): “equilibrium climate sensitivity is likely in the range of 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence) and very unlikely greater than 6°C (medium confidence).”
According to the IPCC’s future concentration scenarios, the level of CO2 may well quadruple over the course of the next century. At the extremes of the range above (1°C to 6°C per doubling), the consequences of such a quadrupling range from the minor to the catastrophic: 2°C or 12°C. Moreover, the likely range has broadened since 2007. The IPCC highlights the fact that “the lower limit of the likely range evaluated (1.5°C) is therefore lower than the 2°C stated in the AR4”. Aware that the alarmist nature of the message might thereby be diluted, scientists justified this modification to the governmental delegates (p. 14): “this assessment reflects improved understanding of climate sensitivity, the extended temperature record in the atmosphere and the ocean, and new estimates of radiative forcing”. Uncertainty has therefore increased as knowledge has broadened, despite the 95% confidence stated. It is on this basis that international agreements are entered into, involving the annual expenditure of several trillion dollars (several points of global GDP).
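The arithmetic behind these two figures can be made explicit. Under the logarithmic forcing relation that is standard in the literature (our notation, implicit in the very notion of a sensitivity per doubling of CO2):

```latex
\Delta T_{\mathrm{eq}} \;=\; S_{\mathrm{clim}}\,\frac{\ln(C/C_0)}{\ln 2}
\qquad\Rightarrow\qquad
C = 4\,C_0 \;\;\text{gives}\;\; \Delta T_{\mathrm{eq}} = 2\,S_{\mathrm{clim}}
```

A quadrupling amounts to two doublings, so the bounds of 1°C and 6°C per doubling translate into warmings of 2°C and 12°C respectively.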
Scientifically, the likelihoods mentioned above must not be taken at face value. Their levels are negotiated so as to reach a consensus with political representatives (more than 190 governmental delegations were represented in Stockholm). Moreover, the IPCC states (AR5, 1.4.4) that they do not necessarily come from actual statistical calculations, but simply express the confidence experts have in their own judgments.
With this in mind, anything which can help to give a more exact evaluation of the planet’s climatic parameters would be most welcome. This is the case for model identification techniques, which are this author’s field of expertise.
Identifying a process consists of determining a mathematical model, often reduced to external behavior, from the observation of input and output data (causes and effects). In the case of the climatic process, the relevant inputs are the atmospheric concentration of CO2, solar activity and volcanic activity. The output is the overall surface temperature of the Earth. The theory of the identification of dynamic systems has been highly developed for several decades [LJU 87, LJU 99]. Seemingly, all the ingredients are available for applying it to the overall climate system of the Earth: simple usable models with a limited number of parameters, observations of the input and output signals, and proven software toolboxes (MATLAB®: System Identification Toolbox).
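To fix ideas, here is a minimal sketch of this input/output framing, with synthetic data standing in for the real archives and a deliberately crude static model (one coefficient per input); the dynamic model structure actually used in this book is introduced in Chapter 4:

```python
import numpy as np

# Inputs u (causes): CO2 concentration, solar activity, volcanic activity.
# Output y (effect): global surface temperature anomaly.
rng = np.random.default_rng(0)
n = 150                                    # e.g. 150 annual samples
u = rng.standard_normal((n, 3))            # columns: [CO2, solar, volcanic]
true_theta = np.array([0.5, 0.3, -0.2])    # arbitrary illustrative sensitivities
y = u @ true_theta + 0.1 * rng.standard_normal(n)   # temperature plus noise

# Black-box identification: least-squares fit of one sensitivity coefficient
# per input, using only the observed causes and effects, with no physical prior.
theta_hat, *_ = np.linalg.lstsq(u, y, rcond=None)
print("estimated sensitivities:", theta_hat.round(3))
```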
One would therefore expect to find reams of studies on the subject. Yet this is not the case. The term identification (in the systemic sense) does not appear once in the 1,550 pages of the AR5, nor in the title of any of the 9,200 publications surveyed there. On the Internet, a keyword search (identification, climate, model, etc.) returns nothing relevant. The only publication on the subject that we are aware of is entitled “A fractal climate response function can simulate global average temperature trends of the modern era and the past millennium”, by Van Hateren [VAN 13]. However, nothing in this title refers to identification. The closest keyword is “modeling”, and none of the bibliographic references given refer to the great masters of identification theory (Åström, Ljung, Söderström, etc.). It is quite possible that the author is applying identification unknowingly, just as Molière’s Monsieur Jourdain spoke prose without knowing it. With the exception of the excellent paper mentioned above, we could not find any other significant work on the global modeling of the climatic process through identification.
Indeed, the IPCC has long been checking its models against the available historical climate data: its large numerical models, based on the laws of physics, as well as its simplified models, based on energy balances. Yet identification is not involved. At most, these models undergo partial adjustments (tuning of closure parameters) or fingerprinting (detection and attribution of the anthropogenic impact).
According to Hervé Le Treut (2004), Director of the Institut Pierre-Simon Laplace: “numerical models (i.e. large-scale physical models, simulated by digital computers) play a key role in studies of the greenhouse effect because they are the only tool which can be used to evaluate future climates: the analogy with climates of past eras which experienced different CO2 levels and the extrapolation toward the future of climatic data collected during the 20th Century provide unarguably precious information, but can only be interpreted with the help of physical models”.
In this work, we look instead to push the logic of identification to its conclusion, allowing the climatic data to speak for themselves as “black box” inputs and outputs (causes and effects), without constraining them by any prior knowledge. This is not without its difficulties: the Earth’s climatic process is at the limit of what can be identified. To succeed, identification requires input data that are sufficiently accurate and contain a sufficient number of significant events. Here, the effects caused by the inputs are partly obscured by the random fluctuations of the climate. Regarding CO2, the first significant changes date back less than a century, and their effects are difficult to distinguish from natural variations, both having the same order of magnitude in size and duration. Furthermore, to observe relatively large temperature variations, it is necessary to look back over more than a millennium, where the uncertainties of paleoclimatic reconstructions are added to the natural fluctuations. Finally, the structure of the model must be finely tuned to the objectives as well as to the identification method; otherwise the data will remain unreadable and the analysis of uncertainties intractable.
Nevertheless, this text shows that it is possible to obtain significant results in this way. It is therefore surprising that the community of climatologists ignores a technique which is taught at undergraduate level, despite the fact that all the ingredients and application tools are readily available. It is possible that in trialing such an approach, incoherent results were obtained and therefore not published, or that results were self-censored because they aligned poorly with the mainstream results presented by the IPCC.
Above, we criticized the fact that the current state of physical knowledge does not allow the planet’s fundamental climatic parameters to be assessed accurately. Although unable to work miracles, identification can nevertheless provide results which call into question the current scientific consensus on what is commonly referred to as “climate change”.
Firstly, it will be argued that the assertion that the warming seen over the last century is caused by human action is neither confirmed nor contradicted by the observations. It therefore remains based solely on physical considerations, subject to a number of uncertainties addressed later (section 5.8). At the very least, identification can help to eliminate the extreme high values of climate sensitivity which have been put forward. This result falls short of expectations, but it serves to counter the IPCC’s familiar argument that the simple observation of climatic data gives evidence of the human influence on global warming.
Subsequently, the estimate of the sensitivity coefficient for solar activity and its range of uncertainty clearly show that fluctuations in solar activity constitute the predominant cause of recent global warming. The IPCC is opposed to this hypothesis, arguing that the variations in solar irradiance are too weak, and denying that there is any other way in which the sun may have an effect. However, the statistical analysis is clear: the sun can explain both the large and the small climatic variations which can be observed despite the natural variability of the climate. This analysis relies on climatic databases which are, as a whole, accepted by the IPCC (AR5, Chapter 5).
Beyond statistical analysis, the predictive power of the identified models helps to confirm their validity. Solely on the basis of information known in 2000, our models were able to provide a remarkably accurate reproduction of the “climatic pause” which occurred shortly afterwards and which continues even now. It is not so with the IPCC models: the observed global temperatures systematically fall below the lower end of the range of short-term projections produced by these models, even those updated in 2006.
Long-term predictions are highly dependent on the future of solar activity, and the author does not have the expertise necessary to assess the projections made by specialists in solar physics. He is likewise unable to confirm or contradict the hypotheses on the mechanisms of the greenhouse effect and the climate sensitivity coefficient which results from them. The models identified yield a wide range of extrapolations of the past millennium’s climate, in which, even in the worst case scenario, a warming of two degrees above pre-industrial temperatures is unlikely to be reached by the end of the 21st Century, and the current climatic pause may even be the first sign of a return to the little ice age of the 17th and 18th Centuries.
The goal of this work is to describe our methodological approach accurately enough that a reader equipped with some knowledge of systems theory, modeling and simulation can verify its validity and, if necessary, reproduce and use the models. Only Chapters 4 and 6 pose any difficulty to such a reader; nevertheless, they are within reach of any Bachelor’s degree-level student of physics and mathematics. The reader with a basic scientific background may prefer to give them a quick overview.
Chapter 2 presents the climatic variables and data. The large institutes and organizations (NOAA, GISS, CRU) make climatic data available. The various “historic” temperature series agree closely with one another from 1850 or 1880 onwards, when thermometric measurements started to become widespread around the world. Data from before this time belong to paleoclimatology, and are given in the form of reconstructions created from the traces, markers or substitute measurements (proxies) left by the climate on the Earth and in the oceans. These two types of data (instrumental and proxy) cannot be used in their raw form for the purposes of identification. They need to be linked together in order to be processed as a single continuous series through time.
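By way of illustration only (the linking procedure actually used is described in Chapter 2), one simple way to join a proxy reconstruction to an instrumental record is to rescale the proxy so that its mean and variance match those of the instrumental series over their period of overlap, then concatenate the two:

```python
import numpy as np

def splice(proxy, instr, overlap):
    """Rescale `proxy` so its mean and variance match `instr` over the
    `overlap` samples they share (end of proxy = start of instrumental),
    then concatenate into one continuous series."""
    p, i = proxy[-overlap:], instr[:overlap]
    scaled = (proxy - p.mean()) * (i.std() / p.std()) + i.mean()
    return np.concatenate([scaled[:-overlap], instr])

# Toy example: a millennial proxy ending where the instrumental record begins.
proxy = np.sin(np.linspace(0.0, 20.0, 1000)) + 0.1
instr = 0.8 * np.sin(np.linspace(19.0, 23.0, 200))
series = splice(proxy, instr, overlap=50)
print(series.shape)   # (1150,): one continuous record, proxy then instrumental
```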
Chapter 3 discusses a regrettable debate, a war of graphs in which the parties exchange inconsistent arguments, often reduced to the display of climatic curves. At times, these curves are contested to the point where the credibility of paleoclimatic data in general is in doubt, jeopardizing the very principle of identifying a climatic model. In practice, we are able to disregard this controversy entirely by processing all the available data without taking one side or the other. This collection of data is not exhaustive, but its diversity is such that our conclusions cannot be accused of resting on “cherry picking”.
Chapter 4 introduces the structure of the models which we would like to identify. It is taken from the class of models known as EBMs, or Energy Balance Models. The simplest are static models, reduced to three or four coefficients; they are too basic to give an accurate picture of reality. The most complex already constitute first drafts of GCMs, or General Circulation Models, of the atmosphere and oceans; these have too many parameters to be identifiable, many of them being redundant in terms of input/output behavior. The characteristic feature of the structure used here is that each input is assigned its own balance sensitivity coefficient, but all are subject to the same heat-transfer transients. The result is a “black box” in which certain physical coefficients appear only in combination and remain individually out of reach. This is the compromise made to strike the right balance between too many and too few parameters.
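In transfer (transmittance) form, this structure can be summarized as follows (our notation, anticipating Chapter 4):

```latex
T(s) \;=\; H(s)\,\bigl[S_1\,u_1(s) + S_2\,u_2(s) + S_3\,u_3(s)\bigr] \;+\; v(s),
\qquad H(0) = 1
```

where u1, u2 and u3 are the CO2, solar and volcanic inputs, the Si are the balance sensitivity coefficients, H(s) is the heat-transfer dynamics common to all inputs, v represents the climate’s natural variability, and the normalization H(0) = 1 makes each Si an equilibrium sensitivity.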
Chapter 5 brings together the assumptions relating to fundamental climatic parameters of energy balance models, as well as their uncertainty ranges. These are taken, directly or indirectly, from official IPCC publications: SPM and AR5.
Chapter 6 examines the identification method. It is the simplest and most reliable possible: least squares on the output error (OE: Output Error method). Given the nature of the data, this estimator is not statistically optimal, but there is no reason to believe that it is far from optimal. Moreover, as the estimate does not result from a maximum likelihood calculation, the usual uncertainty formulas do not apply, and a method for calculating the uncertainties is developed specifically, without this giving rise to any particular difficulty. The end product is a reliable instrument, both for determining the parametric estimates and for the associated uncertainties.
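In outline (standard output-error formalism, our notation rather than a quotation from this book):

```latex
\hat{\theta} = \arg\min_{\theta} J(\theta), \qquad
J(\theta) = \sum_{k=1}^{N}\bigl(y_k - \hat{y}_k(\theta)\bigr)^{2}, \qquad
\hat{\sigma}^{2} = \frac{J(\hat{\theta})}{N - p}
```

Here ŷk(θ) is the output simulated from the inputs alone, never from past measured outputs (which is what distinguishes the output-error method from equation-error schemes), N is the number of samples and p the number of estimated parameters.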
Chapter 7 gives a first overview of the identification results. From our catalogue of climatic archives, we have selected an initial combination out of the sixteen possible (four temperature reconstructions and as many reconstructions of solar irradiance). Two identifications are presented. The first is a “free” identification, whereby the six parameters of the structure chosen in Chapter 4 minimize the error between the simulated global temperature and the historic temperature data, without any a priori constraints being imposed. The second is a “forced” identification, where some parameters are fixed to comply with the assumptions given in Chapter 5. Some of the parameters in the free identification land very far outside the IPCC range, especially as regards the climate’s sensitivity to solar irradiance. As far as recent warming (end of the 20th Century) is concerned, visual examination of the simulated output temperatures shows that both reproduce this warming equally well. The difference is that with free identification the contribution of solar irradiance strongly predominates over that of CO2 levels, while the opposite is true for forced identification. However, the IPCC experts claim that it is physically impossible for solar irradiance to have a significant impact on the climate. It is therefore important to go further and not simply rely on a visual impression. Statistical analysis starts by assessing the autocorrelation function of the output error and its cross-correlation with the input signals. This shows that the constraints of the forced identification lead to a strong correlation of the output error with solar irradiance, pointing towards a cause-effect relationship which is not taken into account. Even more importantly, the associated confidence regions show that the IPCC’s assumption of weak sensitivity to solar irradiance must be rejected, with a very low likelihood of error in such a result. This rejection is based not on considerations of theoretical physics, which are excluded from our study, but on the statistical processing of observations. If the observations and the processing are correct, one must therefore conclude that the assumption given above is false.
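The correlation diagnostics in question are standard; here is a minimal sketch (purely illustrative, on synthetic data, using plain NumPy rather than any particular toolbox):

```python
import numpy as np

def xcorr(a, b, max_lag=20):
    """Normalized cross-correlation of two series for lags 0..max_lag."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    return np.array([np.dot(a[:n - k], b[k:]) / (n - k) for k in range(max_lag + 1)])

# residual = observed minus simulated temperature; u_solar = irradiance input.
# Toy data: the residual is deliberately correlated with the solar input, to
# mimic the symptom displayed by the forced identification.
rng = np.random.default_rng(1)
u_solar = rng.standard_normal(300)
residual = 0.6 * u_solar + rng.standard_normal(300)

print("residual autocorrelation:", xcorr(residual, residual, 5).round(2))
print("cross-corr with solar   :", xcorr(residual, u_solar, 5).round(2))
# Large values at low lags flag a cause-effect relationship left unmodeled.
```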
Chapter 8 extends this analysis to the sixteen possible combinations of the four paleotemperatures and the four reconstructions of solar irradiance. The overwhelming majority confirm the previous analysis. The exceptions all arise from the same temperature reconstruction, that of Phil Jones and Michael Mann [JON 04], who are active protagonists in the graph war mentioned above. Even though processing their reconstruction does not allow the hypothesis of weak sensitivity to solar irradiance to be rejected, it does not confirm it either. On the other hand, the high sensitivity to solar activity, once statistically validated, cannot be contested with the argument that its mechanisms of action are not accurately known. In terms of sensitivity to CO2, the IPCC window is very wide, its extreme values differing by a factor of six. Unfortunately, the nature of the historic and paleoclimatic data is such that identification cannot narrow this window. Instead, it moves the whole range downwards, trimming the highest, and seemingly most exaggerated, values. It cannot even be excluded that human activity has a negative impact on global temperatures.
Chapter 9 compares the results of the IPCC simulations over the historical period mentioned above (1850 to today) with simulations from the identified models. The reproductions of the observed temperature are similar, but the contributions of natural and anthropogenic factors are turned upside-down. It would appear that the conclusion of human influence on global warming is predetermined by the way the input data are generated, as defined by the IPCC.
Chapter 10 offers long-term climate projections. To do so, scenarios created by the IPCC itself are used: the Representative Concentration Pathways, or RCPs, which put forward a series of profiles for future CO2 concentrations. Unsurprisingly, the simulations from the forced identifications are alarming, especially in the worst case scenario, the so-called “business as usual” pathway (RCP 8.5). The projections which result from free identification are much less worrying: only a minority of cases exceed the allegedly critical threshold of a temperature increase of two degrees above pre-industrial levels by 2100.
Chapter 11
