Calibration in Analytical Science
Designed to help analytical chemists save time and money by selecting the best calibration method in a quality control, substance monitoring, or research setting
Univariate analytical calibration is a vital step in every chemical procedure that involves determining the identity or concentration of a particular substance. Depending on the type of instrument and measurement, analytical chemists need to follow different calibration strategies and protocols to ensure their instruments yield accurate readings.
Calibration in Analytical Science systematically classifies and describes a wide range of calibration methods and procedures based on mathematical and empirical models for use in qualitative and quantitative analysis. Focusing on the chemical aspects of analytical calibration, this much-needed reference uses a set of equipment-independent terms and definitions that are easily transferable to the calibration strategies of any analytical process. The theoretical basis for calibration of each analytical mode is described and applied to common analytical tasks of increasing levels of difficulty and complexity. Throughout the book, the author illustrates how to combine different calibration approaches to create new calibration strategies with extended capabilities.
Calibration in Analytical Science: Methods and Procedures is a must-have reference for analytical chemists working in academia and industry, chemists of various specialties involved in chemical analysis, and advanced undergraduate and graduate students taking courses in advanced analytical chemistry.
Page count: 747
Year of publication: 2023
Paweł Kościelniak
Author
Professor Dr. Paweł Kościelniak
Jagiellonian University
Department of Analytical Chemistry
Gronostajowa St. 2
30‐387 Krakow
Poland
Cover Design and Image: Wiley
All books published by WILEY‐VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate.
Library of Congress Card No.: applied for
British Library Cataloguing‐in‐Publication Data
A catalogue record for this book is available from the British Library.
Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at <http://dnb.d-nb.de>.
© 2023 WILEY‐VCH GmbH, Boschstr. 12, 69469 Weinheim, Germany
All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form – by photoprinting, microfilm, or any other means – nor transmitted or translated into a machine language without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.
Print ISBN: 978‐3‐527‐34846‐6
ePDF ISBN: 978‐3‐527‐83110‐4
ePub ISBN: 978‐3‐527‐83112‐8
oBook ISBN: 978‐3‐527‐83111‐1
Analytical chemistry is an exceptionally beautiful scientific area. Behind such a description stands not only the author's undoubtedly subjective view but also completely objective observations. In probably no other chemical discipline is the purpose of theoretical and experimental work so clearly and unambiguously defined as in analytical chemistry. This aim is simply to look deep into the matter and to determine the type or amount of components contained in it. Taking into account that the guiding principle of all scientific research is the pursuit of truth, it can be said that every chemical analysis (and there are many thousands of such analyses carried out every day in the world) is in fact the fulfillment of this principle, and every analytical chemist can have the feeling of getting as close to the “truth” as possible during his work, if only he does it correctly and carefully. This is certainly a motivating and rewarding aspect.
The specificity of analytical chemistry also lies in the fact that the scientific principles, rules, and methods developed over the years can be used in practice extremely directly, rapidly, and usefully, which on the other hand promotes the development of new theoretical and instrumental concepts. This coupling is the reason why, especially in recent decades, analytical chemistry has developed rapidly and become increasingly important in all areas of business and society. Through the application of new analytical methods and techniques, innovative chemical and biochemical materials, as well as specialized, high‐tech apparatus, analysts are able to penetrate deeper and deeper into matter, detecting and determining the components contained in it in ever smaller quantities and in a variety of chemical forms.
In the current of this progress, however, it is easy to succumb to the fascination of its technical and instrumental aspects, gradually forgetting that the mere creation of new analytical methods and inventions – that is, the search for paths to “truth” – is insufficient, even if these paths are the most ingenious and innovative. It is equally important that the analytical results obtained by these routes should, as far as possible, have the characteristics of the “true,” which, in analytical language, means above all their maximally high accuracy and precision.
No one needs to be convinced of the importance of high‐quality chemical analysis. Several years ago, it was calculated that repetitions of analyses performed in industrial laboratories in the USA, made necessary by incorrect analytical results, generate losses of several billion dollars a year. But even more important is the fact that only on the basis of reliable results of various types of analysis – in particular clinical, pharmaceutical, environmental, or forensic analysis – is it possible to make a reliable diagnosis, which consequently determines our health and living conditions today and in the future.
What is the meaning and role of analytical calibration in this context? The answer to this question can be given in one word – enormous. One need only realize that a calibration process must accompany almost every analytical proceeding regardless of whether the analysis is qualitative or quantitative in nature. In other words, without this process, achieving the analytical goal – or, if you prefer, getting closer to the analytical truth – is simply impossible. Moreover, the proper choice of the calibration path and its correct adaptation to the different stages of the analytical procedure can contribute significantly to the maximum approximation of the true result. Against this background, it seems obvious that, among the various analytical issues, the subject of calibration requires special attention and interest.
Unfortunately, reality contradicts this thesis – interest in calibration issues among analysts is relatively low, both scientifically and practically. First of all, there are virtually no books entirely devoted to this topic, except perhaps for multivariate calibration, which, however, is not widely used in analytical laboratories. Analytical chemistry textbooks usually say little about calibration methods, limiting themselves to basic, customary approaches and solutions. On the other hand, over the years many articles have appeared in which new calibration solutions can be found, testifying to progress in this analytical field as well. These reports, however, are usually treated as purely academic and are generally not applied in laboratory practice.
It must also be said that in the field of calibration there is extremely large terminological chaos, concerning not only the nomenclature and classification of calibration methods but also the concept of the analytical calibration process as such. This state of affairs obviously has negative consequences. Above all, it is not conducive to teaching, since it is difficult to convey specific analytical knowledge reliably in a language that is not standardized and generally accepted. The lack of common ground for communication in this area can also become a source of misunderstandings and ambiguities leading to erroneous analytical procedures. And yet no one should be more sensitive than the analyst to “order” and “purity” in his work.
The main purpose of this book is to fill, at least to some extent, these gaps and backlogs. It collects and describes a variety of calibration methods and procedures for determining the nature and quantity of sample components in different ways. These approaches are tailored to the specific chemical and instrumental conditions of the qualitative and quantitative analyses performed, as well as to the specific objectives the analyst wishes to achieve in addition to the overarching goal. Based on the calibration properties of these methods, their nomenclature and classification are proposed. It is also shown how calibration approaches can be combined and integrated mainly to diagnose, evaluate, and eliminate analytical errors and thus achieve results with increased precision and accuracy.
The contents of this book are largely based on the author's many years of experience. This experience has, to some extent, shaped both the layout of the book and the detailed selection of the issues covered, which certainly does not exhaust the entire calibration subject matter. For the same reason, one can find here original, authorial approaches to this subject, which – although previously published in scientific articles and thus verified – may still be debatable. I therefore apologize in advance to those who have slightly different views on the issues raised in the book; I understand this and at the same time invite opponents to such a discussion. I believe, however, that despite all possible reservations and doubts, the book will be a useful source of information on analytical calibration and, for many, a valuable addition to analytical knowledge and a helpful tool in scientific and laboratory work.
As mentioned, the calibration process is inextricably linked to the analytical process. Wandering through the various avenues of performing calibrations is thus also an opportunity to learn or recall various analytical methods and general problems related to analytical chemistry and chemical analysis. With this in mind, the author also sees this book as a supplement to general analytical knowledge delivered in a slightly different way and from a different angle than typical analytical science textbooks.
Finally, I would like to express my warm gratitude to Professor Andrzej Parczewski for “infecting” me many years ago with the subject of calibration. I would also like to thank my colleagues from the Department of Analytical Chemistry of the Jagiellonian University in Krakow for accompanying me on an exciting analytical adventure and for providing me with many of their research results for this book.
But I am most grateful to my beloved Wife – for motivation, words of support, and time which, at the expense of being with her, I could devote to this work. Without you, Ania, this book would not have been written.
Kraków, March 2022
Paweł Kościelniak
The general understanding of the term “calibration” is far from what applies to the concept in an analytical sense. Leaving aside colloquial connotations, such as calibrating a weapon, the term is generally associated with the adjustment of specific parameters of an object to fixed or desired quantities, and in particular with the adjustment of a specific instrument to perform a correct function. It is, therefore, understood more as a process of instrumental standardization or adjustment. This is reinforced by publicly available nomenclatural sources. For example, in the Cambridge Advanced Learner's Dictionary [1] calibration is defined as “ … the process of checking a measuring instrument to see if it is accurate,” and in the Vocabulary.com online dictionary as “the act of checking or adjusting (by comparison with a standard) the accuracy of a measuring instrument” [2]. Even in a modern textbook in the field of instrumental analysis, you can read: “In analytical chemistry, calibration is defined as the process of assessment and refinement of the accuracy and precision of a method, and particularly the associated measuring equipment…” [3].
The ambiguity of the term “calibration” makes it difficult to understand it properly in a purely analytical sense. To understand the term in this way, one must of course take into account the specificity of chemical analysis.
The analyst aims to receive the analytical result, i.e. to identify the type (in qualitative analysis) or to determine the quantity (in quantitative analysis) of a selected component (analyte) in the material (sample) assayed. To achieve this goal, he must undertake a series of operations that make up the analytical procedure, the general scheme of which is shown in Figure 1.1.
When starting an analysis, the sample must first be prepared for measurement in such a way that its physical and chemical properties are most suitable for measuring the type or amount of analyte in question. This step consists of such processes as, e.g. taking the sample from its natural environment and then changing its aggregate state, diluting it, pre‐concentrating it, separating the components, changing the temperature, or causing a chemical reaction.
Figure 1.1 Analytical procedure alone (a) and supplemented by analytical calibration (b).
The measurement is generally performed using an instrument that operates on the principle of a chosen measurement method (e.g. atomic absorption spectrometry, potentiometry, etc.). The instrument should respond to the presence of the analyte studied in the form of measurement signals. From a calibration point of view, the most relevant signal is the so‐called analytical signal, i.e. the signal corresponding to the presence of analyte in the sample.
An analytical procedure carried out in a defined manner by a specific measurement method forms an analytical method.
The basic analytical problem is that the analytical signal is not a direct measure of the type and amount of analyte in the sample, but only information indicating that a certain component in a certain amount is present in the sample. To perform a complete analysis, it is necessary to be able to transform the analytical signal into the analytical result and to carry out this transformation. This is the role of analytical calibration. As seen in Figure 1.1, the analytical calibration process is an integral part of the analytical procedure, and without analytical calibration, qualitative and quantitative analysis cannot be performed. Realizing this allows one to look at the subject of calibration as a fundamental analytical issue.
However, there is still the question of what the process of transforming an analytical signal to an analytical result consists of, i.e. how analytical calibration should be defined. In this regard, there is also no unified approach, so it is best to rely on official recommendations.
The process of analytical calibration is largely concerned with the making of measurements and the interpretation of measurement data and therefore falls within the scope of metrology. In the Joint Committee for Guides in Metrology (JCGM) document on basic and general terms in metrology, calibration is defined as “… operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication” [4]. At the same time, the document makes it clear that “calibration should not be confused with adjustment of a measuring system …”.
The metrological term, although it allows for a deeper understanding of the concept of calibration, is still rather general because it is inherently applicable to different measurement systems and different types of results obtained. The concept of calibration in the analytical sense is more closely approximated by publications issued by the International Union of Pure and Applied Chemistry (IUPAC). In the paper [5], the IUPAC definition is aligned with the JCGM definition in that it defines analytical calibration as “... the set of operations which establish, under specified conditions, the relationship between value indicated by the analytical instrument and the corresponding known values of an analyte,” and in a subsequent IUPAC publication [6] we find an explicit extension of analytical calibration to both quantitative and qualitative analysis: “Calibration in analytical chemistry is the operation that determines the functional relationship between measured values (signal intensities at certain signal positions) and analytical quantities characterizing types of analytes and their amount (content, concentration).”
Such a purely theoretical approach is too general, even abstract, and unrelated to analytical practice. In particular, it does not provide guidance on how the functional relationship (calibration model) should be formulated in different analytical situations and how it relates to the different types of methods used in qualitative and quantitative analysis. Nor does it convey the relative nature of the calibration process that the term “measurement standard” lends to the concept in its metrological formulation.
To extend the definition of analytical calibration, the author proposes to introduce the concept of three functions that relate the signal to the analytical result: the true function, the real function, and the model function [7]. This approach is illustrated in Figure 1.2.
If a sample that an analyst takes for qualitative or quantitative analysis contains a component (analyte) of interest, then before any action is taken with the sample, the type of analyte and its quantity in the sample can be referred to as the true value (type or quantity), xtrue, of the analyte. If it were possible to measure the analytical signal for that analyte at that moment, then the relationship between the resulting signal and its true type or quantity, Ytrue = T(xtrue) could be called the true function.
However, the determination of the true function and the true value of the analyte is not possible in practice because it requires the analyst's intervention in the form of preparing the sample for measurement and performing the measurement. The initiation of even the simplest and shortest analytical steps results in a change of the true analyte concentration in the sample that continues until the analytical signal is measured. Thus, the concepts of true function and true analyte value are essentially unrealistic and impossible to verify experimentally or mathematically.
Figure 1.2 Concept of analytical calibration based on the terms of true, Y = T(x), real, Y = F(x), and model, Y = G(x), functions (virtual analytical steps and terms are denoted by dotted lines; for details see text).
When the sample is prepared for analysis, the type or amount of analyte in the sample to be analyzed takes on a real value, x0. The relationship between the analytical signal and the type or amount of analyte is described at this point by the real function, Y = F(x), which takes the value Y0 for the value x0:

Y0 = F(x0) (1.1)
Although the value of Y0 is measurable, the exact form of the real function is unknown because it depends on a number of effects and processes that led to the current state of this relationship during the preparation of the sample for measurement. Consequently, the determination of the real result x0 by means of the real function is impossible.
This situation forces the formulation of an additional, auxiliary model function, Y = G(x). The role of this function is to replace the real function in the search for the real value, x0. It should therefore meet two basic conditions: it should be known and well‐defined, and it should be the most accurate possible approximation of the real function (G(x) ↔ F(x)). To fulfill these conditions, a calibration standard (one or more) should be used, which should be similar to the sample and properly prepared for measurement.
Assuming that the approximation of the real function by the model function, Y = G(x), is accurate, the inverse form of the model function, x = G−1(Y), can be created, which is called the evaluation function [6]. Theoretically, it allows the value of Y0 to be transformed into the real result, x0:

x0 = G−1(Y0) (1.2)
In practice, the approximation of the real function by the model function is never accurate because the real function is essentially unknown. Therefore, transformation (1.2) leads to a certain value xx:

xx = G−1(Y0) (1.3)
which is an approximate measure of the real result, x0. This result can also be considered as the final analytical result.
The processes of creating a model function and its approximation and transformation are fundamental, integral, and necessary elements of analytical calibration. Thus, it can be said that analytical calibration consists of approximating the real relationship between the signal, Y, and the type, b, or amount, c, of an analyte in a sample by means of a model function, and then applying this function to transform the signal obtained for the analyte in the sample to the analytical result.
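The two steps just described – formulating a model function with the help of standards and then transforming the sample signal through its inverse – can be sketched numerically. The sketch below is illustrative only: the linear form of the model function and all signal values are invented for the example, not taken from the book.

```python
import numpy as np

# Hypothetical standards: known analyte amounts x_std and their measured signals y_std.
x_std = np.array([0.0, 2.0, 4.0, 6.0, 8.0])        # amount of analyte (arbitrary units)
y_std = np.array([0.01, 0.42, 0.79, 1.22, 1.61])   # analytical signals (invented values)

# Step 1: formulate the model function Y = G(x), here a straight line fitted
# by least squares; np.polyfit returns the slope first, then the intercept.
b, a = np.polyfit(x_std, y_std, 1)

# Step 2: apply the evaluation function x = G^-1(Y) to the signal measured
# for the sample, giving the approximate analytical result x_x.
y0 = 0.95                       # signal measured for the sample (invented)
x_result = (y0 - a) / b         # x_x, an approximation of the real value x0
```

The result x_result plays the role of xx in Eq. (1.3): it approximates the real value x0 only as well as the fitted line G(x) approximates the unknown real function F(x).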
Note the natural logic of the above description of analytical calibration. Such quantities as “sample” (considered as a collection of unknown chemical constituents), “real function” and “real type or amount of analyte” have their counterparts in the terms of “standard”, “model function” and “obtained type or amount of analyte”, which are associated with analytical calibration. The former are largely hypothetical, unknown in fact to the analyst, while the latter are known and are approximations of the former. Just as the composition and properties of a sample can never be faithfully reproduced in a standard, the form of the real function cannot be accurately approximated by a model function, and the real type or amount of analyte in the sample at the time the analytical signal is measured can only be approximated by the analytical result obtained.
Depending on the type of univariate model function used, analytical calibration can be broadly divided into empirical calibration and theoretical calibration [7]. In some cases, the calibration is also of a complex nature to varying degrees (empirical–theoretical or theoretical–empirical) when, to better represent the real function, empirical information is supported by theoretical information or vice versa.
An essential part of any calibration process is the use of calibration standards, which can be of different natures: chemical, biological, physical, or mathematical [7]. A common feature of calibration standards is that they directly or indirectly enable the assignment of a measurement signal to a known, well‐defined type or amount of analyte. These standards are therefore used to formulate a model function. According to the principle of analytical calibration, a standard should make it possible to formulate a model function that approximates the real function as closely as possible.
In empirical calibration, the model function is formulated on the basis of a performed experiment, sensory perception, or observation. The sources of information needed to create this type of empirical model function, Y = G(x), are measurements of analytical signals obtained directly or indirectly for chemical, biological, or physical standards. In this case, the analyst does not go into the theoretical aspects of the dependence of the analytical signal on the type or amount of analyte (although in some cases the laws and rules underlying this dependence, e.g. the Nernst or Beer–Lambert law, may be helpful).
A widely recognized and used method of analytical calibration is the empirical calibration performed with a chemical standard. This is a synthetic or (less commonly) natural material, single or multicomponent, containing an analyte of known type or amount. In special cases, a chemical standard contains a known type or amount of a substance that reacts with the analyte or a known type or amount of an isotope of the element being determined. Calibration with chemical standards is a universal procedure in the sense that it does not depend on the chosen measurement method. The model function formulated is mathematically usually simple and its graphical form is called a calibration graph.
In theoretical calibration, the model function is formulated on the basis of a mathematical description of physicochemical phenomena and processes occurring during the analysis using a given analytical and measurement method. Such a description includes phenomenological quantities based on physical or chemical measurements (electrochemical potentials, diffusion coefficients, etc.), universal quantities (molar mass, atomic number, stoichiometric factors), and/or fundamental physical constants (Faraday constant, Avogadro constant, etc.). The individual elements of the mathematical description act as mathematical standards, and the function created with them, Y = G(x), is a theoretical model function.
In analytical chemistry, there are relatively few well‐defined theoretical models of simple mathematical form. However, in the literature one can find many new proposals of such functions formulated for various measurement methods. As a rule, they have a very complex mathematical structure, which results from the desire to approximate the real function as accurately as possible. A strong motivation for these scientific efforts is that a theoretical model allows the calculation of the analytical result without the need to prepare chemical standards and perform measurements for the analyte in these standards.
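As a contrast to the empirical route, a theoretical model function built from fundamental constants can be inverted directly, with no measurements on chemical standards. The sketch below uses the Nernst equation for a potentiometric measurement as one assumed example of such a model; the standard potential E0 and the sample value are hypothetical, chosen only to show the round trip from result to signal and back.

```python
import math

# Fundamental constants acting as "mathematical standards" in the theoretical model.
R = 8.314462618     # gas constant, J mol^-1 K^-1
F = 96485.33212     # Faraday constant, C mol^-1
T = 298.15          # temperature, K (assumed)
n = 1               # charge number of the ion determined (assumed)
E0 = 0.250          # standard electrode potential, V (hypothetical value)

def model(c):
    """Theoretical model function Y = G(x): electrode potential from activity c."""
    return E0 + (R * T) / (n * F) * math.log(c)

def evaluate(E):
    """Evaluation function x = G^-1(Y): activity back-calculated from potential E."""
    return math.exp((E - E0) * n * F / (R * T))

E_sample = model(1e-3)        # pretend this potential was measured for the sample
c_result = evaluate(E_sample) # recovers the assumed activity of 1e-3
```

In real work the measured E_sample would come from the instrument, and the accuracy of c_result would hinge on how well the theoretical model reflects the actual electrode behavior.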
As mentioned, other types of calibration standards can be found in chemical analysis, as well as model functions of a different nature formulated with them, as discussed in Chapter 2 of this book. It can be hypothesized that analytical calibration is inherently connected with the use of standards and the creation of model functions with their help.
The implications of this approach to analytical calibration are interesting. Qualitative or quantitative analysis performed on the basis of a theoretical model function is often referred to in the literature as calibration‐free analysis or absolute analysis. From the point of view of the accepted definition of analytical calibration, this term is misleading, because the formulation of the theoretical model function, like that of the empirical model, is part of the full calibration procedure. Thus, the questions arise: can chemical analysis be performed in practice without analytical calibration, and what conditions must an analytical method meet to be called an “absolute method”? The discussion of this issue will be the subject of Chapter 2 of this book.
The concept of analytical calibration presented above perhaps does not yet give a clear picture of this process. How, then, does the full empirical and theoretical calibration procedure look in general?
As already stated, the calibration process is essential to the performance of chemical analysis – both qualitative and quantitative – and is an integral, inseparable part of any analytical method. What the calibration process contributes to the analytical procedure is the handling of the calibration standard necessary to formulate the model function and use it to transform the measurement signal to the analytical result. Thus, the calibration procedure consists of three steps: preparative, measurement, and transformation.
The preparative step consists in preparing the sample and the standard in such a way that the real function, Y = F(x), and the model function, Y = G(x), are as similar to each other as possible. In the case of empirical calibration, there are two main routes to this goal:

- the sample and standard are prepared separately, taking care that the chemical composition of the standard is similar to that of the sample and that the preparation of the sample and standard for measurement is similar,
- the standard is added to the sample prior to measurement (less frequently prior to sample processing).
In the case of theoretical calibration, separate treatment of the sample and the standard is obvious and natural. Appropriate preparation of the standard in relation to the sample consists in introducing such mathematical standards to the theoretical model that most adequately describe the state of the sample and the phenomena and processes that the sample undergoes under the conditions of the specific measurement method.
In the measurement stage, signal measurements are made using the selected measurement method. If the calibration is empirical, measurements are made on the sample and the standard, or on the sample and the sample with the addition of the standard (depending on their preparation at the preparative stage). In either case, the measurements involving the standard are used to formulate an empirical model function. In the case of theoretical calibration, measurements are made only for the sample, and the formulated theoretical model is taken as the model function.
In the transformation step, the value of the signal obtained for the sample is entered into an empirical or theoretical model function and thus the final analytical result (type or amount of analyte in the sample) is determined.
Referring to the formulated extended definition of analytical calibration, it can be noticed that the preparative and measurement stages are used to approximate the model function to the real function, and the key, transformational calibration process takes place at the last stage. A schematic diagram of the procedures of empirical and theoretical calibration is shown in Figure 1.3.
Figure 1.3 General scheme of empirical and theoretical calibration.
Calibration procedures with specific preparation of sample and standard for measurement form calibration methods. In general, therefore, two groups of methods can be distinguished in analytical calibration, which can be called comparative methods (when the sample and standard are treated separately) and additive methods (when the standard is added to the sample). Within each of these two groups, it is possible to distinguish methods that differ more specifically on the preparative side (e.g. external standard method, internal standard method, standard addition method, etc.). These names are mostly customary and do not always correspond to the specifics of the individual methods. Therefore, another, more essential criterion for the division of the calibration methods in terms of the mathematical way of transforming the measurement signal into the analytical result will also be proposed.
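The difference between the comparative and additive groups shows up in the transformation step. As one concrete case from the additive group, the classical standard addition method fits a line to signals measured after adding known amounts of analyte to the sample, and reads the analyte amount from the extrapolation to zero signal. The sketch below assumes a linear response and invented numbers.

```python
import numpy as np

# Hypothetical standard addition experiment: known amounts of analyte are added
# to equal portions of the sample, and the signal is measured for each portion.
added = np.array([0.0, 1.0, 2.0, 3.0])        # amount of analyte added (arbitrary units)
signal = np.array([0.50, 0.75, 1.00, 1.25])   # measured signals (invented, linear)

# Fit Y = a + b * x_added; the analyte originally present contributes the intercept a.
b, a = np.polyfit(added, signal, 1)

# Extrapolating the line to Y = 0 gives x_added = -x0, so the amount of analyte
# in the sample portion is the (positive) ratio of intercept to slope.
x0 = a / b
```

With the invented data above the fitted line is Y = 0.50 + 0.25·x, so the extrapolation yields an analyte amount of 2.0 in the sample portion.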
The role of analytical calibration is not only to make it possible to identify or determine an analyte in a sample, but also to do so with as much accuracy and precision as possible. The measure of accuracy is the statistically significant difference between the analytical result obtained, xx, and the true type or amount of analyte, xtrue, in the sample before it was subjected to any analytical process. The measure of precision is the random scatter of analytical results obtained in so‐called parallel analyses, that is, analyses performed in the same way and under the same experimental conditions. The accuracy and precision of an analytical result are thus determined by any systematic and random changes in the true function before it becomes, at the time of measurement, the real function, and then by the systematic and random difference between the real function and its representation, the model function.
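These two measures can be made concrete with repeated parallel analyses: the systematic deviation of their mean from the true value estimates the (in)accuracy, while the scatter of the repeats estimates the precision. The replicate values below are invented for illustration; only the arithmetic is meant to be instructive.

```python
import statistics

x_true = 5.00                                  # assumed true amount of analyte
replicates = [5.12, 5.08, 5.15, 5.10, 5.05]    # results of parallel analyses (invented)

mean_result = statistics.mean(replicates)
bias = mean_result - x_true                    # systematic difference -> accuracy
spread = statistics.stdev(replicates)          # random scatter -> precision
```

Here the mean result of 5.10 reveals a systematic error (bias) of +0.10, while the sample standard deviation of about 0.04 characterizes the precision of the parallel analyses.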
Changes in the analytical signal that occur both during sample preparation for measurement and during the measurement itself, resulting in the transformation of the true function into the real function, can be called analytical effects [7]. They can be controllable or uncontrollable. Controlled analytical effects include, for example, changes caused by a targeted action of the analyst to decrease or increase the concentration of an analyte in a sample by dilution or preconcentration, respectively. Effects of this type can usually be calculated and corrected at the stage of calculating the analytical result.
During qualitative and quantitative analysis, however, there are also such changes in the analytical signal that are partially or completely out of the analyst's control. These uncontrolled analytical effects can be both random and systematic. Although the analyst is usually aware of the risk of their occurrence and usually tries to prevent them accordingly, he or she may overlook or even neglect them while performing the analysis. As a result, control over the entire analytical process is lost in a sense. Uncontrolled effects manifest themselves by changing the position and intensity of the analytical signal, i.e. they are important in both qualitative and quantitative analysis.
Uncontrolled effects can be caused by many factors manifesting themselves at different stages of the analytical process. The classification of these effects covering all possible factors is, of course, a matter of convention. The division presented below is the author's proposal [7].
