Description

This book provides foundational and expert knowledge by building on the sequence of operations in analytical sciences, starting from quantification, the definition of the analyte, and its link to the calibration function. It empowers the reader to apply the Method Accuracy Profile (MAP) efficiently as a statistical tool for estimating measurement uncertainty. In this respect, the book is unique in proposing a comprehensive approach to measurement uncertainty (MU) estimation. Several examples and template worksheets explain the theoretical aspects of the procedure, and practical insights help the reader improve decision-making by accurately evaluating and comparing different analytical methods.




Quantification, Validation and Uncertainty in Analytical Sciences

An Analyst’s Companion

 

Max Feinberg

Serge Rudaz

 

 

 

 

Authors

Dr. Max Feinberg, Paris, France

Prof. Serge Rudaz, Section des sciences pharmaceutiques, Université de Genève, Rue Michel Servet 1, 1211 Genève, Switzerland

Cover Image: © Max Feinberg

All books published by WILEY‐VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate.

Library of Congress Card No.: applied for

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at <http://dnb.d-nb.de>.

© 2024 WILEY‐VCH GmbH, Boschstraße 12, 69469 Weinheim, Germany

All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form – by photoprinting, microfilm, or any other means – nor transmitted or translated into a machine language without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.

Print ISBN: 978-3-527-35332-3
ePDF ISBN: 978-3-527-84525-5
ePub ISBN: 978-3-527-84526-2
oBook ISBN: 978-3-527-84527-9

List of Figures

Figure 1

How to read this book.

Figure 1.1

Schematic representation of the quantification principle.

Figure 1.2

Schematic representation of absolute, semi, and relative quantification modes.

Figure 1.3

Contribution to the reproducibility of two quantification methods in liquid chromatography of saccharides.

Figure 1.4

Two‐run standard addition method.

Figure 1.5

Calibration modes in analytical sciences.

Figure 1.6

LC‐MS on endogenous metabolites: proposed workflow for selecting a calibration operating procedure.

Figure 2.1

THEOPHYLLINE – illustration of the calibration data of series 1.

Figure 2.2

Direct calibration and inverse calibration.

Figure 2.3

Principles of ordinary least‐squares (OLS) method.

Figure 2.4

ELISA – determination of interleukin 6.

Figure 2.5

SAM – multiple point standard addition method.

Figure 3.1

Graphical representation of diverse total variance decomposition, affording diverse sources of variation.

Figure 3.2

LEAD – illustration of interlaboratory study.

Figure 3.3

Geometric interpretation of the general ANOVA.

Figure 3.4

(a) ANOVA – observed model. (b) ANOVA – theoretical model.

Figure 3.5

Graphical representation of the experimental design. (a) Balanced and (b) unbalanced.

Figure 3.6

Diverse types of outliers in an interlaboratory study.

Figure 3.7

LEAD – interlaboratory study after outlier deletion.

Figure 4.1

Example of systematic error generated by the integration mode of poorly resolved chromatographic peaks.

Figure 4.2

Geometric interpretation of additive and multiplicative bias.

Figure 4.3

ALFALFA

Figure 4.4

NITROGEN – examples of anomalies in a wheat flour control chart.

Figure 4.5

Possible locations of different quality control charts in a routine laboratory.

Figure 5.1

Frequency of terms used in validation guides.

Figure 5.2

Example of a multicriteria validation procedure.

Figure 5.3

Example of single‐criterion validation procedure.

Figure 5.4

Number of publications with “accuracy profile” in the title ratioed to all “validation” published papers.

Figure 5.5

THEOPHYLLINE – MAP with six validation materials. Inverse‐predicted concentrations are obtained with WLS quadratic model.

Figure 5.6

THEOPHYLLINE – validation and validated ranges (β% = 80%).

Figure 5.7

THEOPHYLLINE – lower part of the MAP expressed as absolute inverse‐predicted concentration (β% = 80%).

Figure 5.8

Schematic representation for LOQ calculation.

Figure 5.9

Comparison tolerance and confidence intervals calculated for 18 replicates assumed to be normally distributed.

Figure 5.10

THEOPHYLLINE – accuracy profile with the two tolerance intervals (β% = 80%, γ% = 95%). Inverse‐predicted concentrations are obtained with WLS quadratic models.

Figure 5.11

THEOPHYLLINE – accuracy profile (MAP) with the two tolerance intervals obtained for OLS quadratic models (β% = 80%, γ% = 95%).

Figure 5.12

Schematic representation of the experimental design to be used to build a relevant accuracy profile.

Figure 5.13

Influence of the experimental design parameters on the number of effective measurements.

Figure 5.14

Influence of the experimental design on the coverage factor with β% = 80%.

Figure 5.15

Coverage factor as a function of the number of efficient measurements.

Figure 5.16

THEOPHYLLINE – half tolerance intervals for different probability values.

Figure 6.1

The 4‐step GUM general procedure for measurement uncertainty (MU) estimation.

Figure 6.2

LEAD – cause to effect diagram of the sources of uncertainty when determining lead by ICP‐ID‐MS.

Figure 6.3

Triangular distribution law applied to a digitized reading rounded to 90.

Figure 6.4

LEAD – uncertainty budget.

Figure 6.5

Schematic representation of the main concepts used for method validation.

Figure 6.6

Comparison between the total analytical error (TAE) model and the measurement uncertainty (MU) model.

Figure 6.7

Modeling: moving from the real world to idealized world.

Figure 6.8

CORTISOL – accuracy profiles from four validation studies.

Figure 6.9

CORTISOL – uncertainty functions of four accuracy profiles.

Figure 7.1

Different MU estimation procedures proposed for the analytical sciences.

Figure 7.2

Generic cause‐to‐effect diagram with eight main classic sources of uncertainty.

Figure 7.3

THEOPHYLLINE – 95% coverage intervals.

Figure 7.4

ALBUMIN – control chart and QC.

Figure 7.5

(a) LEAD – laboratory coverage intervals including outliers, (b) LEAD – individual coverage intervals after removing outliers.

Figure 7.6

Horwitz (solid line) and Thompson (dashed line) models.

Figure 7.7

Precision and trueness acceptance criteria proposed in various official guidelines.

Figure 7.8

(a) THEOPHYLLINE – standard uncertainty function, (b) THEOPHYLLINE – relative uncertainty function.

Figure 7.9

Different power functions when the power coefficient b varies and a = 0.2 remains constant.

Figure 7.10

Coverage interval and measurement uncertainty.

Figure 8.1

Total circulating testosterone reference values for normal and pathological states with associated MU.

Figure 8.2

Three ways to assess sample conformity to a unilateral or bilateral specification interval.

Figure 8.3

The guard band concept introduced by the JCGM.

Figure 8.4

Influence of MU on acceptability or rejection intervals.

Figure 8.5

Basic sampling vocabulary.

Figure 8.6

Main types of spatial distribution of an analyte in a batch or population.

Figure 8.7

Cause to effect diagram of the sampling operation.

Figure 8.8

COPPER – distribution of measurements and control number.

Figure 8.9

PARACETAMOL – method accuracy profiles using two calibration models on the same data.

Figure 8.10

PARACETAMOL – uncertainty functions for the two calibration models.

Figure 8.11

THEOPHYLLINE – comparison of uncertainty functions obtained with two calibration models.

Figure 8.12

NICOTINIC – accuracy profiles before and after correction.

Figure 8.13

(a) NICOTINIC – relationship between the average correction factor and the concentration. (b) NICOTINIC – relative uncertainty function before and after correction.

Figure 8.14

Four possible replicate definitions according to the sample preparation step at which replication starts.

Figure 9.1

Plausible classic and alternative approaches to estimate LOD and LOQ.

Figure 9.2

Definitions of decision limit and detection capability according to EU regulation [7] for a substance with a maximum residue limit (MRL) of 100 μg/kg.

Figure 9.3

Detection capacity for the calibration curve method according to ISO 11843‐2.

Figure 10.1

Example of assay obtained by SAM on a pharmaceutical product containing TDF.

Figure 10.2

Determination of FTC, by standard additions to a pharmaceutical product announced at 200 mg/tablet.

Figure 10.3

Average levels found (mg/tablet) with the coverage intervals for six medicines noted from A to F.

Figure 10.4

MU estimates of six drug lots for two nominal strengths of 200 and 245 mg/tablet.

Figure 10.5

Accuracy profile of Dumas’s method applied to dairy products using Kjeldahl method as reference.

Figure 10.6

Comparison of the uncertainty functions of both methods used to determine total nitrogen in foods.

Figure 10.7

Accuracy profile of Dumas’s method applied to dairy products compared to improved Kjeldahl method.

Figure 11.1

Relationship between risk consequences and measurement uncertainty.

List of Resources

Resource A

Linear and quadratic calibration (Excel).

Resource B

Calibration using OLS and WLS (Python).

Resource C

Nonlinear calibration (Python).

Resource D

Standard addition method (Excel).

Resource E

Precision parameters for a balanced design (Excel).

Resource F

Precision parameters for an unbalanced design (Excel).

Resource G

Algorithm A (Python).

Resource H

β‐Expectation tolerance interval (Excel).

Resource I

β‐γ content tolerance interval (Excel).

Resource J

Probability of non‐acceptable measurements (Excel).

Resource K

Iterative algorithm applied to LEAD (Python).

Resource L

Rounding a result (Excel).

Resource M

Calculation of the coefficients of a power function (Excel).

Resource N

Coverage interval for a given concentration (Excel).

Resource O

Coverage interval for given relative uncertainty (Excel).

Resource P

Decision limit – calibration curve procedure of ISO 11843‐2.

Resource Q

Calculation of the SAM extrapolated concentration (Excel).

Preface

Why an Analyst’s Companion? Millions of analyses are carried out every day in laboratories for all sectors of industry and science. Many people are willing to pay for these analyses because they are considered effective for making scientifically sound decisions. Though few publications address the economics of analytical sciences, a report by the European Commission concluded in 2002 that “for every euro devoted to measurement activity, nearly three euros are generated” [1]. But is it easy and simple to use an analytical result, and does it always allow you to make the right decision? Some questions illustrate the risks involved in relying on a result:

– How do you know that the laboratory used the method that gave the exact result?

– Like any measurement, an analysis is subject to errors. How can you estimate them?

– How can a spurious measurement be used effectively?

This is the right time to explain why and how the concept of measurement uncertainty (MU) can be used to better manage these risks. It also means that a new challenge for analysts is to develop an appropriate method for estimating MU that is explicitly applicable to analytical sciences. In this perspective, a tool based on statistical dispersion intervals, called the method accuracy profile (MAP), is proposed as the backbone of the book. The theoretical aspects of the MAP procedure and of MU estimation are illustrated with several examples and template worksheets to help analysts quickly grasp this tool.

At the turn of the 1970s, three analytical chemists, Bruce Kowalski, Luc Massart, and Svante Wold, conceptualized a discipline they called chemometrics [2]. Unfortunately, all three have since passed away, but their work remains very much alive. Many chemometrics books have been published, proving the added value of statistics to analytical sciences. Some address chemometrics globally [3–5], others focus more on statistics [6, 7], and others on method validation [8, 9].

This book contributes to the application of chemometrics, but the aim is obviously not to repeat what is available in many valuable publications. Only a few books specifically address measurement uncertainty in analytical sciences [10–12]; they cover limited facets and do not propose a comprehensive approach. The aim of this book is to describe a global procedure for MU estimation that is easily applicable in analytical laboratories. In a recent publication, we presented in condensed form our view of the link between validation and measurement uncertainty [13]. This book develops our viewpoint more extensively and practically.

However, it is not satisfactory to simply propose a modus operandi (even if it is claimed to be universal) for estimating MU when this parameter is still new in analytical sciences and not always well identified by end‐users. Therefore, several chapters are dedicated to its practical use in decision‐making, demonstrating its advantages. These remarks indicate that this book is primarily intended for professional analysts, although researchers and students may find it of interest.

In order to reach this goal, the book is organized around practical responses covering three major questions daily put to analysts when they develop a new method or routinely apply it to unknown samples:

– How to quantify the analyte?

– How to validate the method?

– How to estimate the measurement uncertainty?

How does this book answer these questions? We use as a roadmap a tool based on the application of statistical dispersion intervals, called the MAP. The latter was initially conceived for method validation, but it can easily be used for MU estimation. While method validation is often reduced to computing a set of disconnected parameters, the MAP approach is more global: it consists in defining the interval in which the method is able to produce a given proportion of acceptable results. This perspective is in harmony with the uncertainty approach proposed by metrologists some decades ago, which consists in computing the so‐called coverage interval of the result.

The chapters of the book can be read independently, which may explain some redundancies in the quoted publications. They are nevertheless structured according to a reading thread illustrated in Figure 1. The thick grey arrow is the backbone. Six main chapters appear as rounded boxes. Three of them are devoted to measurement uncertainty, as it is a key issue of the book.

Figure 1 How to read this book.

Additional chapters appear as ellipses. They bring two kinds of information. On the one hand, theoretical background, such as the estimation of precision and trueness parameters and how to compute them, which may be useful to better understand the statistical developments involved in the method accuracy profile. On the other hand, specific examples of MU applications: one is devoted to the limits of quantification and the challenging question of controlling samples with low analyte concentrations, another to method comparison.

Several data sets provide the link between the different chapters. They are used throughout for practical data handling and real software application. The aim of this data‐oriented presentation is to help the analyst apply the proposed techniques in the laboratory, in keeping with the title “Companion.” This practicality also means that numerical applications for all topics covered are presented and illustrated alongside the theoretical considerations. They are based on detailed Microsoft Excel® worksheets, or a free equivalent such as OpenOffice® Calc, included with the book. This software is user‐friendly and does not require much explanation; probably everyone in the laboratory knows how to use it. Although criticized by professional statisticians (for good reasons), it is extremely helpful for quick and simple statistical computation in a laboratory, and several pitfalls can easily be avoided:

– Worksheet cell content is easily modified without any warning. Thus, once a worksheet is created and validated, the best practice is to protect it or the whole workbook.

– The formula inside a cell is not visible unless the option to show formulas is on. To help the understanding of the template worksheets developed for this book, all formulas are made visible in the cell next to their result, using the built‐in function FORMULATEXT. It is only available in the most recent Excel releases.

– Confusion may exist between a worksheet and a text editor. Fancy presentation must be avoided, and it is better to embed a worksheet within a text editor than to try to do everything with a single piece of software.

The basic use of worksheet software does not allow complex statistical calculations, although it contains many built‐in functions, which are used in the following examples. It is possible to use the Visual Basic for Applications development environment that comes with Excel to build more complex programs, but this requires some practice. For the most sophisticated applications, we preferred to provide Python program examples. This language is increasingly popular, and the accuracy of its statistical functions is widely recognized. For instance, complex techniques, such as nonlinear or weighted regression, are easily implemented. Python is simpler than professional statistical software, it is developed under a free license, and there is an exceptionally large community of users who can help. The drawback is that it is a patchwork, and many additional modules must be imported to apply some methods. The simplest way to install Python is to download a free package called Anaconda [14] and select the Spyder development environment. The examples presented were programmed in this environment.
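As a flavour of what such a Python program looks like (an illustrative sketch with invented calibration data, not the book's Resource B itself), the following lines fit a straight-line calibration by ordinary and by weighted least squares with numpy and inverse-predict the concentration of an unknown sample:

```python
import numpy as np

# Hypothetical calibration data: calibrant concentrations Xc and instrument responses Y
xc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
y = np.array([0.052, 0.101, 0.198, 0.513, 1.040, 2.065])

# Ordinary least squares (OLS): all calibrators weighted equally
a1_ols, a0_ols = np.polyfit(xc, y, deg=1)

# Weighted least squares (WLS) with 1/Xc^2 weights (favours the low end of the range);
# np.polyfit expects the square root of the statistical weight, i.e. 1/Xc here
a1_wls, a0_wls = np.polyfit(xc, y, deg=1, w=1.0 / xc)

# Inverse prediction: back-calculate the concentration Z of an unknown sample
y_unknown = 0.750
z_ols = (y_unknown - a0_ols) / a1_ols
z_wls = (y_unknown - a0_wls) / a1_wls
print(f"Z (OLS) = {z_ols:.3f}, Z (WLS) = {z_wls:.3f}")
```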

References

1 Williams, G. (2002). The assessment of the economic role of measurements and testing in modern society. European Measurement Project, Pembroke College, University of Oxford.
2 Wold, S. and Sjöström, M. (1998). Chemometrics, present and future success. Chemometrics and Intelligent Laboratory Systems 44: 3–14.
3 Kowalski, B.R. (1984). Chemometrics: Mathematics and Statistics in Chemistry. Dordrecht: Springer.
4 Massart, D.L. (1997). Handbook of Chemometrics and Qualimetrics Part A. Amsterdam: Elsevier.
5 Vandeginste, B.G.M., Massart, D.L., Buydens, L.M.C. et al. (1998). Handbook of Chemometrics and Qualimetrics Part B. Amsterdam: Elsevier.
6 Ellison, S.L.R., Barwick, V.J., and Farrant, T.J.D. (2009). Practical Statistics for the Analytical Scientist: A Bench Guide, 2e. Middlesex: LGC.
7 Miller, J.N., Miller, J.C., and Miller, R.D. (2018). Statistics and Chemometrics for Analytical Chemistry, 6e. England: Pearson Education Limited.
8 Ermer, J. and Miller, J.H.M.B. (2006). Method Validation in Pharmaceutical Analysis. Weinheim: Wiley-VCH Verlag GmbH.
9 Swartz, M.E. and Krull, I.S. (2012). Handbook of Analytical Validation. Boca Raton, FL: CRC Press.
10 De Bièvre, P. and Günzler, H. (2013). Measurement Uncertainty in Chemical Analysis. Berlin, Heidelberg: Springer.
11 Bulska, E. (2018). Metrology in Chemistry, Lecture Notes in Chemistry Series, vol. 101. Springer.
12 Hrastel, N. and da Silva, R.B. (2019). Traceability, Validation and Measurement Uncertainty in Chemistry: Vol. 3: Practical Examples. Springer International Publishing.
13 Rudaz, S. and Feinberg, M. (2018). From method validation to result assessment: established facts and pending questions. Trends in Analytical Chemistry 105: 68–74.
14 Anon. (2020). Anaconda Software Distribution. Anaconda Inc. https://docs.anaconda.com/ (accessed 30 July 2023).

Glossary of Symbols

β – Coverage probability of the tolerance interval
u(Z) – Standard uncertainty of Z
r – Coefficient of correlation
Z – Inverse‐predicted concentration in the working sample
Z* – Extrapolated sample concentration (standard addition method)
Y – Measured instrumental response
Predicted instrumental response
X – Concentration of the (authentic) analyte in the working sample
Average
Grand average
Xc – Concentration of the (surrogate or not) analyte in the calibrant
UR% – Relative expanded standard uncertainty
U(Z) – Expanded uncertainty of Z
AIC – Akaike Information Criterion
A – Variance ratio
δ – Bias
E – Random error variable
f – Any calibration or uncertainty function
f⁻¹ – Inverse of any function
βγ‐CTI – βγ‐content tolerance interval
β‐ETI – β‐expectation tolerance interval
CF – Correction factor
p% – Proportional correction factor
AA – Authentic analyte (used as subscript)
IS – Internal standard (used as subscript)
1 − α – Level of confidence (also noted γ)
[A−, A+] – Acceptance interval
uc(Z) – Combined standard uncertainty of Z
u²(Z) – Standard variance of Z
RF – Response factor
SP – Sum of crossed products of deviations to the mean
SS – Sum of squared deviations to the mean
Repeatability variance
Within‐series variance
Reproducibility variance
Between‐laboratories variance
Intermediate precision variance
Between‐series variance
r² – Coefficient of determination
kTI – Tolerance factor of a tolerance interval
kGUM – Coverage factor
kGUM – Standardized coverage factor (GUM)
a0, a1, a2, … – Coefficients of the calibration model
[Z ± U(Z)] – Coverage interval
Gn – Input quantity of the measurement model
[XL, XU] – Measuring interval or working interval

Acknowledgments

The authors wish to thank Professor Douglas Rutledge from the University Paris‐Saclay for his careful and helpful revision of this book.

1Quantification

1.1 Define the Measurand (Analyte)

The initial question for the analyst is to define what is expected to be measured. According to the International Vocabulary of Metrology [1], the “quantity intended to be measured”1 is called the measurand, or more specifically, the analyte, when considering measurement methods applied to chemical and biochemical substances. But this simple definition may be misleading, because an analyte may take different forms during the analytical process, and it is not always certain that the substance finally measured is the one initially intended to be measured. For example, during sample preparation, the initial organic form of the analyte may change to an inorganic one, so that what was intended to be measured is finally modified. For instance, in living organisms, heavy metals are present bound to proteins, such as mercury to metallothionein; yet, when analyzed after mineralization, they may be transformed into sulfate, perchlorate, or nitrate.

A well‐known catastrophic example is the Minamata disease: when looking for mercury in food samples, the oldest methods were based on complete sample mineralization to obtain mercury nitrate. It was soon realized that the toxic forms of mercury were organic derivatives; hence, so‐called total mercury had little toxicological interest compared to the different organic forms. Speciation techniques in mineral analysis or chiral chromatographic methods are good examples of innovative approaches devoted to better preserving the analyte in its expected form. Therefore, quantification in analytical sciences is often less straightforward than claimed. From the metrological point of view, the difficult traceability of chemical substances to international standards is one of these obstacles.

This is detailed in Section 6.3 as an introduction to the estimation of measurement uncertainty (MU), among many other sources of uncertainty. The encapsulated design of modern, highly computerized instruments may also prevent the analyst from assessing what is actually measured: the digits displayed on the instrument screen come to represent what is “intended to be measured.” The paradoxical consequence is that discussing the true nature of the analyte is often avoided, while more attention should be paid to this question. The goal of this chapter is to propose some points to consider on this topic. Many examples are based on mass spectrometry (MS) hyphenated methods, because several of them are now considered highly compliant from a metrological point of view.

1.1.1 Quantification and Calibration

The metrology motto could be: measuring is comparing. Therefore, when quantifying an analyte, the comparison principle must first be defined. This preliminary step is usually called calibration. In modern analytical sciences, most methods use measuring instruments ranging from simple, specific electrodes to sophisticated devices; therefore, calibration procedures may vary enormously according to the nature of the instrumentation. This chapter attempts to classify the different quantification/calibration strategies applied in analytical laboratories. Because this subject is not harmonized, the vocabulary employed may vary from one domain of analysis to another and be confusing. For each term, we have tried to give a definition, but it may be incomplete given the considerable number of analytical techniques. Many suggested definitions are listed in the glossary at the end of the book.

Whatever the measuring domain, classic differences are made between direct and indirect measurement techniques. Direct method can usually refer to a measurement standard, for instance, when measuring the weight of an object on a two‐pan balance with standard weights. Indirect measurements are performed using a transducer, a “device, used in measurement, which provides an output quantity with a specified relation to the input quantity.”

Conversely, with a one‐pan balance, measurements are indirect: the result is obtained by means of a mathematical model linking the calibrated piezoelectric effect on the beam to the weight. In analytical sciences, methods are usually indirect. Some exceptions are set apart, classified as direct primary operating procedures by the BIPM (Section 4.2.1). For most chemical or biological analytical techniques, the measuring instrument must be calibrated with known reference items before use. Finally, quantification involves three elements, as outlined in Figure 1.1:

Figure 1.1 Schematic representation of the quantification principle.

– The analyte is in the working sample. Its concentration is denoted X. The searched compound (chemical or biological) is embedded within the sample matrix. It is only before any treatment that the analyte is present in the intended form. The role of sample preparation is to eliminate a large part of the matrix and concentrate the analyte, but it may change the analyte's chemical form; for instance, with the speciation of organic forms of heavy metals, sample preparation is quite different from classic mineralization.

– The calibration items are also called calibration standards or calibrators. They are prepared by the analyst to contain a known amount of a calibrant as similar as possible to the analyte. To underline this difference, its concentration is denoted Xc. The selection of an adequate calibrant is a key issue of quantification, extensively addressed in the rest of this chapter.

– The calibration function that links the instrumental response Y to the known quantity Xc, denoted Y = f(Xc).

Figure 1.1 is an attempt to recapitulate a generic quantification procedure. Most of the time, calibrators are artificially prepared and used to build the calibration function f, which is generally inverted when analyzing an unknown sample. The three elements may be subject to variations. The mathematical notation underlines the dissimilar roles they play in the statistical modeling of calibration and the possible relationships that link the instrumental signal to the calibrant concentration. Denoting Z the predicted concentration of a sample emphasizes the role of inverting the calibration function, as discussed in Section 2.1. Finally, for a given calibration dataset, distinct functions f can be fitted. A principal issue will be to select the best one, because this choice deeply affects the global method performance. The goal of the present chapter is to describe some classical and new quantification procedures.
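To make the roles of Xc, Y, f, and Z concrete, here is a minimal sketch (with invented figures, not data from the book) that fits a quadratic calibration function Y = f(Xc) with numpy and inverse-predicts Z = f⁻¹(Y) for an unknown sample by solving the fitted quadratic:

```python
import numpy as np

# Hypothetical calibrators: known concentrations Xc and measured responses Y
xc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
y = np.array([0.110, 0.215, 0.540, 1.020, 1.950, 3.600])

# Calibration function Y = f(Xc) = a2*Xc**2 + a1*Xc + a0
a2, a1, a0 = np.polyfit(xc, y, deg=2)

def inverse_predict(y_obs):
    """Return Z = f^-1(y_obs): the root of f(Z) - y_obs = 0 lying in the calibrated range."""
    roots = np.roots([a2, a1, a0 - y_obs])
    real = roots[np.isreal(roots)].real
    in_range = real[(real >= xc.min()) & (real <= xc.max())]
    return in_range[0] if in_range.size else None

print(inverse_predict(1.500))  # inverse-predicted concentration Z of an unknown sample
```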

1.1.2 Authentic versus Surrogate

To be explicit, it is convenient to define some terms. The chemical substance sought in the sample is called authentic; obviously, for many methods it is possible to prepare the calibrators with the authentic analyte. But other quantification methods exist that are based on a different calibration compound, which will be called a surrogate standard or calibrant. It would be paradoxical to call it a surrogate analyte, since the analyte can only be authentic. Therefore, when the analyte and the calibrant are different, the analyst must cautiously verify whether they have equivalent analytical behavior and, if needed, define an adjustment method, such as a correction factor.

The measuring instrument is a transducer that converts the amount or the concentration of a chemical substance into a signal – usually electrical – according to a physical or chemical principle. How quantitative analyses are achieved varies from simple color tests for detecting anions and cations to complex and expensive instrumentation for the determination of trace amounts of a compound or substance in a complex matrix. Increasingly, such instrumentation is a hybrid of separation and detection techniques that requires extensive data processing.

The subject of analytical sciences has become so wide that complete coverage, providing clear information to an interested scientist, can only be achieved in a multi‐volume encyclopedia. For instance, Elsevier published in 2022 the volume n°98 of the Comprehensive Analytical Chemistry handbook started in the 1980s.

The major obstacle in analytical sciences is the structural or chemical difference that may exist between the analyte present in the working sample and the substance used as a calibrant. The instrument signal may depend on the authentic or surrogate structure of the analyzed substance; this dependence is marked with modern instrumentation such as mass spectrometers. On the other hand, the analyte present in a working sample is embedded with other chemicals, customarily called the matrix by analysts. It is not always possible or easy to use the sample matrix when preparing the calibrators. These remarks lead to the definitions of four different quantification elements that can be combined to prepare or select calibrators and, consequently, obtain the calibration curve:

Authentic analyte

The same molecule or substance as the one present in the working sample may be available, with a high degree of purity, for calibrator preparation.

Surrogate standard or calibrant

This is a reference substance that is assessed and used as a reasonable substitute for the authentic analyte. For instance, in bioanalysis, it is common to have to quantify metabolites or derivatives of the analyte for which no reference molecule is available. Labeled molecules used in many methods involving isotopic dilution have recently been considered appropriate calibrants.

Authentic matrix

The simplest situation for using an authentic matrix is to prepare calibrants by spiking test portions of the working sample. For some applications, such as drug control, it is also possible to prepare synthetic calibrants with the same ingredients as the products to be controlled.

Surrogate matrix

This medium is considered and used as a substitute for the sample matrix; for instance, bovine serum is used in place of human serum. It is then assumed that its behavior is similar to that of the authentic matrix throughout the analytical process, including sample preparation and instrumental response.

When the surrogate matrix does not behave like the authentic one, or when calibration is achieved without the sample matrix, matrix effects may produce a trueness bias, as explained in Section 4.1.3. More precisely, calibration standards can be prepared with several classes of matrices. Matrix classification is widely based on analyst expertise and, depending on the application domain, matrix grouping is extremely variable. For instance, broad definitions applicable to biological analysis can be as follows:

Authentic matrix (or real)

For biological analysts, serum, urine, saliva, or stool are different classes of matrices. In food chemistry, when determining total protein, fatty and starchy foods are classified as different matrices, just as drinking water and surface water are different matrices for water-control laboratories.

Surrogate matrix

Matrix used as a substitute for authentic matrix.

Neat solution

Water, reagents used for extraction or elution, etc.

Artificial matrix

Pooled and homogenized samples, materials prepared by weighing when the composition of the authentic matrix is fully known, etc.

Stripped matrix

Specially prepared materials free of impurities or endogenous chemicals. They are mainly used in biomedical analysis.

It can be assumed that the combined use of a surrogate standard and/or a surrogate matrix may induce bias. It is necessary to cautiously verify whether their analytical behavior is comparable to that of the authentic ones. At least four combinations of the above‐defined quantification elements are possible, each having pros and cons, as explained later. It is possible to categorize different quantification modes depending on the selected combination:

Quantitative

Calibrators are prepared with the authentic analyte and an authentic matrix. The amount or concentration of the analyte may be determined and expressed as a numerical value in appropriate units. The final expression of the result can be absolute, as a single concentration value, or non‐absolute, as a range or as above/below a threshold.

Semi‐quantitative

Surrogate standards and matrices are used. Some authors consider semi‐quantitative analyses to be those performed when reference standards or a blank matrix are not readily available.

Relative

The sample is analyzed before and after an alteration, or compared to a control situation. The relative analyte concentration is expressed as a fold change in signal intensity: it is ratioed to another sample used as a reference and expressed as a signal or concentration ratio.

It must be clearly stated that it is impossible to strictly separate quantification from calibration, since they are interdependent. According to the nature of the calibration standard used, which can be authentic or surrogate, and of the matrix, which can be authentic, surrogate, neat, etc., different quantification strategies have been developed to obtain the effective calibration function. A schematic overview of the differences between the principal quantification modes is given in Figure 1.2 and more extensively explained in the rest of the chapter.

Figure 1.2 Schematic representation of absolute, semi, and relative quantification modes.

1.1.3 Signal Pretreatment and Normalization

Nowadays, it is quite uncommon to use the analog electrical signal output from the measuring instrument to build a calibration model. Digitizing signals in modern instruments opened the way to many pretreatments, such as filtering, background correction, and smoothing. These are sometimes invisible to the analyst, although they can modify the method's performance. The outcome of many methods can be complex signals, such as absorption bands or peaks in spectrophotometry or elution peaks in chromatography.

This raw information is not directly used as the Y variable to build the calibration model; it is preprocessed. When dealing with absorption peaks, it is classic to select one or several wavelengths considered the most informative. For instance, in biochemistry, protein concentration can be quickly estimated by measuring the UV absorbance at 280 nm; proteins show a strong peak here due to the absorbance of tryptophan and tyrosine residues. This can readily be converted into the protein concentration using Beer's law.
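As a numerical illustration of that conversion (the absorbance and molar absorptivity below are assumed values, not taken from the book), Beer's law A = ε·l·c is simply rearranged to c = A/(ε·l):

```python
# Beer's law: A = epsilon * l * c, hence c = A / (epsilon * l)
A280 = 0.56          # measured absorbance at 280 nm (assumed value)
epsilon = 43000.0    # molar absorptivity in L mol^-1 cm^-1 (assumed, protein-specific)
path_length = 1.0    # cuvette path length in cm

c_molar = A280 / (epsilon * path_length)
print(f"protein concentration = {c_molar * 1e6:.1f} micromol/L")
```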

When poorly resolved absorption bands are obtained, as in near-infrared spectroscopy (NIRS), the selection of one specific wavelength is difficult, and the use of a multivariate approach has been promoted. Many publications in the chemometrics literature address this issue. Multivariate calibration based on partial least‐squares regression (PLS) has now become a routine procedure.
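As an illustration of how such a multivariate calibration is set up in practice, here is a minimal sketch using scikit-learn's PLSRegression on synthetic spectra (the data, the noise levels, and the choice of two latent variables are purely illustrative):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Synthetic example: 30 calibration spectra of 200 wavelengths, one reference value each
n_samples, n_wavelengths = 30, 200
concentration = rng.uniform(1.0, 10.0, n_samples)
pure_spectrum = rng.normal(1.0, 0.1, n_wavelengths)
spectra = np.outer(concentration, pure_spectrum) + rng.normal(0.0, 0.05, (n_samples, n_wavelengths))

# Multivariate calibration: PLS regression with two latent variables
pls = PLSRegression(n_components=2)
pls.fit(spectra, concentration)

# Predict the concentration corresponding to a new, unknown spectrum
unknown = spectra[0] + rng.normal(0.0, 0.05, n_wavelengths)
print(pls.predict(unknown.reshape(1, -1)))
```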

If the output signal is time‐resolved, such as liquid or gas chromatographic peaks, it is always pretreated by an integrator. Initially a separate device, the integrator is now included in the monitoring software. It can determine several parameters characterizing the elution peak, such as the retention time at the highest point, skewness, peak height, and mainly peak area. Peak area is generally favored by analysts, but several publications have demonstrated that for some methods peak height is preferable to peak area, and that when standardizing a method the integration conditions must be carefully harmonized [2].

For some methods, such as MS‐coupled methods, the measured response Y can strongly vary according to the detector performance, such as mass analyzer type, ionization modes, ion source parameters, system contamination, ionization enhancement or suppression due to the sample matrix effect, along with other operational variables related to the analytical workflow.

Thus, the analyte relative response is standardized to compare performance over time. A common operation is to add an internal standard (IS) to the study and calibration samples at a fixed concentration. For instance, two official inspection bodies advise evaluating matrix effects when a complex surrogate matrix is used [3, 4]. The Food and Drug Administration (FDA) suggests investigating the matrix effect by testing the parallelism of linear calibration curves computed with the authentic and surrogate matrices. This method is not always effective, because statistical parallelism testing is conservative (i.e. depending on the data configuration, a significant difference may be considered nonsignificant) and it is only applicable to linear models.

Conversely, the European Medicines Agency (EMA) provides full instructions on how to do it and recommends comparing the extraction recovery between the spiked authentic matrix and the surrogate matrix used for calibration, along with the inclusion of an IS as an easy and effective way to correct biases between these two matrices. When the analyte and the IS are affected similarly during the analytical process, instrument signals can be correctly standardized. A comprehensive approach based on the method accuracy profile (MAP) is proposed further on; it is also an effective way to detect and control matrix effects.
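The parallelism check mentioned above amounts to testing whether the slopes of the two linear calibration curves differ. One simple way to formulate it (a sketch with invented data; other statistical formulations exist) is to fit a single linear model with a matrix indicator and test the slope-by-matrix interaction term:

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data in the authentic and in the surrogate matrix
x_auth = np.array([1.0, 2.0, 5.0, 10.0, 20.0]); y_auth = np.array([0.11, 0.20, 0.52, 1.01, 2.05])
x_surr = np.array([1.0, 2.0, 5.0, 10.0, 20.0]); y_surr = np.array([0.10, 0.19, 0.48, 0.93, 1.86])

x = np.concatenate([x_auth, x_surr])
g = np.concatenate([np.zeros_like(x_auth), np.ones_like(x_surr)])  # matrix indicator (0/1)
y = np.concatenate([y_auth, y_surr])

# Design matrix: intercept, common slope, matrix shift, slope-by-matrix interaction
X = np.column_stack([np.ones_like(x), x, g, x * g])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# t-test on the interaction coefficient (the difference between the two slopes)
resid = y - X @ beta
dof = len(y) - X.shape[1]
s2 = resid @ resid / dof
cov = s2 * np.linalg.inv(X.T @ X)
t_stat = beta[3] / np.sqrt(cov[3, 3])
p_value = 2 * stats.t.sf(abs(t_stat), dof)
print(f"slope difference = {beta[3]:.4f}, p = {p_value:.3f}")  # small p: curves are not parallel
```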

Structural analog (carbonitrile derivative) versus stable isotope‐labeled (SIL) internal standard (pregnenolone example).

Two main categories of IS, namely structural analogs and stable SIL, can be identified. The molecule of pregnenolone is used to exemplify this. The first category, visible on the molecule on the left, is related to compounds that generally share structural or physicochemical properties similar to the authentic analyte.

The second category, exemplified by the molecule on the right, includes stable isotopic forms of the analyte, usually by replacing hydrogen 1H, carbon 12C, or nitrogen 14N with deuterium 2H, 13C, or 15N, respectively. Obviously, using labeled IS requires the coupling to a mass spectrometer. Deuterated IS are widely used due to their lower cost. Still, their lipophilicity increases with the number of substituted 2H, leading to differences in their chromatographic retention times with the corresponding authentic analyte. This phenomenon, known as deuterium effect, can also impact the instrumental response or behavior (e.g. the electrospray ionization process in MS) compared to unlabeled compounds.

Even if an increasing number of high‐quality SIL are commercially available, they are limited to the most commonly used chemical compounds. When many analytes must be simultaneously quantified, the possibility of using one IS for multiple analytes should be carefully evaluated. For quantification purposes, using one IS per target compound is generally recommended when available because they are assumed to compensate for specific differences in matrix effect and extraction recovery between the calibration methodology and working samples.

To complete this rapid overview, when compatible with the analytical method, the use of standards linked to the International System of Units (SI) is a convenient means of standardizing the instrumental response and correcting the overall variation in the measurement process resulting from diverse sources of uncertainty, such as sample preparation or interfering compounds, also known as matrix effects. The absolute instrumental response is then normalized as a response ratio:

(1.1) Normalized response ratio = YA / YIS

In this formula, YA and YIS are the responses obtained with the analyte and the IS, respectively. This formula gives a relative instrumental response but does not consider the respective concentrations. To be more in harmony with Figure 1.1, YIS is equivalent to Yc; this new notation is used because the IS is a particular example of a compound used for calibration.
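In practice, Eq. (1.1) is applied point by point before building the calibration model. A minimal sketch (with invented peak areas) of how the normalized response ratio is computed and then used as the Y variable:

```python
import numpy as np

# Hypothetical peak areas for the calibrators: analyte (YA) and internal standard (YIS)
xc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])                 # calibrant concentrations
y_a = np.array([1.2e4, 2.5e4, 4.9e4, 1.25e5, 2.46e5])     # analyte responses
y_is = np.array([9.8e4, 1.02e5, 9.9e4, 1.01e5, 9.7e4])    # IS added at a fixed concentration

y_norm = y_a / y_is                  # Eq. (1.1): normalized response ratio
a1, a0 = np.polyfit(xc, y_norm, 1)   # calibration built on the normalized response

# An unknown sample is processed the same way before inverse prediction
z = ((3.1e4 / 1.00e5) - a0) / a1
print(f"inverse-predicted concentration Z = {z:.2f}")
```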

The influence of signal preprocessing, such as peak integration, was experimentally demonstrated during an interlaboratory study on the determination of fructose, maltose, glucose, lactose, and sucrose in several foods by liquid chromatography [5]. A specific experimental design was developed to achieve this demonstration: participants were requested to report their results quantified using both peak heights and peak areas. Considering the mean values obtained with the two approaches, differences ranged from −18% up to +5%, indicating that trueness may be affected by the quantification mode. Precision, expressed as the reproducibility variance, was computed using both sets of results.

More details about this common precision parameter are given in Section 3.2.1. In Figure 1.3, a subset of the interlaboratory results is reported. Food types are indicated by an uppercase letter ranging from A to L; they are saccharide‐containing processed foods, such as soft drinks, baked foods, or candies. Precision for peak area appears as vertical red bars and for peak height as light green bars. The role of the signal processing method is expressed as a relative contribution to the reproducibility variance. The contributions and their differences are sometimes very small, such as for fructose in food C, where they are below 10%, but sometimes very large, such as for glucose in food I. If a food is absent from the diagram, the analyte was not detected; for instance, L is a chocolate bar that contains no fructose. Peak area is therefore not always the best way to quantify the analyte. The publication explains why these discrepancies exist: they mainly depend on the resolution of the peaks and their relative values.

Figure 1.3 Contribution to the reproducibility of two quantification methods in liquid chromatography of saccharides.

Detecting where a peak begins and ends is a delicate matter and a source of uncertainty for area integration, as explained in Section 4.1.2. Finally, integrator settings can be used to optimize the integration algorithm and, accordingly, influence the global performance of the method.
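To illustrate why integration settings matter, here is a toy sketch (a synthetic chromatogram, not data from the interlaboratory study) showing how peak height and peak area are extracted from a digitized signal once the peak limits have been set by a simple threshold:

```python
import numpy as np

# Synthetic chromatographic peak: Gaussian on a flat baseline with noise
t = np.linspace(0.0, 2.0, 400)                             # time (min)
rng = np.random.default_rng(1)
signal = 100.0 * np.exp(-0.5 * ((t - 1.0) / 0.05) ** 2)    # elution peak
signal += 2.0 + rng.normal(0.0, 0.3, t.size)               # baseline + noise

baseline = np.median(signal[:50])     # crude baseline estimate from the first points
above = (signal - baseline) > 3.0     # threshold that defines the peak start and end
peak = signal[above] - baseline

peak_height = peak.max()
dt = t[1] - t[0]
peak_area = peak.sum() * dt           # simple rectangle-rule integration between the detected limits
print(f"height = {peak_height:.1f}, area = {peak_area:.2f}")
```

Changing the threshold or the baseline estimate changes both values, which is precisely the harmonization issue raised above.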

1.2 Calibration Modes

Two major calibration modes are used in laboratories, namely:

External calibration (EC)

A calibration curve is established independently from the working samples, whatever the calibrant nature and preparation. A single calibration function is used to quantify many samples. This is the most classical procedure, and several variants exist.

Internal calibration (IC)

The term is applied to diverse procedures in which calibration is achieved with a calibrant introduced in different forms into the working samples. Conversely, one calibration function is obtained for each working sample to be quantified. Novel procedures have recently been developed for MS‐based analysis; they are detailed in Section 1.5.

As briefly mentioned before, the nature of the analyte and the availability of the working sample material and of the calibration material influence the selected type of calibration. This can be summarized by the simple table below, leading to at least four different basic configurations.

                      Matrix: Authentic   Matrix: Surrogate
Analyte: Authentic    Yes                 Yes
Analyte: Surrogate    Yes                 Yes

Table 1.1 attempts to classify the different calibration modes, external versus internal, commonly used in the laboratory, including the advantages (pros) and limitations (cons) of each. As illustrated, external calibration (EC) methodologies depend on the availability of both analyte and matrix. For the procedure called in‐sample calibration (ISC), there is no need to select a particular calibration matrix, as the working sample matrix is used; the question of the analyte's availability still remains. The abbreviation ISC is introduced to mark the difference with internal calibration.

1.3 External Calibration (EC)

1.3.1 Authentic Analyte in Authentic Matrix: MMEC

External calibration (EC) corresponds to the most often‐used operating procedure because it allows the rational determination of several routine samples with one pre‐determined calibration function Y = f(X). The first situation, sometimes called matrix‐matched external calibration (MMEC), represents a good metrological quantification approach and is extensively discussed in the major international guidelines to validate bioanalytical methods [6].

With exogenous substances, such as rare pollutant chemicals, a blank matrix is generally available and permits EC with the authentic analyte in a representative matrix. On the other hand, with endogenous compounds at endogenous concentrations, such as vitamins in foods, other approaches should be explored to overcome the absence of an analyte‐free matrix. In this complicated context, alternative procedures have been proposed, such as background subtraction or the use of surrogate matrices and/or analytes, as described below.

Table 1.1 Proposals for a classification of calibration procedures.

External calibration (EC)
Ref.: Authentic analyte | Authentic analyte | Surrogate standard a) | Surrogate standard a)
Matrix: Authentic | Surrogate | Authentic | Surrogate
Method: Matrix‐matched (MMEC) b) | Surrogate matrix | Surrogate analyte | Surrogate analyte and matrix
Pros: Matrix effect and selectivity close to the sample. | Suitable for low‐concentration compounds. | LOQ lower than with background subtraction. | When the authentic analyte is difficult to obtain.
Cons: LOQ defined by the endogenous concentration. | Production of an analyte‐free matrix; possible differences in extraction recovery and matrix effect. | Accuracy depends on surrogate specificity; additional experiments for linearity and LOQ. | Accuracy depends on surrogate specificity; high differences in recovery yield to be expected.

In‐sample calibration (ISC)
Ref.: Authentic analyte | Surrogate standard (calibrant): partially labelled isotope analogue | Surrogate standard (calibrant): fully labelled isotope or structural analogue
Matrix: Authentic | Authentic | Authentic
Method: Standard addition method (SAM) | Isotopic pattern deconvolution (IPD) | Internal calibration (IC)
Pros: Same matrix effect and selectivity as the sample. | High potential for accuracy; relies on isotopic distribution alteration. | High potential for accuracy (SIL); reduced number of calibrators.
Cons: Need for a large initial specimen volume; not easily implemented for high throughput. | Depends on analogue concentration and stability; additional experiments for linearity and LOQ. | Depends on analogue concentration and stability; structural analogues cannot compensate for differences in ionization; additional experiments for linearity and LOQ.

a) Isotope‐labelled or structural analogue.
b) With or without background subtraction.

The use of