Written by an expert in the field of instrumentation and measurement device design, this book employs comprehensive electronic device and circuit specifications to design custom defined-accuracy instrumentation and computer interfacing systems with definitive accountability to assist critical applications.
Advanced Instrumentation and Computer I/O Design, Second Edition begins by developing an understanding of sensor-amplifier-filter signal conditioning design methods, enabled by device and system mathematical models, to achieve conditioned signal accuracies of interest and follow-on computer data conversion and reconstruction functions. Providing complete automated system design analyses that employ the Analysis Suite computer-assisted engineering spreadsheet, the book then expands these performance accountability methods, coordinated with versatile and evolving hierarchical subprocesses and control architectures, to overcome difficult contemporary process automation challenges combining both quantitative and qualitative methods. It then concludes with a taxonomy of computer interfaces and standards including telemetry, virtual, and analytical instrumentation.
Written for international engineering practitioners who design and implement industrial process control systems, laboratory instrumentation, medical electronics, telecommunications, and embedded computer systems, this book will also prove useful for upper-undergraduate and graduate-level electrical engineering students.
Page count: 224
Year of publication: 2013
Contents
Cover
Half Title page
Title page
Copyright page
Preface
Chapter 1: Thermal, Mechanical, Quantum, and Analytical Sensors
1-0 Introduction
1-1 Instrumentation Error Interpretation
1-2 Temperature Sensors
1-3 Mechanical Sensors
1-4 Quantum Sensors
1-5 Analytical Sensors
Problems
Bibliography
Chapter 2: Instrumentation Amplifiers and Parameter Errors
2-0 Introduction
2-1 Device Temperature Characteristics
2-2 Differential Amplifiers
2-3 Operational Amplifiers
2-4 Instrumentation Amplifiers
2-5 Amplifier Parameter Error Evaluation
Problems
Bibliography
Chapter 3: Filters for Measurement Signals
3-0 Introduction
3-1 Bandlimiting Instrumentation Filters
3-2 Active Filter Design
3-3 Filter Error Evaluation
3-4 Bandpass Instrumentation Filters
Problems
Bibliography
Chapter 4: Signal Conditioning Design and Instrumentation Errors
4-0 Introduction
4-1 Low-Level Signal Acquisition
4-2 Signal Quality in Random and Coherent Interference
4-3 DC, Sinusoidal, and Harmonic Signal Conditioning
4-4 Analog Signal Processing
Problems
Bibliography
Chapter 5: Data Conversion Devices and Parameters
5-0 Introduction
5-1 Analog Multiplexers
5-2 Sample-Hold Devices
5-3 Digital-to-Analog Converters
5-4 Analog-to-Digital Converters
Problems
Bibliography
Chapter 6: Sampled Data and Reconstruction with Intersample Error
6-0 Introduction
6-1 Sampled Data Theory
6-2 Aliasing of Signal and Noise
6-3 Sampled Data Intersample and Aperture Errors
6-4 Output Signal Interpolation Functions
6-5 Video Sampling and Reconstruction
Problems
References
Chapter 7: Instrumentation Analysis Suite, Error Propagation, Sensor Fusion, and Interfaces
7-0 Introduction
7-1 Aerospace Computer I/O Design With Analysis Suite
7-2 Measurement Error Propagation in Example Airflow Process
7-3 Homogeneous and Heterogeneous Sensor Fusion
7-4 Instrumentation Integration and Interfaces
Problems
Bibliography
Appendix
Chapter 8: Instrumented Processes Decision and Control
8-0 Introduction
8-1 Process Apparatus Controller Variability and Tuning
8-2 Model Reference to Remodeling Adaptive Control
8-3 Empirical to Intelligent Process Decision Systems
Problems
Bibliography
Chapter 9: Process Automation Applications
9-0 Introduction
9-1 Ashby Map Guided Equiaxed Titanium Forging
9-2 Z-Fit Modeled Spectral Control of Exfoliated Nanocomposites
9-3 Superconductor Production with Adaptive Decision and Control
9-4 Neural Network Attenuated Steel Annealing Hardness Variance
9-5 Ultralinear Molecular Beam Epitaxy Flux Calibration
Bibliography
Index
Advanced Instrumentation and Computer I/O Design
IEEE Press 445 Hoes Lane Piscataway, NJ 08854
IEEE Press Editorial Board John B. Anderson, Editor in Chief
Ramesh Abhari, George W. Arnold, Flavio Canavero, Dmitry Goldgof, Bernhard M. Haemmerli, David Jacobson, Mary Lanzerotti, Om P. Malik, Saeid Nahavandi, Tariq Samad, George Zobrist
Kenneth Moore, Director of IEEE Book and Information Services (BIS)
Copyright © 2013 by the Institute of Electrical and Electronics Engineers, Inc.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. All rights reserved. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representation or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
Garrett, Patrick H. Advanced instrumentation and computer I/O design : defined accuracy decision and control with process applications / Patrick H. Garrett. — Second edition. pages cm ISBN 978-1-118-31708-2 (pbk.) 1. Computer interfaces. 2. Computer input-output equipment—Design—Data processing. 3. Computer-aided engineering. I. Title. TK7887.5.G368 2013 621.39’81—dc23
2012030698
Preface
Widespread need continues across aerospace, biomedical, commercial, and federal domains for the systematic design of instrumented processes aided by advanced decision-and-control methodologies. Realizations have evolved beneficially incorporating defined accuracy data with process automation designs that enable effective attainment of system goals with nominal variability. In this book, real-world applications are accordingly developed, illustrated by a dozen case studies performed for technology enterprises, including the Air Force Materials and Manufacturing Directorate, General Electric Aviation, General Motors Technical Center, Goodyear Tire & Rubber, U.S. Environmental Protection Agency, and Wheeling-Pittsburgh Steel.
The initial sixty percent of this book consecutively develops from input sensor signal conditioning to output sampled-data linear signal reconstruction designs, at data accuracies of interest, limited primarily by the residual errors of included electronic devices. Real-time computer I/O systems are traditionally circuit-design based. Defined accuracy I/O designs alternatively employ device and system models developed in Chapters 1 through 6, as concentrated by the featured instrumentation analysis suite workbook of Chapter 7, which includes a modifiable user interface to exercise chosen device data and system parameters for the evaluation of end-to-end system signal accuracy. This may be downloaded at http://booksupport.wiley.com.
The remaining forty percent of the book evolves process design methods culminating in a hierarchical subprocess control architecture. This includes an upper-level feedforward planner containing models of product features that outputs references to intermediate-level in-situ subprocesses, enabling more definitive feedback control, separate from physical apparatus regulation. Processing effectiveness is achieved by means of focused subprocess decoupling, sensor fusion, accountable data attribution, sensed process migration control planner remodeling, and computational-intelligence reasoning when quantitative models are incomplete. End-of-chapter problems with separate solutions are included as exercises.
PATRICK H. GARRETT
Automation Center, University of Cincinnati
Automated laboratory systems, manufacturing process controls, analytical instrumentation, and aerospace systems all would have diminished capabilities without the availability of contemporary computer-integrated data systems with multisensor information structures. This text accordingly develops supporting error models that enable a unified performance evaluation for the design and analysis of linear and digital instrumentation systems with the goal of compatibility of integration with enterprise quality requirements. These methods then serve as a quantitative framework supporting the design of high-performance automation systems.
This chapter specifically describes the front-end electrical sensor devices for a broad range of applications from industrial processes to scientific measurements. Examples include environmental sensors for temperature, pressure, level, and flow; optical sensors for measurements beyond apparatus boundaries, including spectrometers for chemical analytes; and material and biomedical assays sensed by microwave microscopy. Notably, owing to such advancements, higher-attribution sensors are increasingly being substituted for process models in many applications.
Measured and modeled electronic device, circuit, and system error parameters are defined in this text for combination into a quantitative end-to-end instrumentation performance representation for computer-centered measurement and control systems. It is therefore axiomatic that the integration and optimization of these systems may be achieved by design realizations that provide total error minimization. Total error is graphically described in Figure 1-1, and analytically expressed by Equation (1-1), as a composite of mean error contributions (barred quantities) plus the root-sum-square (RSS) of systematic and random uncertainties. Total error thus constitutes the deviation of a sensor-based measurement from its absolute true value, which is traceable to a standard value harbored by the National Institute of Standards and Technology (NIST). This error is traditionally expressed as 0–100% of full scale (%FS), where the RSS component represents a one-standard-deviation confidence interval, and accuracy is defined as the complement of error (100% − ε_total). Figure 1-2 illustrates generic sensor elements, and the following definitions describe relevant terms:
Figure 1-1. Instrumentation error interpretation.
Figure 1-2. Generic sensor elements.
Accuracy:
The closeness with which a measurement approaches the true value of a measurand, usually expressed as a percent of full scale
Error:
The deviation of a measurement from the true value of a measurand, usually expressed as a percent of full scale
Tolerance:
Allowable deviation about a reference of interest
Precision:
An expression of a measurement over some span described by the number of significant figures available
ε_total = Σ ε̄ + [Σ ε²]^(1/2) %FS   (1-1)
Resolution:
An expression of the smallest increment into which a measured quantity can be resolved
Span:
An expression of the extent of a measurement between any two limits
Range:
An expression of the total extent of measurement values
Linearity:
Variation in the error of a measurement with respect to a specified span of the measurand
Repeatability:
Variation in the performance of the same measurement
Stability:
Variation in a measurement value with respect to a specified time interval
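The total-error combination described above, mean (bias) contributions summed with the root-sum-square of independent uncertainties and accuracy taken as the complement of error, can be sketched as follows; the component values here are illustrative only, not figures from the text:

```python
from math import sqrt

def total_error(mean_errors, rss_errors):
    """Combine error contributions in %FS: the sum of mean (bias)
    errors plus the root-sum-square of independent uncertainties."""
    return sum(mean_errors) + sqrt(sum(e**2 for e in rss_errors))

# Illustrative %FS budget for, e.g., sensor, amplifier, and filter stages
mean = [0.05, 0.02]        # systematic bias errors, %FS
rss = [0.10, 0.08, 0.05]   # one-sigma independent uncertainties, %FS

error = total_error(mean, rss)   # total error, %FS
accuracy = 100.0 - error         # accuracy as the complement of error
print(f"total error = {error:.3f} %FS, accuracy = {accuracy:.3f} %")
```

Note that bias terms add directly while independent one-sigma uncertainties combine in quadrature, which is why the RSS term grows more slowly than a simple sum.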
Technology has advanced significantly as a consequence of sensor development. Sensor nonlinearity is a common source of error that can be minimized by means of multipoint calibration. Practical implementation often requires the synthesis of a linearized sensor that achieves the best asymptotic approximation to the true value over a measurement range of interest.
The cubic function of Equation (1-2) is an effective linearizing equation demonstrated over the full 700°C range of a commonly applied Type-J thermocouple, which is tabulated in Table 1-1. Solution of the A and B coefficients at judiciously spaced temperature values defines the linearizing equation with a 0°C intercept. Evaluation at linearized 100°C intervals throughout the thermocouple range reveals temperature values nominally within 1°C of their true temperatures, which correspond to typical errors of 0.25%FS. It is also useful to express the average of discrete errors over the sensor range, obtaining a mean error value of FS for the Type-J thermocouple. This example illustrates a design goal proffered throughout this text of not exceeding one-tenth percent error for any contributing system component. Extended polynomials permit further reduction in linearized sensor error while incurring increased computational burden, where a fifth-order equation can beneficially provide linearization to 0.1 °C, corresponding to FS mean error.
Table 1-1. Sensor cubic linearization
(1-2)
Coefficient for 10.779 mV at 200°C:
Coefficient for 27.393 mV at 500°C:
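Assuming the cubic linearization of Equation (1-2) takes the form T = A·V + B·V³ with a 0°C intercept (the equation itself is not reproduced here, so this form is an assumption), the A and B coefficients can be solved as a 2×2 linear system from the two calibration points quoted above, 10.779 mV at 200°C and 27.393 mV at 500°C:

```python
def solve_cubic_coefficients(v1, t1, v2, t2):
    """Solve A*v + B*v**3 = T at two calibration points by Cramer's rule."""
    det = v1 * v2**3 - v2 * v1**3
    a = (t1 * v2**3 - t2 * v1**3) / det
    b = (v1 * t2 - v2 * t1) / det
    return a, b

# Type-J calibration points from the text: (mV, degC)
A, B = solve_cubic_coefficients(10.779, 200.0, 27.393, 500.0)

def linearized_temp(v_mV):
    """Linearized temperature from thermocouple emf, assumed cubic form."""
    return A * v_mV + B * v_mV**3

# The fit reproduces both calibration temperatures exactly by construction
print(A, B, linearized_temp(10.779), linearized_temp(27.393))
```

Evaluating this fit at intermediate emf values against the reference table is what produces the roughly 1°C residuals cited in the text.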
Thermocouples are widely used temperature sensors because of their ruggedness and broad temperature range. Two dissimilar metals are used in the Seebeck-effect temperature-to-emf junction with transfer relationships described by Figure 1-3. Proper operation requires the use of a thermocouple reference junction in series with the measurement junction to polarize the direction of current flow and maximize the measurement emf. Omission of the reference junction introduces an uncertainty evident as a lack of measurement repeatability equal to the ambient temperature.
Figure 1-3. Temperature–millivolt graph for thermocouples.
(Courtesy Omega Engineering, Inc., an Omega Group Company.)
An electronic reference junction that does not require an isolated supply can be realized with an Analog Devices AD590 temperature sensor as shown in Figure 4-5. This reference junction usually is attached to an input terminal barrier strip in order to track the thermocouple-to-copper circuit connection thermally. The error signal is referenced to the Seebeck coefficients in mV/°C of Table 1-2, and provided as a compensation signal for ambient temperature variation. A single calibration trim at ambient temperature provides temperature tracking to within a few tenths of a °C.
Table 1-2. Thermocouple comparison data
Resistance-thermometer devices (RTDs) provide greater resolution and repeatability than thermocouples, the latter typically being limited to approximately 1°C. RTDs operate on the principle of resistance change as a function of temperature, and are represented by a number of devices. The platinum resistance thermometer is frequently utilized in industrial applications because it offers accuracy with mechanical and electrical stability. Thermistors are fabricated from a sintered mixture of metal alloys, forming ceramics exhibiting a significant negative temperature coefficient. Metal film resistors have an extended and more linear range than thermistors, but thermistors exhibit approximately ten times the sensitivity. RTDs require excitation, usually provided as a constant-current source, in order to convert their resistance change with temperature into a voltage change. Figure 1-4 presents the temperature–resistance characteristics of common RTD sensors.
Figure 1-4. RTD devices.
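A minimal sketch of the constant-current excitation described above, converting RTD resistance change to voltage, follows. The first-order Pt100 model R(T) ≈ R0(1 + αT) with α ≈ 0.00385/°C is the standard industry value, not a figure from this text:

```python
ALPHA = 0.00385   # nominal Pt100 coefficient, 1/degC (IEC 60751 value, assumed)
R0 = 100.0        # platinum RTD resistance at 0 degC, ohms

def rtd_resistance(temp_c):
    """First-order platinum RTD model: R(T) = R0 * (1 + alpha*T)."""
    return R0 * (1.0 + ALPHA * temp_c)

def rtd_voltage(temp_c, i_excitation=1e-3):
    """Constant-current excitation converts resistance change to voltage."""
    return i_excitation * rtd_resistance(temp_c)

# 1 mA excitation: 100 mV at 0 degC, rising about 0.385 mV per degC
print(rtd_voltage(0.0), rtd_voltage(100.0))
```

Keeping the excitation current small limits self-heating error in the sensing element, which is one reason 1 mA is a common choice.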
Optical pyrometers are utilized for temperature measurement when sensor physical contact with a process is not feasible, but a view is available. Measurements are limited to energy emissions within the spectral response capability of the specific sensor used. A radiometric match of emissions between a calibrated reference source and the source of interest provides a current analog corresponding to temperature. Automatic pyrometers employ a servo loop to achieve this balance, as shown in Figure 1-5. Operation to 5000°C is available.
Figure 1-5. Automatic pyrometer.
Fluid pressure is defined as the force per unit area exerted by a gas or a liquid on the boundaries of a containment vessel. Pressure is a measure of the energy content of hydraulic and pneumatic (liquid and gas) fluids. Hydrostatic pressure refers to the internal pressure at any point within a liquid directly proportional to the liquid height above that point, independent of vessel shape. The static pressure of a gas refers to its potential for doing work, which does not vary uniformly with height as a consequence of its compressibility. Equation (1-3) expresses the basic relationship between pressure, volume, and temperature as the general gas law. Pressure typically is expressed in terms of pounds per square inch (psi) or inches of water (in H2O) or mercury (in Hg). Absolute pressure measurements are referenced to a vacuum, whereas gauge pressure measurements are referenced to the atmosphere.
A pressure sensor responds to pressure and provides a proportional analog signal by means of a pressure–force summing device. This usually is implemented with a mechanical diaphragm and linkage to an electrical element such as a potentiometer, strain gauge, or piezoresistor. Quantities of interest associated with pressure–force summing sensors include their mass, spring constant, and natural frequency. Potentiometric elements are low in cost and have high output, but their sensitivity to vibration and mechanical nonlinearities combine to limit their utility. Unbonded strain gauges offer improvements in accuracy and stability, with errors to 0.5% of full scale, but their low output signal requires a preamplifier. Present developments in pressure transducers involve integral techniques to compensate for the various error sources, including crystal diaphragms for freedom from measurement hysteresis. Figure 1-6 illustrates a microsensor-circuit pressure transducer for enhanced reliability, with an internal vacuum reference, chip heating to minimize temperature errors, and a piezoresistor bridge transducer circuit with on-chip signal conditioning.
Figure 1-6. Integrated pressure microsensor.
PV = nRT   (1-3)
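Equation (1-3), the general gas law relating pressure, volume, and temperature, can be exercised numerically; the ideal-gas form PV = nRT is assumed here:

```python
R_GAS = 8.314  # universal gas constant, J/(mol*K)

def gas_pressure(n_mol, temp_k, volume_m3):
    """Ideal-gas form of the general gas law: P = nRT/V, in pascals."""
    return n_mol * R_GAS * temp_k / volume_m3

# Sanity check: 1 mol at 273.15 K in 22.414 liters is about 1 atm (101325 Pa)
p = gas_pressure(1.0, 273.15, 22.414e-3)
print(p)
```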
Liquid levels are frequently required process measurements in tanks, pipes, and other vessels. Sensing methods of various complexity are employed, including float devices, differential pressure, ultrasonics, and bubblers. Float devices offer simplicity and various means of translating motion into a level reading. A differential-pressure transducer can also measure the height of a liquid when its specific weight W is known, and a ΔP cell is connected between the vessel surface and bottom. Height is provided by the ratio of ΔP/W.
Accurate sensing of position, shaft angle, and linear displacement is possible with the linear variable-displacement transformer (LVDT). With this device, an ac excitation introduced through a variable-reluctance circuit is induced in an output circuit through a movable core that determines the amount of displacement. LVDT advantages include overload capability and temperature insensitivity. Sensitivity increases with excitation frequency, but a minimum ratio of 10:1 between excitation and signal frequencies is considered a practical limit. LVDT variants include the induction potentiometer, synchros, resolvers, and the microsyn. Figure 1-7 describes a basic LVDT circuit with both ac and dc outputs.
Figure 1-7. Basic LVDT.
Fluid-flow measurement generally is implemented either by differential-pressure or mechanical-contact sensing. Flow rate F is the time rate of fluid motion, with dimensions typically in feet per second. Volumetric flow Q is the fluid volume per unit time, such as gallons per minute. Mass flow rate M for a gas is defined, for example, in terms of pounds per second. Differential-pressure-flow-sensing elements also are known as variable-head meters because the pressure difference between the two measurements ΔP is equal to the head. This is equivalent to the height of the column of a differential manometer. Flow rate is therefore obtained with the 32 ft/sec² gravitational constant g and differential pressure by Equation (1-4). Liquid flow in open channels is obtained by head-producing devices such as flumes and weirs. Volumetric flow is obtained with the flow cross-sectional area and the height of the flow over a weir, as shown by Figure 1-8 and Equation (1-5).
Figure 1-8. (a) Flow rate, (b) volumetric flow, (c) mass flow.
(1-4)
(1-5)
(1-6)
where
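The variable-head-meter relation described above can be sketched numerically. Since Equation (1-4) itself is not reproduced, the classic form F = √(2gh), with h the differential head, is assumed here:

```python
from math import sqrt

G = 32.0  # gravitational constant, ft/s^2, as used in the text

def flow_rate(head_ft):
    """Classic variable-head meter relation: F = sqrt(2*g*h), in ft/s."""
    return sqrt(2.0 * G * head_ft)

# A 1-ft differential head corresponds to a flow velocity of 8 ft/s
print(flow_rate(1.0))
```

The square-root dependence is why differential-pressure flow measurements lose resolution at low flow rates, where small head changes map to large velocity changes.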
Acceleration measurements are principally of interest for shock and vibration sensing. Potentiometric dashpots and capacitive transducers have largely been supplanted by piezoelectric crystals. Their equivalent circuit is a voltage source in series with a capacitance, as shown in Figure 1-9, which produces an output in coulombs of charge as a function of acceleration excitation. Vibratory acceleration results in an alternating output typically of very small value. Several crystals are therefore stacked to increase the transducer output. As a consequence of the small quantities of charge transferred, this transducer usually is interfaced to a low-input-bias-current charge amplifier, which also converts the acceleration input to a velocity signal. An ac-coupled integrator will then provide a displacement signal that may be calibrated, for example, in milliinches of displacement per volt.
Figure 1-9. Vibration measurement.
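The integration chain above, acceleration to velocity to displacement, can be sketched analytically for a sinusoidal vibration: each integration divides the peak amplitude by the angular frequency ω. The amplitude and frequency below are illustrative, not values from the text:

```python
import math

F_HZ = 100.0    # illustrative vibration frequency, Hz
A_PEAK = 10.0   # illustrative peak acceleration
W = 2.0 * math.pi * F_HZ  # angular frequency, rad/s

# For a(t) = A*sin(w*t): v(t) = -(A/w)*cos(w*t), x(t) = -(A/w**2)*sin(w*t)
def velocity_peak(a_peak, w):
    """Peak velocity after one integration (charge-amplifier output)."""
    return a_peak / w

def displacement_peak(a_peak, w):
    """Peak displacement after a second (ac-coupled) integration."""
    return a_peak / w**2

print(velocity_peak(A_PEAK, W), displacement_peak(A_PEAK, W))
```

The 1/ω² displacement scaling explains why high-frequency vibration produces very small displacement signals even at substantial acceleration levels.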
A load cell is a transducer whose output is proportional to an applied force. Strain-gauge transducers provide a change in resistance due to mechanical strain produced by a force member. Strain gauges may be based on a thin metal wire, foil, thin films, or semiconductor elements. Adhesive-bonded gauges are the most widely used, with a typical resistive strain element of 350 Ω that will register full-scale changes to 15 Ω. With a Wheatstone-bridge circuit, a 2-V excitation may therefore provide up to a 50-mV output signal change, as described in Figure 1-10. Semiconductor strain gauges offer high sensitivity at low strain levels, with outputs of 200 mV to 400 mV. Miniature tactile-force sensors can also be fabricated from scaled-down versions of classic transducers by employing MEMS technology. A multiplexed array of these sensors can provide sense feedback for robotic part manipulation and teleoperator actuators.
Figure 1-10. Strain gauge.
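The bridge conversion above can be sketched for a single active arm. Note that the output scales with the number of active bridge arms, so this quarter-bridge case yields less than the up-to-50-mV figure quoted for the general configuration:

```python
def quarter_bridge_output(v_excitation, r_gauge, delta_r):
    """Exact quarter-bridge output: one active gauge, three equal fixed arms."""
    return v_excitation * ((r_gauge + delta_r) / (2.0 * r_gauge + delta_r) - 0.5)

# 350-ohm gauge with a full-scale 15-ohm change and 2-V excitation
v_out = quarter_bridge_output(2.0, 350.0, 15.0)
print(v_out)  # roughly 21 mV for a single active arm
```

Half- and full-bridge configurations with additional active gauges multiply this output and also cancel temperature-induced resistance drift common to all arms.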
Ultrasound ranging and imaging systems are increasingly being applied for industrial and medical purposes. A basic ultrasonic system is illustrated by Figure 1-11 consisting of a phased-array transducer and associated signal processing, including aperture focusing by means of time delays, that is employed in both medical ultrasound and industrial nondestructive-testing applications. Multiple frequency emissions in the 1–10 MHz range are typically employed to prevent spatial multipath cancellations. B-scan ultrasonic imaging displays acoustic reflectivity for a focal plane, and C-scan imaging provides integrated volumetric reflectivity of a region around the focal plane.
Figure 1-11. Phased-array ultrasound system.
Hall-effect transducers, which usually are silicon-substrate devices, frequently include an integrated amplifier to provide a high-level output. These devices typically offer an operating range from −40 to +150°C and a linear output. Applications include magnetic-field sensing and position sensing with circuit isolation, such as the Micro Switch LOHET device, which offers a 3.75-mV/gauss response. Figure 1-12 describes the principle of Hall-effect operation. When a magnetic field Bz is applied perpendicular to a current-conducting element, a force acts on the current Ix creating a diversion of its flow proportional to a difference of potential. This measurable voltage Vy is pronounced in materials such as InSb and InAs, and occurs to a useful degree in Si. The magnetic field usually is provided as a function of a measurand.
Figure 1-12. Hall-effect transducer.
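The 3.75-mV/gauss LOHET response quoted above implies a simple linear transfer, sketched here; a zero-field offset of zero is assumed for illustration:

```python
SENSITIVITY_MV_PER_GAUSS = 3.75  # LOHET response quoted in the text

def hall_output_mv(field_gauss):
    """Linear Hall transducer transfer: output voltage vs. flux density."""
    return SENSITIVITY_MV_PER_GAUSS * field_gauss

def field_from_output(v_mv):
    """Invert the transfer to recover the measurand field in gauss."""
    return v_mv / SENSITIVITY_MV_PER_GAUSS

print(hall_output_mv(100.0))  # 375.0 mV at 100 gauss
```

In practice integrated Hall devices sit at a mid-supply offset at zero field so that both field polarities can be resolved; that offset is omitted here for clarity.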
Quantum sensors are of significant interest as electromagnetic spectrum transducers over a frequency range extending from the far-infrared region at 10¹¹ Hz through the visible spectrum at about 10¹⁴ Hz to the far-ultraviolet region at 10¹⁷ Hz. These photon sensors are capable of measurements of a single photon whose energy E equals hν, or watt-seconds in radiometry units from Table 1-3, where h is Planck’s constant of 6.62 × 10⁻³⁴ joule-seconds and ν is frequency in hertz. Frequencies lower than infrared represent the microwave region, and those higher than ultraviolet constitute X-rays, which require different transducers for measurement. In photometry, one lumen represents the power flux emitted over one steradian from a source of one candela intensity. For all of these sensors, incident photons result in an electrical signal by an intermediate transduction process.
Table 1-3. Quantum sensor units
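Single-photon energy E = hν across the quantum-sensor range described above can be tabulated directly with the quoted Planck constant:

```python
H_PLANCK = 6.62e-34  # Planck's constant in joule-seconds, as quoted in the text

def photon_energy_joules(freq_hz):
    """Single-photon energy E = h*nu, in watt-seconds (joules)."""
    return H_PLANCK * freq_hz

# Far-infrared, visible, and far-ultraviolet frequencies from the text
for nu in (1e11, 1e14, 1e17):
    print(f"{nu:.0e} Hz -> {photon_energy_joules(nu):.2e} J")
```

The six-decade spread in per-photon energy across this range is why no single transduction mechanism serves from far-infrared to far-ultraviolet.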
Table 1-4 describes the relative performance of the principal sensors. In photodiodes, photons generate electron–hole pairs within the junction depletion region. Phototransistors offer signal gain at the source for this transduction process over the basic photodiode. In photoconductive cells, photons generate carriers that lower the sensor bulk resistance, but their utility is limited by a restricted frequency response. These sensors are shown in Figures 1-13 and 1-14. In all applications, it is essential to match sources and sensors spectrally in order to maximize energy transfer. For diminished photon sources, the photomultiplier excels owing to a photoemissive cathode followed by high multiplicative gain to 10⁶ from its cascaded dynode structure. The high gain and inherent low noise provided by coordinated multiplication result in a widely applicable sensor, except for the infrared region. Presently, the photomultiplier vacuum electron ballistic structure does not have a solid-state equivalent.
Figure 1-13. Photodiode characteristics.
Figure 1-14. Photoconductive characteristics.
Table 1-4. Sensor relative performance
Figure 1-15. Quantum sensor array.
A property common to all nuclear radiation is its ability to interact with the atoms that constitute all matter. The nature of the interaction with any form of matter varies with the different components of radiation, as illustrated in Figure 1-16. These components are responsible for interactions with matter that generally produce ionization of the medium through which they pass. This ionization is the principal effect used in the detection of the presence of nuclear radiation. Alpha and beta rays often are not encountered because of their attenuation. Instruments for nuclear radiation detection therefore are most commonly constructed to measure gamma radiation and its scintillation or luminescent effect. The rate of ionization in roentgens per hour is a preferred measurement unit, and represents the product of the emanations in curies and the sum of their energies in MeV, represented as gamma energies. A distinction also should be made between disintegrations in counts per minute and ionization rate. The count-rate measurement is useful for half-life determination and nuclear detection, but does not provide exposure-rate information for interpretation of degree of hazard. The estimated yearly radiation dose to persons in the United States is 0.25 roentgen (R). A high-radiation area is one in which radiation levels exceed 0.1 R per hour, and requires posting of a caution sign.
Figure 1-16. Nuclear radiation characteristics.
Methods for detecting nuclear radiation are based on means for measuring the ionizing effects of these radiations. Mechanizations fall into the two categories of pulse-type detectors of ionizing events, and ionization-current detectors that employ an ionization chamber to provide an averaged radiation effect. The first category includes Geiger–Mueller tubes and more sensitive scintillation counters capable of individual counts. Detecting the individual ionizing scintillations is aided by an activated crystal such as sodium iodide optically coupled to a high-amplification photomultiplier tube, as shown in Figure 1-17. Ionization-current detectors primarily are employed in health physics applications such as industrial areas subject to high radiation levels. An ion chamber is followed by an amplifier whose output is calibrated in roentgens per hour ionization rate. This method is necessary where pulse-type detectors are inappropriate because of a very high rate of ionization events. Practical industrial applications of nuclear radiation and detection include thickness gauges, nondestructive testing such as X-ray inspection, and chemical analysis such as by neutron activation.
Figure 1-17. Scintillation detector.
Figure 1-18. Optical spectrometer structure.
Figure 1-19. Mass spectrometer structure.
Online measurements of industrial processes and chemical streams often require the use of selective chemical analyzers for the control of a processing unit. Examples include oxygen for boiler control, sulfur oxide emissions from combustion processes, and hydrocarbons associated with petroleum refining. Laboratory instruments such as gas chromatographs generally are not used for online measurements primarily because they analyze all compounds present simultaneously rather than a single one of interest.
The dispersive infrared analyzer is the most widely used chemical analyzer, owing to the range of compounds it can be configured to measure. Operation is by the differential absorption of infrared energy in a sample stream in comparison to that of a reference cell. Measurement is by deflection of a diaphragm separating the sample and reference cells, which in turn detunes an oscillator circuit to provide an electrical analog of compound concentration. Oxygen analyzers usually are of the amperometric type, in which oxygen is chemically reduced at a gold cathode, resulting in a current flow from a silver anode as a function of this reduction in oxygen concentration. In a paramagnetic wind device, a wind effect is generated when a mixture containing oxygen produces a gradient in a magnetic field. Measurement is derived by the thermal cooling effect on a heated resistance element forming a thermal anemometer. Table 1-5 describes basic electrochemical analyzer methods, and Figure 1-20 a basic gas-analyzer system with calibration.
Table 1-5. Chemical analyzer methods
Compound         Analyzer
CO, SOx, NHx     Infrared
O2               Amperometric, paramagnetic
HC               Flame ionization
NOx              Chemiluminescent
H2S              Electrochemical cell
Figure 1-20. Calibrated gas analyzer.
Also in this group are pH, conductivity, and ion-selective electrodes. pH defines the balance between the hydrogen ions H+ of an acid and the hydroxyl ions OH−
