Bioelectronics is a rich field of research involving the application of electronics engineering principles to biology, medicine, and the health sciences. With its interdisciplinary nature, bioelectronics spans state-of-the-art research at the interface between the life sciences, engineering and physical sciences.
Introductory Bioelectronics offers a concise overview of the field and teaches the fundamentals of biochemical, biophysical, electrical, and physiological concepts relevant to bioelectronics. It is the first book to bring together these various topics, and to explain the basic theory and practical applications at an introductory level.
The authors describe and contextualise the science by examining recent research and commercial applications. They also cover the design methods and forms of instrumentation that are required in the application of bioelectronics technology.
The text is a valuable resource for engineering and physical sciences students in bioelectronics, biomedical engineering and micro/nano-engineering. It will also serve researchers without formal training in biology who are entering PhD programmes or working on industrial projects in these areas.
Contents
Cover
Title Page
Copyright
About the Authors
Foreword
Preface
Acknowledgements
Chapter 1: Basic Chemical and Biochemical Concepts
1.1 Chapter Overview
1.2 Energy and Chemical Reactions
1.3 Water and Hydrogen Bonds
1.4 Acids, Bases and pH
1.5 Summary of Key Concepts
Problems
References
Further Readings
Chapter 2: Cells and their Basic Building Blocks
2.1 Chapter Overview
2.2 Lipids and Biomembranes
2.3 Carbohydrates and Sugars
2.4 Amino Acids, Polypeptides and Proteins
2.5 Nucleotides, Nucleic Acids, DNA, RNA and Genes
2.6 Cells and Pathogenic Bioparticles
2.7 Summary of Key Concepts
References
Further Readings
Chapter 3: Basic Biophysical Concepts and Methods
3.1 Chapter Overview
3.2 Electrostatic Interactions
3.3 Hydrophobic and Hydration Forces
3.4 Osmolarity, Tonicity and Osmotic Pressure
3.5 Transport of Ions and Molecules across Cell Membranes
3.6 Electrochemical Gradients and Ion Distributions Across Membranes
3.7 Osmotic Properties of Cells
3.8 Probing the Electrical Properties of Cells
3.9 Membrane Equilibrium Potentials
3.10 Nernst Potential and Nernst Equation
3.11 The Equilibrium (Resting) Membrane Potential
3.12 Membrane Action Potential
3.13 Channel Conductance
3.14 The Voltage Clamp
3.15 Patch-Clamp Recording
3.16 Electrokinetic Effects
References
Chapter 4: Spectroscopic Techniques
4.1 Chapter Overview
4.2 Introduction
4.3 Classes of Spectroscopy
4.4 The Beer-Lambert Law
4.5 Impedance Spectroscopy
Problem
References
Further Readings
Chapter 5: Electrochemical Principles and Electrode Reactions
5.1 Chapter Overview
5.2 Introduction
5.3 Electrochemical Cells and Electrode Reactions
5.4 Electrical Control of Electron Transfer Reactions
5.5 Reference Electrodes
5.6 Electrochemical Impedance Spectroscopy (EIS)
Problems
References
Further Readings
Chapter 6: Biosensors
6.1 Chapter Overview
6.2 Introduction
6.3 Immobilisation of the Biosensing Agent
6.4 Biosensor Parameters
6.5 Amperometric Biosensors
6.6 Potentiometric Biosensors
6.7 Conductometric and Impedimetric Biosensors
6.8 Sensors Based on Antibody–Antigen Interaction
6.9 Photometric Biosensors
6.10 Biomimetic Sensors
6.11 Glucose Sensors
6.12 Biocompatibility of Implantable Sensors
References
Further Readings
Chapter 7: Basic Sensor Instrumentation and Electrochemical Sensor Interfaces
7.1 Chapter Overview
7.2 Transducer Basics
7.3 Sensor Amplification
7.4 The Operational Amplifier
7.5 Limitations of Operational Amplifiers
7.6 Instrumentation for Electrochemical Sensors
7.7 Impedance Based Biosensors
7.8 FET Based Biosensors
Problems
References
Further Readings
Chapter 8: Instrumentation for Other Sensor Technologies
8.1 Chapter Overview
8.2 Temperature Sensors and Instrumentation
8.3 Mechanical Sensor Interfaces
8.4 Optical Biosensor Technology
8.5 Transducer Technology for Neuroscience and Medicine
Problems
References
Further Readings
Chapter 9: Microfluidics: Basic Physics and Concepts
9.1 Chapter Overview
9.2 Liquids and Gases
9.3 Fluids Treated as a Continuum
9.4 Basic Fluidics
9.5 Fluid Dynamics
9.6 Navier-Stokes Equations
9.7 Continuum versus Molecular Model
9.8 Diffusion
9.9 Surface Tension
Problems
References
Further Readings
Chapter 10: Microfluidics: Dimensional Analysis and Scaling
10.1 Chapter Overview
10.2 Dimensional Analysis
10.3 Dimensionless Parameters
10.4 Applying Nondimensional Parameters to Practical Flow Problems
10.5 Characteristic Time Scales
10.6 Applying Micro- and Nano-Physics to the Design of Microdevices
Problems
References
Appendix A: SI Prefixes
Appendix B: Values of Fundamental Physical Constants
Appendix C: Model Answers for Self-study Problems
C.1 Chapter 1
C.2 Chapter 4
C.3 Chapter 5
C.4 Chapter 7
C.5 Chapter 8
C.6 Chapter 9
C.7 Chapter 10
Index
This edition first published 2013
© 2013, John Wiley & Sons, Ltd
Registered office
John Wiley & Sons, Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication Data
Pethig, Ronald.
Introductory bioelectronics : for engineers and physical scientists / Ronald
Pethig, Stewart Smith.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-119-97087-3 (cloth)
1. Bioelectronics–Textbooks. I. Smith, Stewart, 1975- II. Title.
QH509.5.P48 2012
572′.437–dc23
2012016834
A catalogue record for this book is available from the British Library.
Print ISBN: 9781119970873
About the Authors
Ronald Pethig is Professor of Bioelectronics in the School of Engineering, University of Edinburgh, and holds PhD degrees in electrical engineering (Southampton) and physical chemistry (Nottingham) with a DSc awarded for work in biomolecular electronics from the University of Southampton. He has enjoyed a long association with the Marine Biological Laboratory, Woods Hole, being elected a Corporation Member in 1982 and an Adjunct Senior Scientist in 2005. Ron is a Fellow of the Institution of Engineering and Technology, and of the Institute of Physics, and is author of Dielectric and Electronic Properties of Biological Materials published by John Wiley & Sons, Ltd. He has received several awards, including being the first recipient in 2001 of the Herman P. Schwan Award, and serves as editor-in-chief of the IET journal Nanobiotechnology and editor of the Wiley Microsystem and Nanotechnology series.
Stewart Smith is an RCUK Academic Fellow with the School of Engineering, University of Edinburgh. He completed his PhD in 2003 at the University of Edinburgh and since then has worked as a researcher at the Scottish Microelectronics Centre working on research ranging from microelectronic test and measurement to microsystems for biomedical applications. Prior to his appointment to the RCUK fellowship, he was lead researcher on an industrially funded project that developed a prototype implantable drug delivery device for the treatment of ocular disease. A paper resulting from this work later won the 2008 IET Nanobiotechnology Premium Award. Stewart is a member of the technical committee for the IEEE International Conference on Microelectronic Test Structures. His research interests include the design and fabrication of biomedical microsystems, test structures for MEMS processes and the electrical characterisation of advanced photolithography.
Foreword
There is no doubt that the continued convergence of engineering, science and medicine in the 21st century will drive new treatments, devices, drugs, diagnostics and therapies for healthcare. Worldwide there is a desperate need for effective and economical medical interventions to care for an ageing population that is growing in number and to help lessen the burden on healthcare systems of the frightening rise in chronic diseases and conditions such as diabetes and cardiovascular disease. The rise in chronic illness is to a great extent being driven by lifestyle changes and as countries become more prosperous and industrialised they see the burden of chronic illness rise. The numbers of people affected are notable. For example, the World Health Organisation (WHO) estimates that 346 million people worldwide have diabetes and that diabetes related deaths are set to double between 2005 and 2030. Type II Diabetes is growing because of sedentary lifestyles and obesity. It does not simply bring problems with blood sugar but complications of uncontrolled glucose levels can lead to cardiovascular disease, eyesight problems, renal problems and wound care problems, creating a complex and growing patient load for healthcare providers. Cardiovascular disease is even more prevalent and claimed the lives of 17.6 million in 2008 and the WHO estimates that this will rise to 26.3 million by 2030.
Thus governments and healthcare providers know that changes must be made to reduce chronic disease where possible, and to deliver care effectively and economically to those who are affected by it.
Medical technology and medical devices have a crucial part to play in helping society care for these populations, and interventions based on technology and devices are already widespread and growing. The portable glucose meters which diabetics can use to check their blood sugar levels at any time were developed from biosensor technology and have now become a reliable fixture of diabetes treatment. Current research in the field has produced subdermal sensors for glucose that can be left in place for up to a week, and the future will bring transdermal sensors that will use, or modify, the permeability of the skin to extract glucose for analysis. As another example, there is interest in the use of stem cells to grow new tissue or to repair damaged tissue. Many of these interventions will require tissue scaffolds to guide and nourish the stem cells, so materials scientists, engineers and life scientists are exchanging information in multidisciplinary research projects for tissue repair.
In terms of healthcare provision, governments, health services and medical companies are embracing the concept of delivering much of the monitoring and therapy for patients within their own homes rather than in hospitals and clinics. Where telehealth systems have been adopted for monitoring they have been well received by patients who can receive daily reassurance about their conditions by taking and relaying their own measurements to their clinicians. Developing medical situations that cause concern can trigger earlier interventions and treatment through telehealth monitoring and both hospital admissions and mortality are reduced where telehealth is properly implemented. This growing demand for home monitoring requires not only the advanced telecommunications and wireless systems that engineers have developed but more advances in sensor and imaging technology to allow a wide range of conditions to be monitored. This poses a big challenge requiring more bioelectronics based research and development.
It is clear that our current healthcare problems support the need for the training of more engineers and physicists in bioelectronics for medical device and technology development. It is crucial that good training is provided by experienced practitioners in bioelectronics. The fields of medicine, medical technologies and devices are heavily regulated environments and research projects must be based on cognisance of the human body and medical science as well as technology. It is too easy for well meaning teams of engineers and scientists to create research projects that cannot deliver to the clinical interface because key elements of biology, toxicology and the inflammatory response have not been understood. Teams who will make real advances in this sector will include clinicians and engineers and physicists who have knowledge of medical science and bioelectronics.
Beyond medical devices and healthcare needs, the field of bioelectronics has expanded to produce devices with micro and nano scale features that allow the study of individual cells in vitro or in vivo. Thus, for example, the response of an individual cardiac or neural cell to a pharmacological agent may be studied via a microfabricated biosensor in contact with the cell. The study of individual and group behaviour of cells provides important information for a range of researchers including biologists, materials scientists and pharmacologists. However, this is again a challenging area for researchers and device development and implementation in this field requires an understanding of engineering principles combined with cell biology. Knowledge of bioelectronics is thus a key need for a student entering this field.
Given the wide range of students that can be drawn from the sectors described above and their different needs Professor Pethig and Dr Smith are to be commended for producing an excellent textbook as an introduction to bioelectronics. It is clear from the content and style of the book that in these authors we have real researchers and teachers who perfectly understand the needs of the new student in the subject. All of the key basic elements of cell biology, biophysics and chemistry are clearly set out to ensure that the student understands the basics before the book moves on to introduce the key technologies in the field for sensors, instrumentation and spectroscopy. The book does not shy away from discussing practical problems in systems and the discussion and teaching on the problems of implanting biosensors will shed light on the disappointing results already obtained by many who are already working in this field.
I will be recommending this excellent textbook to my own students and I congratulate Professor Pethig and Dr Smith on their achievement.
Professor Patricia Connolly, FRSE FIET FRSM CEng
Director, Strathclyde Institute of Medical Devices, University of Strathclyde, Glasgow, Scotland
Preface
This book is written for engineering and physical science students studying courses in bioelectronics, biomedical engineering and micro/nano-engineering, at either an undergraduate or postgraduate level, as well as for researchers entering PhD programmes or working on projects in these subject areas. It aims to teach key topics in biology, chemistry, electrochemistry, biophysics, biosensors and microfluidics of relevance to bioelectronics, and also to place this subject into the context of modern biomedical engineering by examining the state of the art in research and commercial applications. Graduates and researchers wishing to bridge the interface between engineering and the life sciences may also find this book helpful.
The book content is derived from selected background material, lecture notes and tutorials provided to postgraduate students studying for the MSc Degree in Bioelectronics at the University of Edinburgh, and to undergraduates studying for the MEng Degree in Electronics with Bioelectronics. PhD students and postdoctoral researchers from different scientific and engineering backgrounds, working on various aspects of biosensors and lab-on-chip devices, also attend some of the lecture courses. Bioelectronics, as introduced to the students and in this textbook, involves the application of electronic engineering and biophysical principles to biology and medicine. An important aspect of this is the development of a communication interface between electronic components and biological materials such as cells, tissue, and organs. The interdisciplinary nature of the subject means that students and researchers will enter bioelectronics courses from different backgrounds, and to accommodate this some of the chapters cover material delivered to the Bioelectronics MSc students as either background revision notes or introductory material. The first two chapters cover basic chemical, biochemical, biological and thermodynamic concepts that are required for an understanding of the content of subsequent chapters. Condensing subjects that normally merit separate textbooks of their own into two chapters certainly risks the content appearing to be too shallow for readers having good background knowledge in chemistry and biology. We have learnt, however, not to underestimate the extent to which engineering graduates appreciate being reminded of such basic concepts as chemical bonds, pH and Avogadro's number, for example, and their background in biological subjects is often not extensive. 
Some electronic engineers even find it useful to be reminded of how operational amplifiers function, and we do this in Chapter 7, not only as an aid to them but also as an introduction for readers with little background in electronics. To provide access to more basic or more extensive treatments of the book content, most chapters contain suggestions for further reading and other reference material.
Bioelectronics is an exciting and growing field of endeavour that will provide important advances for bioengineering and biomedicine. We hope that this textbook will help students and young researchers to become leading lights for such advances.
Acknowledgements
We thank Professor Andrew Mount of the School of Chemistry at the University of Edinburgh for his feedback after careful reading of Chapters 4 and 5. We also thank Laura Bell, Clarissa Lim, Peter Mitchell, Liz Wingett and the late Nicky Skinner in the various editorial and production offices of John Wiley & Sons, for their constant support, help, and above all, patience.
Chapter 1
Basic Chemical and Biochemical Concepts
This chapter presents the background concepts of chemistry and thermodynamics of relevance to the subject of bioelectronics, and which are discussed further in most chapters of this book. The level of the material covered in this chapter is probably comparable to that covered by most students in pre-university basic chemistry courses. Graduates in engineering and the physical sciences may need to dig deeply into their recollections of such courses, and may also face new concepts. One objective here is to provide an awareness of some basic concepts of the chemical and energetic functioning of biological systems, of which even a modest understanding will go a long way to mastering the interdisciplinary field of bioelectronics.
After reading this chapter readers will gain a refreshed or new understanding of energy and chemical reactions, chemical bond formation, water and hydrogen bonds, and acids, bases and pH.
A distinguishing characteristic of a living, rather than a nonliving, system is the ability to perform chemical transformations that produce fluxes of matter and energy. This process is termed metabolism. Other characteristics that aid the identification of the living state are molecular organisation into systems of increasing complexity, and the abilities to self-reproduce and to adapt to changes in environmental factors. The minimal level of organisation capable of exhibiting all these characteristics is the cell.

The two principal forms of energy are kinetic and potential, associated with motion and stored energy, respectively. Kinetic energy in a molecular system can be interpreted in terms of the motions of its constituent molecules, which we term heat. This heat can be determined indirectly by measuring the temperature of the molecular system. For heat to perform work (as in an engine) it must flow from a region of higher to lower temperature. However, living systems are isothermal – they function at constant temperature and cannot utilise heat flow as a source of energy. Instead, living systems utilise the potential energy stored in the chemical bonds of molecules such as glucose or adenosine triphosphate (ATP). Cells continuously degrade such molecules, and the potential energy released when their chemical bonds are broken is used to perform various kinds of work, including the pumping of substances across membranes to produce chemical concentration gradients that in turn serve as sources of stored potential energy. This process, in which chemical bond energy is converted into energy stored in the form of a chemical concentration gradient, is an example of the first law of thermodynamics, which states that energy can neither be created nor destroyed.
Other biological examples of this law include photosynthesis where the energy of sunlight absorbed by green leaves is converted into the chemical bond energy of glucose molecules, and in the conversion of chemical bond energy into mechanical and electrical energy by muscle cells and nerve cells, respectively.
All of the metabolic processes that produce the energy fluxes required for maintaining the living state involve the making and breaking of strong, covalent, chemical bonds between atoms in a molecule.
Most biological molecules contain only six different atoms, namely carbon, hydrogen, oxygen, nitrogen, phosphorus and sulphur. The locations of these atoms in the Periodic Table of Elements are shown in Table 1.1. The electron shells of the atoms are labelled K, L and M. Each shell is composed of one or more subshells that represent the electronic orbitals about the nucleus of the atom. The first shell, K, has one subshell called the 1s shell and can accommodate a maximum of two electrons. The second shell, L, has two subshells (2s, 2p) that can accommodate a maximum of eight electrons, with six in the 2p shell. The third shell, M, has three subshells (3s, 3p, 3d) and can accommodate a maximum of 18 electrons, with 10 in the 3d shell.
Table 1.1 The locations of hydrogen (H), carbon (C), nitrogen (N), oxygen (O), phosphorus (P) and sulphur (S) in the Periodic Table of Elements.
Electrons in the outer shells have higher average energies than those in the inner shells, and their electron orbitals can extend farther from the nucleus. This contributes to how chemically reactive a particular atom may be in its interaction with other atoms. We can schematically represent the number and arrangement of electrons in the outer electron shells of these atoms as follows [1, 2]:
A covalent bond is formed by the sharing of unpaired electrons, one from the outer electron shell of each atom, between the nuclei of two atoms. These shared electrons then enter an electronic orbital that is common to both atoms, acting to reduce the repulsive force between the two positively charged nuclei and to hold them closely together. Thus, the hydrogen atom with one unpaired electron can form only one covalent bond, whilst carbon with four electrons forms four bonds. An example of this is methane (CH4):
In the methane molecule the carbon atom is covalently bonded to four hydrogens.
In ethylene (C2H4) the two carbon atoms are held together by a double bond, and through the polymerisation of ethylene these double bonds are opened up to form the structure of polyethylene:
The nitrogen and phosphorus atoms possess five electrons in their outer electronic shells. These atoms can form either three covalent bonds (leaving a lone pair of unbonded electrons) or five covalent bonds. Examples include ammonia (NH3) and phosphoric acid (H3PO4):
Oxygen contains six electrons in its outer electronic shell (the 2s and 2p subshells) and requires just two more electrons to completely fill this shell. It can accomplish this by forming two covalent bonds with another atom, such as in molecular oxygen (O2) or in the carbonyl (C=O) chemical group:
The sulphur atom can also form two covalent bonds in this manner, as in hydrogen sulphide (H2S). The outer electronic shell of the oxygen atom has two pairs of electrons that are not involved in covalent bond formation. This, however, does not apply to the sulphur atom, which can form as many as six covalent bonds, as in sulphuric acid (H2SO4):
Concentrations of substances dissolved in solutions are often given in terms of weight/volume (e.g. mg/L, or mg/100 mL – a common clinical unit). These units do not depend on knowledge of the molecular structure of the measured substance. For a substance with a known molecular structure, one can define a mole of that substance.
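To make the distinction concrete, the short sketch below converts a clinical weight/volume concentration into a molar one. It assumes glucose as the measured substance, with a molar mass of about 180.16 g/mol (a value not quoted in the text):

```python
# Converting a clinical weight/volume unit (mg/100 mL, i.e. mg/dL)
# into a molar concentration (mmol/L). Illustrative only: the molar
# mass of glucose (~180.16 g/mol) is an assumed value, not from the text.

def mg_per_dl_to_mmol_per_l(conc_mg_dl, molar_mass_g_mol):
    """Convert mg/dL to mmol/L for a substance of known molar mass."""
    conc_mg_per_l = conc_mg_dl * 10.0        # 1 L = 10 dL
    return conc_mg_per_l / molar_mass_g_mol  # (mg/L) / (g/mol) = mmol/L

# A typical fasting blood glucose of 90 mg/dL is about 5 mmol/L:
print(round(mg_per_dl_to_mmol_per_l(90.0, 180.16), 2))  # 5.0
```

The same conversion works for any substance of known molecular structure; for a substance of unknown structure only the weight/volume figure is meaningful.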
Table 1.2 Part of the simplified Periodic Table of Elements to give the mass, in atomic mass units (amu), of some atoms of biological importance.
Table 1.3 Activity coefficient values for some common compounds that dissociate into ions in solution. (Derived from the CRC Handbook of Chemistry and Physics, 87th edn, 2006–2007).
In a covalent bond formed between two identical atoms, such as the C–C bond, the bonding electrons are equally shared between the atoms. Such a bond is termed nonpolar. Molecules such as Cl2, H2 and F2 are nonpolar, for example. In the description of the concentrations of ions in terms of their equivalents of charge concentration, we have alluded to the concept that different atoms exhibit different tendencies for the sharing of electrons. This tendency can be quantified by their electronegativity, using a scale measured from a hypothetical zero to a maximum value of 4.0 (close to that possessed by the most electronegative atom, fluorine). The electronegativity values of some atoms are listed in Table 1.4.
Table 1.4 The electronegativity values for the atoms listed in Table 1.2 based on the Pauling electronegativity scale [4].
We note from Table 1.4 that atoms toward the upper right of the Periodic Table of Elements are more electronegative, and those to the lower left are least electronegative. From this table we can judge that carbon disulphide (CS2) has almost equal sharing of its electrons when forming its C–S covalent bonds, and so has nonpolar bonds. As a guideline, a maximum difference of 0.4–0.5 in electronegativity is often used to define the limit for the formation of a nonpolar bond. For the C–Cl bond there is an unequal sharing of electrons, with electronic charge on average spending more time nearer to the chlorine atom (giving it a slightly negative charge δ−) and less time near to the carbon atom (making it slightly positively charged δ+). We say that this bond is a polar bond. The H–F bond is particularly polar. Molecules such as NH3 and H2O also possess polar bonds, which gives them an electric dipole moment (Figure 3.2, Chapter 3). They will tend to align themselves with an externally applied electric field. Typically, chemical bonds formed between atoms having an electronegativity difference less than 1.6 (but greater than 0.5) are considered to be polar. For larger differences we approach the situation where there is complete transfer of an electron from the least to the most electronegative atom. This type of bond is termed ionic. The guideline here is that when the electronegativity difference is greater than 2.0 the bond is considered to be ionic. Common salt (NaCl) is a good example, forming ionic crystals held together by the coulombic forces between the positively charged Na+ and negatively charged Cl− ions. KCl and MgCl2 are other examples of ionic solids.
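These guideline thresholds can be expressed as a small classifier. The sketch below uses standard Pauling electronegativity values and, for simplicity, treats everything between the nonpolar cutoff (0.5) and the ionic cutoff (2.0) as polar, so bonds in the borderline 1.6–2.0 range are also labelled polar:

```python
# Classifying a bond from the Pauling electronegativity difference.
# The 0.5 and 2.0 cutoffs follow the guidelines quoted in the text;
# the electronegativity values are standard Pauling-scale figures.

PAULING = {'H': 2.20, 'C': 2.55, 'N': 3.04, 'O': 3.44,
           'F': 3.98, 'Na': 0.93, 'Cl': 3.16, 'S': 2.58}

def bond_type(a, b):
    """Rough bond classification for elements a and b."""
    diff = abs(PAULING[a] - PAULING[b])
    if diff <= 0.5:
        return 'nonpolar covalent'
    elif diff < 2.0:
        return 'polar covalent'
    else:
        return 'ionic'

print(bond_type('C', 'S'))    # nonpolar covalent (difference ~0.03)
print(bond_type('H', 'F'))    # polar covalent (difference ~1.78)
print(bond_type('Na', 'Cl'))  # ionic (difference ~2.23)
```

Real bonding is of course a continuum; such cutoffs are only a convenient rule of thumb.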
If two highly electronegative atoms are bonded together, the bond between them is usually quite unstable. This occurs in hydrogen peroxide (H–O–O–H), where the strong attractions of the bonding electrons towards the two strongly electronegative oxygen atoms make it a highly reactive molecule.
Atoms can be defined by a characteristic ‘size’ known as their van der Waals radius. This radius can be determined from investigations of the mechanical properties of gases, from X-ray determinations of the atomic spacing between unbonded atoms in crystals, and from dielectric and optical experiments. The van der Waals radii for some atoms of biological relevance are given in Table 1.5.
Table 1.5 Van der Waals radii values for some atoms (derived from [5]).
If two nonbonding atoms are brought together they initially show a weak bonding interaction, produced by fluctuating electric interactions of the outer electrons of one atom with the positive charge of the other atom's nucleus, and vice versa. As depicted in Figure 1.1 this can be considered as a fluctuating dipole–dipole interaction between the atoms.
Figure 1.1 Van der Waals attractive interactions arise from dipole-dipole interactions between two nonbonding atoms. This interaction fluctuates in tune with how the outer electrons in each atom distribute themselves in their orbitals. In a covalent bond, where a pair of electrons occupies a common (molecular) orbital, the atoms are brought closer together than the sum of their van der Waals radii.
The attractive force between the two atoms increases until their separation distance falls below the sum of their van der Waals radii, at which point the two atoms repel each other very strongly. A mathematically simple model, known as the Lennard-Jones 6–12 potential, can be used to approximate the interaction between a pair of electrically neutral atoms or molecules [3]. The attractive long-range interaction varies as 1/r⁶, where r is the interatomic distance, and the short-range repulsive force is assumed to vary as 1/r¹². The resultant energy is taken as the sum of these two terms and, as shown in Figure 1.2, the equilibrium distance between the two atoms or molecules corresponds to the minimum of the potential energy curve. An insight into the origins of the 1/r⁶ and 1/r¹² dependencies is given in Chapter 3. This model is often used to describe the properties of gases and to model interatomic interactions in molecular simulations.
Figure 1.2 The resultant van der Waals force (solid line) can be approximated as the sum of the long-range attractive interaction, assumed to vary as r⁻⁶, and the short-range repulsive force which varies as r⁻¹² [3]. The equilibrium distance, in units of the separation r, is located at the minimum of the resulting potential energy (PE) curve.
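The behaviour shown in Figure 1.2 is easy to verify numerically. A minimal sketch of the 6–12 potential, in arbitrary reduced units (the well depth ε and contact distance σ are both set to 1; these values are illustrative, not taken from the text), confirms that the minimum lies at r = 2^(1/6)·σ with energy −ε:

```python
# The Lennard-Jones 6-12 pair potential in reduced units:
# V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
# Its minimum is at r_min = 2**(1/6)*sigma, with V(r_min) = -eps.

def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Pair potential energy at separation r (same units as sigma)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)   # sr6*sr6 is the r**-12 term

r_min = 2.0 ** (1.0 / 6.0)                 # analytic equilibrium separation
print(round(lj_potential(r_min), 6))       # -1.0, i.e. the well depth -epsilon
print(lj_potential(0.9) > 0.0)             # True: strongly repulsive inside sigma
```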
Van der Waals attraction between two atoms is weak, but when many atoms are involved, as occurs when two macromolecular ‘surfaces’ come into intimate contact, it can become a significant force of attraction. For example, van der Waals interactions make an important contribution to the total force holding together the stable conformations of large molecules such as proteins. When two atoms form a covalent bond, their atomic centres are much closer together than the sum of their van der Waals radii. For example, a single-bonded carbon pair is separated by 0.15 nm, and double-bonded carbons are 0.13 nm apart.
Chemical reactions involve the breaking and forming of covalent bonds, which in turn entails the flow of electrons. When writing down the course of a chemical reaction, curved arrows (known as ‘electron pushing arrows’) are used to symbolise the electron flow. An example is shown below for the reaction between ammonia and formaldehyde, and is a step in the eventual production of hexamine:

NH3 + H2C=O → H3N+–CH2–O−
In the reaction shown above, one of the lone pairs of electrons on the nitrogen atom of the ammonia molecule is donated to form a covalent bond with the carbon atom of formaldehyde. This leaves a positive charge on the nitrogen atom. So that the carbon atom retains four covalent bonds, the C=O double bond is reduced to a single bond; the pair of electrons so released is donated to the oxygen atom, leaving it negatively charged.
In accordance with the first law of thermodynamics, the amount of energy released on the formation of a covalent bond is equal to that required for the breaking of it. The strengths of some chemical bonds, in terms of the amount of energy required to break the bond, are given in Table 1.6.
Table 1.6 The strengths of some chemical bonds [6].
To interpret the concept of chemical bond strength, let us consider the C–C bond. From Table 1.6 we note that one mole of C–C bonds has a bond energy of 346 kJ. The energy required to break a single C–C bond is thus 3.46 × 10^5 J / (6.022 × 10^23 bonds/mol) = 5.75 × 10^−19 J. To break the C–C bond let us assume we need to ‘stretch’ it by 0.2 nm, so that the carbon atoms are spaced 0.35 nm apart, a distance just over the sum of their van der Waals radii. Energy E is expended when a force F moves an object through a distance d (E = F·d). The force required to break a C–C bond can thus be estimated as the bond energy 5.75 × 10^−19 J divided by the stretch distance 0.2 nm, giving 2.87 nN (1 joule = 1 newton·metre). To help place this into perspective we can use the concept of tensile strength, which is the force per unit area required to break a material. If we think of a C–C bond as a carbon wire, it will have a cross-sectional area of ~2 × 10^−20 m^2. The tensile strength of a C–C bond is thus ~2.87 nN / (2 × 10^−20 m^2) ≈ 1.44 × 10^11 N/m^2 = 144 GPa (1 pascal (Pa) = 1 N/m^2). By comparison, the tensile strengths of materials such as stainless steel and titanium are less than 1 GPa.
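The arithmetic in this estimate is easy to check. The following Python sketch (ours, using the same assumed stretch distance and cross-sectional area as the text) reproduces the per-bond energy, force and tensile-strength figures:

```python
# Back-of-envelope estimate of the force and tensile strength of a C-C bond,
# following the numbers used in the text.
AVOGADRO = 6.022e23          # bonds per mole
bond_energy_mol = 346e3      # J/mol for a C-C bond (Table 1.6)
stretch = 0.2e-9             # m, assumed distance over which the bond breaks
area = 2e-20                 # m^2, assumed cross-section of a 'carbon wire'

energy_per_bond = bond_energy_mol / AVOGADRO      # ~5.75e-19 J
force = energy_per_bond / stretch                 # ~2.87e-9 N, since E = F*d
tensile_strength = force / area                   # ~1.44e11 Pa, i.e. ~144 GPa
print(f"{energy_per_bond:.2e} J, {force:.2e} N, {tensile_strength / 1e9:.0f} GPa")
```

The result, roughly two orders of magnitude above the tensile strength of steel, is what makes covalent bonds the structural backbone of biomolecules.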
Performing a bookkeeping exercise on the number and type of covalent bonds involved in the reaction we have depicted for ammonia and formaldehyde, we see that the C=O bond is replaced with a C–O bond, and that a C–N bond is created. From Table 1.6 we can deduce that if 1 mole of ammonia reacts with 1 mole of formaldehyde to produce 1 mole of the intermediary product, there is a total bond energy loss of 67 kJ. This loss of bond energy will be given off as heat during the reaction. However, in practice, the amount of heat generated will depend on how much of the original reactants convert into the product. In other words, we need to know the equilibrium state of the reaction.
All chemical reactions reach an equilibrium state. When the reactants are first brought together they react at a rate determined by their initial concentrations. This follows from the Law of Mass Action, which states that the rate of a chemical reaction is proportional to the active masses of the reacting substances. We can represent a reaction as:

A + B → X + Y  (forward rate ∝ [A][B])
The concentrations of the reactants decrease as the reaction progresses and their rate of reaction decreases. At the same time, as the amounts of products accumulate they begin to participate in the reverse reaction:

X + Y → A + B  (reverse rate ∝ [X][Y])
Finally, the stage will be reached where the rates of the forward and reverse reactions become equal and the concentrations of the reactants and products do not change with time. We say that the reaction has reached chemical equilibrium. The equilibrium constant for the reaction is defined as the ratio of the equilibrium concentrations of the products and reactants. For a simple reaction A + B ↔ X + Y
Keq = [X][Y] / ([A][B])    (1.1)
The custom is to use square brackets to designate the equilibrium molar concentrations under standard conditions, namely at 25 °C (298 K) and at a pressure of 1 atmosphere. By convention, when water (H2O) or hydrogen ions (H+) are reactants or products, they are treated in Equation (1.1) as having effective concentrations (activities) of unity (see also Section 1.3). Thus, under standard conditions and starting with 1 M concentrations for all the components of the reaction, when Keq > 1.0 the reaction proceeds in the forward direction (A + B → X + Y), whilst for Keq < 1.0 the reactions proceeds in reverse (A + B ← X + Y).
Because biological systems function at constant temperature and pressure, we can use as a measure of the potential energy released or stored in chemical reactions the concept of Gibbs free-energy (named after Josiah Willard Gibbs, an early founder of the science of thermodynamics). Gibbs demonstrated that free-energy G is given by the relationship:

G = H − TS
where H is the heat energy (also termed enthalpy) of the chemical system, T is the absolute temperature, and S is termed the entropy and provides a measure of the degree of disorder of the system. We are interested in the change of free-energy ΔG that results from a molecule's chemical structure being changed in a chemical reaction. Contributions to the ΔG of a reaction, at constant temperature and pressure, come from the change in heat content between reactants and products, and the change in entropy of the system:

ΔG = ΔH − TΔS
Enthalpy H is released or absorbed in a chemical reaction when bonds are formed or broken. ΔH is thus equal to the overall change in bond energies. We can distinguish between a reaction in which heat is given off (an exothermic reaction) and one in which heat is absorbed (an endothermic reaction). In an exothermic reaction ΔH is negative and the products contain less energy than the original reactants. This is the situation we have deduced for the initial reaction of ammonia with formaldehyde. In an endothermic reaction (heat absorbed) ΔH is positive and the products contain more energy than the reactants.
By convention, ΔS is positive when entropy, and thus disorder, increases. The second law of thermodynamics states that the entropy of an isolated system, which is not in equilibrium, will tend to increase over time and to approach a maximum value at equilibrium. The change in Gibbs free-energy ΔG for a spontaneous chemical reaction is always negative, and so a negative value for ΔH (heat given off) and a positive ΔS tend to lead to a spontaneous reaction. Under certain conditions a chemical reaction having a positive ΔG can also occur spontaneously. For this to occur it must be strongly coupled with another reaction having a negative ΔG of larger absolute value than the nonspontaneous reaction. For example, suppose we have the following two reactions:

A ↔ B + Y    (ΔG positive)
Y ↔ Z        (ΔG negative, and larger in magnitude)
Formation of B + Y in the first reaction will not occur spontaneously, but any Y that is formed is converted spontaneously to Z in the second reaction. This lowers the equilibrium concentration of Y in the first reaction and has the effect of pushing the reaction to produce more B + Y. Because free-energy changes are additive, the sum of the two component reactions is an overall reaction with a negative ΔG, which can be represented by:

A ↔ B + Z    (ΔG = ΔG₁ + ΔG₂ < 0)
Energetically unfavourable reactions of the type A ↔ B + Y are common in cells and are often coupled to reactions having a large negative ΔG. We will see that an important example of this is the coupling of reactions to the hydrolysis of the molecule adenosine triphosphate (ATP).
The concept of heat being generated or absorbed during a chemical reaction is straightforward – but how do we judge whether there is an increase or decrease in entropy? Determining whether a chemical reaction produces more molecules as products than were originally present in the reactants can provide a simple guide. If the reaction produces an increased number of molecules of the same physical state (i.e. solid, liquid or gas) there will be an increase of entropy, because their constituent atoms will be more randomly dispersed and thus more disordered. Likewise, if a solid reactant converts into either a liquid or gas, its constituent atoms will gain dynamic freedom and thus an increase of entropy.
Chemists define a standard set of conditions to describe the standard free-energy change ΔG° of a chemical reaction. The reaction is assumed to occur at 298 K (25 °C) for gases at 1 atmosphere (101.3 kPa) and solute concentrations of 1 M. It can be shown that the relationship between the equilibrium constant Keq and the standard free-energy change ΔG° is given by the expression
ΔG° = −RT ln Keq    (1.2)
where R is the gas constant (8.314 J mol^−1 K^−1) and T is the standard temperature (298 K). Thus, Keq values of 10 and 0.1, for example, correspond to ΔG° values of −5.7 kJ/mol and +5.7 kJ/mol, respectively. Equation (1.2) can also be written in the form:

Keq = e^(−ΔG°/RT)
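The relationship between Keq and ΔG° is easy to verify numerically. The Python sketch below (our illustration, not code from the text) reproduces the ±5.7 kJ/mol values quoted above:

```python
import math

R = 8.314   # gas constant, J mol^-1 K^-1
T = 298.0   # standard temperature, K

def delta_G_standard(Keq):
    """Standard free-energy change (J/mol) from the equilibrium constant, Eq. (1.2)."""
    return -R * T * math.log(Keq)

def Keq_from_delta_G(dG):
    """Inverse form of Eq. (1.2): Keq = exp(-dG/RT)."""
    return math.exp(-dG / (R * T))

print(delta_G_standard(10) / 1e3)    # ~ -5.7 kJ/mol: reaction runs forward
print(delta_G_standard(0.1) / 1e3)   # ~ +5.7 kJ/mol: reaction runs in reverse
```

Note the symmetry: a tenfold change in Keq always shifts ΔG° by the same 5.7 kJ/mol at 298 K, a direct consequence of the logarithm in Equation (1.2).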
When a chemical reaction is not at equilibrium we can interpret the free-energy change ΔG as the driving force that tends to move the reaction towards equilibrium, and we can view ΔG° as an alternative description of the equilibrium constant Keq of a chemical reaction. We should note, however, that this tells us nothing about the rate of a chemical reaction. Many chemical reactions that have a large negative ΔG° value may not proceed at a measurable rate at all! We have seen that chemical reactions involve the breaking and making of covalent bonds. This can only occur if the reacting molecules come close enough together during a collision between them. Some of these collisions may provide the opportunity for the desired sharing of valence electrons between an atom and its ‘target’ reactant, but many others may not.

An effective way to increase the rate of a chemical reaction is to add a nonreacting compound, called a catalyst, that facilitates the breaking and making of new bonds between the reactants. Catalysts commonly perform this helpful task by causing a redistribution of the electrons in the reacting atoms, such that the bonds that need to be broken are weakened. This lowers the energy barrier in the reaction ‘pathway’ and increases the reaction rate. After the reaction occurs, the catalyst becomes available to repeat this procedure with another set of reactants; the catalyst is thus not consumed in the reaction. The catalysts that take part in biological reactions are large protein molecules called enzymes. As depicted in Figure 1.3, an enzyme-catalysed reaction involves the reacting molecule (the substrate) ‘locking’ neatly into an enzyme receptor site that is specifically configured for that substrate.
Figure 1.3 In an enzyme catalysed reaction the reactant (substrate S) first binds to a specific site on the enzyme to form an enzyme-substrate complex E-S. The bound substrate then reacts with another reactant to form a product P, leaving the enzyme E available for further reactions.
The overall reaction shown in Figure 1.3 can be written as:

E + S ↔ E-S → E + P
We should also be aware that the standard Gibbs free-energy change ΔG° informs us how far, and in which direction, a reaction will go to reach equilibrium only when the starting concentration of each chemical component is 1 M and at the standard conditions for temperature (298 K) and pressure (101.3 kPa). However, for practically all cases we are only really interested in the actual free-energy change ΔG for reactions where the chemical concentrations are not all the same and not equal to 1 M, and for temperatures other than 298 K. It can be shown for any chemical reaction

A + B ↔ X + Y

that ΔG and ΔG° are related by the expression:

ΔG = ΔG° + RT ln([X][Y]/[A][B])    (1.3)
where the concentrations of the various components and the temperature T are those actually involved. When the reaction reaches the equilibrium state, at which point there is no driving force for the reaction and ΔG = 0, Equation (1.3) reduces to (1.2):

0 = ΔG° + RT ln Keq,  that is,  ΔG° = −RT ln Keq

or

Keq = e^(−ΔG°/RT)
For almost all types of biological organism, the most important source of free-energy is generated by the breaking of a phosphate bond in adenosine triphosphate (ATP). ATP consists of adenosine (composed of an adenine group and a ribose sugar) and three phosphate groups (triphosphate), two of which are often referred to as ‘high-energy bonds’. One or both of these bonds can be broken by an enzyme-catalysed hydrolysis reaction. Hydrolysis is a reaction in which a molecule is cleaved into two parts by the addition of a water molecule. The case where one bond is broken, to form adenosine diphosphate (ADP) plus inorganic phosphate (Pi) and a proton (H+), can be represented as:

ATP + H2O → ADP + Pi + H+
The standard Gibbs free-energy change ΔG° for this reaction is −30.5 kJ/mol, which from Equation (1.2) corresponds to an equilibrium constant:

Keq = e^(−ΔG°/RT) = e^(30 500/(8.314 × 298)) ≈ 2.2 × 10^5 M
From Equation (1.1), and noting that we can by convention treat the water and hydrogen ion components as having activities equal to 1.0, the equilibrium concentrations of ATP, ADP and Pi satisfy the relationship:

Keq = [ADP][Pi]/[ATP] ≈ 2.2 × 10^5 M
However, this does not reflect the actual concentrations found in living cells. For example, in the cytoplasm of a red blood cell the concentrations of ATP and ADP are typically 2.25 and 0.25 mM, respectively, and inorganic phosphate Pi has a concentration of 1.65 mM. This gives [ADP][Pi]/[ATP] = 1.83 × 10^−4 M, a value some nine orders of magnitude away from the equilibrium situation! The hydrolysis of ATP to ADP is being driven very hard, and this means that the free-energy change available to do useful work is high. We can calculate this free-energy change from Equation (1.3). Assuming a normal physiological temperature of 37 °C (310 K) we have:

ΔG = ΔG° + RT ln([ADP][Pi]/[ATP]) = −30.5 kJ/mol + (8.314 × 310) ln(1.83 × 10^−4) J/mol ≈ −30.5 − 22.2 = −52.7 kJ/mol
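This red-blood-cell calculation can be checked in a few lines. The Python sketch below (our illustration; the concentrations are those quoted in the text) applies Equation (1.3) at 310 K:

```python
import math

R = 8.314        # gas constant, J mol^-1 K^-1
T = 310.0        # physiological temperature, K (37 C)
dG0 = -30.5e3    # standard free energy of ATP hydrolysis, J/mol

# Typical red-blood-cell cytoplasm concentrations (mol/L) quoted in the text
ATP, ADP, Pi = 2.25e-3, 0.25e-3, 1.65e-3
Q = ADP * Pi / ATP              # reaction quotient, ~1.83e-4 M

dG = dG0 + R * T * math.log(Q)  # Eq. (1.3): actual free-energy change
print(f"Q = {Q:.3e} M, dG = {dG / 1e3:.1f} kJ/mol")   # ~ -52.7 kJ/mol
```

The logarithmic term contributes an extra ~22 kJ/mol of driving force on top of ΔG°, which is why the intracellular value is so much larger than the standard one.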
Thus, the free-energy released by the conversion of ATP into ADP within a cell is significantly larger than the standard free-energy change. Cells use this free-energy to perform many functions, such as: the synthesis of important molecules (DNA, RNA, proteins, lipids); the pumping across membranes of ions and molecules against their concentration gradients (see Figure 3.11 and discussion in Chapter 3); the actions of muscle and nerve cells and the firing of neurons; and the maintenance of a constant body temperature. In plant cells the energy required to convert ADP back into ATP is supplied by trapping the energy of sunlight through photosynthesis. In animal cells and nonphotosynthetic microorganisms it is supplied by the free-energy released in the enzyme-controlled breaking down of the chemical bonds in glucose molecules.
We noted at the beginning of this chapter that a distinguishing characteristic of a living system is its molecular organisation. In a procedure repeated countless times in cell biology laboratories around the world each day, a single microorganism, such as E. coli, can be isolated from an existing culture, transferred to a new culture dish and placed in an incubator. Apart from this single organism, the dish will typically contain no more than magnesium, sulphate, phosphate and ammonium ions, water and glucose molecules. Within a day or so, depending on the type of organism, a billion or so new cells will have been generated from the original one. For the case of E. coli (a bacterium of cylindrical form about 2 μm long and 1 μm in diameter) its composition includes more than 2 million protein molecules of at least 2000 types, more than 20 million lipid molecules, hundreds of thousands of RNA molecules, and one very large molecule of DNA containing 5 million base pairs. Contained within the volume bounded by its rigid cell wall, formed of interwoven polysaccharides, peptides and lipids, there are also around 300 million ions and metabolites, and some 4 × 10^10 water molecules. Thus, the few types of atoms randomly distributed (i.e. of high entropy) in the culture dish have been incorporated into highly organised (low entropy) systems using the free-energy derived from glucose, in an apparently defiant stance against the finality dictated by the second law of thermodynamics! This apparent contradiction of a basic natural law, that all real processes involve the degradation of energy and the dissipation of order, has been clarified by Schrödinger [7], who explained that living organisms extract ‘negentropy’ from their surroundings. What we call the living-state, a state far removed from equilibrium, is maintained by utilising sources of free-energy provided by the continuous flow of energy from the sun to its end point of wasted heat.
We have noted that the E. coli bacterium contains some 4 × 10^10 (40 billion) water molecules – representing about 90% of its total weight. Water also contributes about 70–80% of the total weight of animal cells. Most of the biochemical reactions taking place in a cell do so in an aqueous environment. Water is the mater and matrix of life. It is the only inorganic liquid that occurs naturally on earth, and the only chemical compound that occurs naturally in all three physical states, namely solid, liquid and vapour. On the basis of its molecular size, the melting and boiling points of water should be about 100 K lower than they are, and its heat of vaporisation, heat of fusion and surface tension are higher than those of the comparable hydrides hydrogen sulphide (H2S) and ammonia (NH3), or indeed of most other common liquids. These unique macroscopic properties can only arise from there being strong forces of attraction between the molecules in liquid water – without them water on earth would exist as a gas rather than as a liquid. These strong forces of attraction take the form of hydrogen bonds.
A hydrogen atom normally forms just one covalent bond at a time with another atom. However, in Section 1.2.4 we learnt that the water molecule is dipolar. Each hydrogen atom loses a fraction δ of its electronic charge to the oxygen atom, leaving each hydrogen slightly positively charged (δ+) and the oxygen slightly negatively charged by an amount 2δ−. The molecule thus forms an electric dipole m, which can align with an externally applied electric field E:
Liquid water is thus a dielectric material possessing a relatively large value of its relative permittivity (dielectric constant) of around 80 at room temperature. The large relative permittivity value for liquid water arises because each water molecule forms transient hydrogen bonds with several other water molecules. The motion of one water molecule dipole moment induced by the electric field is cooperatively coupled to several other water molecules. This cooperative motion enhances the dielectric polarisability of bulk water, because the effective dipole moment per unit volume is larger than that for the situation where each dipole m acted on its own. This makes water a very good solvent for ionic substances (salts) held together by the coulombic force of attraction of the opposite charges of their constituent ions. This force of attraction is inversely proportional to the relative permittivity of the medium in which the ions are embedded. A sodium chloride crystal has a bulk relative permittivity of ~6, and if it is placed in water the force of attraction between the Na+ and Cl− ions will be greatly weakened. The crystal will dissociate (dissolve) and the Na+ and Cl− ions will be dispersed as free ions in the water. The large electric fields that exist around bare ions such as Na+ and Cl− attract water dipoles, which form strongly held hydration sheaths around the ions. Nonionic substances that possess polar chemical bonds are soluble in water. Sugars, alcohol molecules and many other organic molecules readily dissolve in water because they can form hydrogen bonds with water. Nonionic organic molecules, such as benzene or lipids, that are nonpolar are unable to interact with water in this way, and so are insoluble in water.
Hydrogen bonds form because in liquid water a negatively charged oxygen atom can attract a positively charged hydrogen atom, in a neighbouring water molecule, towards one of its unbonded valence electron pairs. This forms a weak bond of strength around 20 kJ/mol (~5 kcal/mol), which is much weaker than the covalent bond between hydrogen and oxygen (~460 kJ/mol). The transient nature of hydrogen bonding between molecules of water can be depicted as shown below:
This diagram shows how the covalent and hydrogen bonds may exchange places from one instant to the next. There is thus a finite (but small) probability that the three hydrogen atoms will associate with one oxygen atom to form a hydronium ion (H3O+) leaving another oxygen atom with only one hydrogen to form a hydroxyl ion (OH−). The dissociation (ionisation) of water can therefore be written as:
The dissociation of water is sometimes written as H2O ↔ H+ + OH− to emphasise the production of protons. The positively charged hydrogen atoms (protons) of the hydronium ion attract the electronegative, oxygen, ends of the surrounding water molecules to form a stable hydrated hydronium ion, as shown in Figure 1.4.
Figure 1.4 Hydronium ion in solution, surrounded by three hydrogen-bonded water molecules.
Table 1.7 gives the electrical mobility values for various ions in water, from which we learn that the apparent rate of migration of the H3O+ ion in an electric field is significantly greater than that exhibited by Na+ and K+ ions. We have discussed how the electrostatic binding energy of the proton is so large that it has no independent existence in condensed phases such as water. In water it is generally considered to be present as hydronium, H3O+, which gives it an equivalent size between that of the hydrated sodium and potassium ions. The electrical mobility μ of an ion is defined as μ = v/E, where v is the terminal speed acquired under the influence of an electric field E. To a good approximation we can assume that the terminal speed is reached when the accelerating force (Fa = qE) is balanced by the Stokes viscous drag force. This viscous force is directly proportional to the size of the ion, and so the electrical mobility of a proton should be of the same magnitude as that of the Na+ and K+ ions.
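The Stokes-drag argument can be turned into a rough numerical estimate. In the Python sketch below (ours; the viscosity and the effective hydrated radius are assumed illustrative values, not figures from Table 1.7), balancing qE against the Stokes drag 6πηrv gives μ = q/(6πηr):

```python
import math

# Stokes-drag estimate of ionic mobility: at terminal speed, qE = 6*pi*eta*r*v,
# so mu = v / E = q / (6 * pi * eta * r).
q = 1.602e-19         # C, elementary charge
eta = 0.89e-3         # Pa s, viscosity of water at 25 C
r_hydrated = 1.8e-10  # m, assumed effective hydrated ion radius (roughly Na+)

mu = q / (6 * math.pi * eta * r_hydrated)
print(f"mu ~ {mu:.2e} m^2 V^-1 s^-1")
```

The estimate comes out at around 5 × 10^−8 m^2 V^−1 s^−1 – the right magnitude for alkali ions, but several times smaller than the measured proton mobility. That discrepancy is precisely the anomaly the Grotthuss mechanism, described next, resolves.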
Table 1.7 The electrical mobility of some ions at 25 °C in dilute aqueous solution [8].
How do we account for the anomalously high proton mobility? The accepted viewpoint is that a transport process, known as the Grotthuss mechanism, is responsible. This mechanism is named after Theodor Grotthuss [9], who suggested that electrical conduction through water resulted from the oxygen atoms simultaneously receiving and passing along a single hydrogen atom. An interesting aspect of this is that at the time of his proposal a water molecule was considered to be OH instead of H2O, and that an understanding of ions in solution (let alone hydrogen bonds) was at a very primitive level. Nevertheless, his description that throughout the conduction process ‘only the water molecules located at the tip of the conducting wires will be decomposed, whereas all those located at intermediate positions will exchange their composing principles reciprocally and alternatively, without changing their nature’ proved to be remarkably insightful. The modern version of the Grotthuss mechanism is depicted in the sequence of events below. The first step involves the injection and binding of a proton into a hydrogen-bond network:
Subsequent steps involve the localised rearrangement of protons and hydrogen bonds, followed by the release of a proton from the hydrogen-bond network:
The final step is the reorientation of the water molecules to re-establish the hydrogen-bonded water structure that existed before the injection of a proton.
A partial analogy for the Grotthuss mechanism is the operation of a bucket brigade, where the buckets move but the people do not. Proton conduction does not involve the diffusion of either the hydronium ions or the protons themselves! A better analogy is electronic conduction along a copper wire. As an electron is injected into the end of a wire at a cathode, another electron is simultaneously ejected from the other end into the anode. This mode of proton transport is of relevance to bioenergetic processes that involve proton diffusion in protein complexes and the pumping of protons across cell membranes, and is the subject of ongoing research [10].
According to the classical Brønsted-Lowry definition of acid and base, H3O+ is acidic (it can donate a H+ ion) and OH− is basic or alkaline (it can accept a H+ ion). This definition is named after J.N. Brønsted and T.M. Lowry (Danish and English physical chemists, respectively), who in 1923 independently formulated the protonic definition of acids and bases. Thus, any substance that can donate a H+ ion is termed an acid, and any substance that combines with, and thus decreases the concentration of, H+ ions is termed a base. An acid-base reaction always involves such a so-called conjugate acid-base pair – the proton donor and the proton acceptor (H3O+ and OH−, respectively, for the case of water). Water is said to be amphoteric because it can act either as acid or base. Examples of some acids and bases are given in Table 1.8.
Table 1.8 Acids and Bases.
The dissociation of water into acid and base is an equilibrium process that follows the Law of Mass Action introduced earlier. The equilibrium constant for the dissociation of water is given by:
Keq = [H3O+][OH−] / [H2O]^2 = [H+][OH−] / [H2O]    (1.4)
where the brackets denote concentrations in moles per litre. To derive the final right-hand expression of this equation, we have divided the numerator and denominator by [H2O]. The concentration of water remains virtually unaltered by its partial dissociation, since (at 25 °C) a litre of pure water contains only 1.0 × 10^−7 M of H+ and an equal number of OH− ions, whereas the concentration of water in a litre (1000 gm) of pure water is 1000 gm/L divided by the gram molecular weight (18 gm/mol) – namely 55.5 M. Thus, the concentration of water is virtually a constant and it makes no real sense to include it in Equation (1.4) as if it were a variable. Equation (1.4) can thus be simplified to:
Keq [H2O] = [H+][OH−]    (1.5)
The constant Keq can be combined with the concentration of water (55.5 M) to give a constant Kw termed the ion product of water. From Equation (1.5) at 25 °C, this is given by:

Kw = Keq [H2O] = [H+][OH−] = (10^−7)(10^−7) = 1.0 × 10^−14 M^2    (1.6)
If [H+] for some reason increases, as when an acid substance is dissolved in water, [OH−] will decrease so as to keep the product [H+][OH−] = 10^−14.
This reaction is the basis for the pH scale, which measures the concentration of H+ (actually H3O+). As shown in Table 1.9, the pH scale is logarithmic, and typically covers the [H+] range from 1 M to 10^−14 M. The term pH is shorthand for the negative logarithm of the hydrogen ion concentration. It is defined as:
pH = −log₁₀[H+]    (1.7)
Table 1.9 The pH Scale.
Thus, a 10^−3 M solution of a strong acid, such as HCl, which dissociates completely in water, has a pH of 3.0, and so on for other concentrations. A solution in which [H+] = [OH−] = 10^−7 M is neutral, with a pH of 7.0.
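The pH definition (1.7) and the ion product of water combine into a few lines of code. This Python sketch (ours, for illustration) reproduces the strong-acid example above and uses Kw to recover the hydroxide concentration:

```python
import math

KW = 1.0e-14   # ion product of water at 25 C, M^2

def pH(h_conc):
    """pH = -log10[H+], Eq. (1.7); h_conc in mol/L."""
    return -math.log10(h_conc)

def hydroxide_from_h(h_conc):
    """[OH-] follows from the ion product: [H+][OH-] = Kw."""
    return KW / h_conc

print(pH(1e-3))                 # fully dissociated 10^-3 M strong acid: pH 3
print(pH(1e-7))                 # neutral water: pH 7
print(hydroxide_from_h(1e-3))   # [OH-] is suppressed to ~1e-11 M
```

Note how acidifying the solution by four decades of [H+] automatically depresses [OH−] by the same four decades, exactly the see-saw behaviour described for Kw above.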
