* Presents a solid introduction to thermal analysis: methods, instrumentation, calibration, and applications, along with the necessary theoretical background.
* Useful to chemists, physicists, materials scientists, and engineers who are new to thermal analysis techniques, and to existing users of thermal analysis who wish to expand their experience to new techniques and applications.
* Topics covered include Differential Scanning Calorimetry and Differential Thermal Analysis (DSC/DTA), Thermogravimetry, Thermomechanical Analysis and Dilatometry, Dynamic Mechanical Analysis, Micro-Thermal Analysis, Hot Stage Microscopy, and Instrumentation.
* Written by experts in the various areas of thermal analysis.
* Relevant and detailed experiments and examples follow each chapter.
Page count: 1179
Publication year: 2014
CONTENTS
PREFACE
CHAPTER 1 INTRODUCTION
CHAPTER 2 DIFFERENTIAL SCANNING CALORIMETRY (DSC)
2.1. INTRODUCTION
2.2. ELEMENTS OF THERMODYNAMICS IN DSC
2.3. THE BASICS OF DIFFERENTIAL SCANNING CALORIMETRY
2.4. PURITY DETERMINATION OF LOW-MOLECULAR-MASS COMPOUNDS BY DSC
2.5. CALIBRATION OF DIFFERENTIAL SCANNING CALORIMETERS
2.6. MEASUREMENT OF HEAT CAPACITY
2.7. PHASE TRANSITIONS IN AMORPHOUS AND CRYSTALLINE POLYMERS
2.8. FIBERS
2.9. FILMS
2.10. THERMOSETS
2.11. DIFFERENTIAL PHOTOCALORIMETRY (DPC)
2.12. FAST-SCAN DSC
2.13. MODULATED TEMPERATURE DIFFERENTIAL SCANNING CALORIMETRY (MTDSC)
2.14. HOW TO PERFORM DSC MEASUREMENTS
2.15. INSTRUMENTATION
APPENDIX
ABBREVIATIONS
REFERENCES
CHAPTER 3 THERMOGRAVIMETRIC ANALYSIS (TGA)
3.1. INTRODUCTION
3.2. BACKGROUND PRINCIPLES AND MEASUREMENT MODES
3.3. CALIBRATION AND REFERENCE MATERIALS
3.4. MEASUREMENTS AND ANALYSES
3.5. KINETICS
3.6. SELECTED APPLICATIONS
3.7. INSTRUMENTATION
APPENDIX
ABBREVIATIONS
REFERENCES
CHAPTER 4 THERMOMECHANICAL ANALYSIS (TMA) AND THERMODILATOMETRY (TD)
4.1. INTRODUCTION
4.2. PRINCIPLES AND THEORY
4.3. INSTRUMENTAL
4.4. CALIBRATION
4.5. HOW TO PERFORM A TMA EXPERIMENT
4.6. KEY APPLICATIONS
4.7. SELECTED INDUSTRIAL APPLICATIONS (WITH DETAILS OF EXPERIMENTAL CONDITIONS)
APPENDIX
ABBREVIATIONS
REFERENCES
CHAPTER 5 DYNAMIC MECHANICAL ANALYSIS (DMA)
5.1. INTRODUCTION
5.2. CHARACTERIZATION OF VISCOELASTIC BEHAVIOR
5.3. THE RELATIONSHIP BETWEEN TIME, TEMPERATURE, AND FREQUENCY
5.4. APPLICATIONS OF DYNAMIC MECHANICAL ANALYSIS
5.5. EXAMPLES OF DMA CHARACTERIZATION FOR THERMOPLASTICS
5.6. CHARACTERISTICS OF FIBERS AND THIN FILMS
5.7. DMA CHARACTERIZATION OF CROSSLINKED POLYMERS
5.8. PRACTICAL ASPECTS OF CONDUCTING DMA EXPERIMENTS
5.9. COMMERCIAL DMA INSTRUMENTATION
APPENDIX
ABBREVIATIONS
REFERENCES
CHAPTER 6 DIELECTRIC ANALYSIS (DEA)
6.1. INTRODUCTION
6.2. THEORY AND BACKGROUND OF DIELECTRIC ANALYSIS
6.3. DIELECTRIC TECHNIQUES
6.4. PERFORMING DIELECTRIC EXPERIMENTS
6.5. TYPICAL MEASUREMENTS ON POLY(METHYL METHACRYLATE) (PMMA)
6.6. DIELECTRIC ANALYSIS OF THERMOPLASTICS
6.7. DIELECTRIC ANALYSIS OF THERMOSETS
6.8. INSTRUMENTATION
APPENDIX
ABBREVIATIONS
REFERENCES
CHAPTER 7 MICRO- AND NANOSCALE LOCAL THERMAL ANALYSIS
7.1. INTRODUCTION
7.2. THE ATOMIC FORCE MICROSCOPE
7.3. SCANNING THERMAL MICROSCOPY
7.4. THERMAL PROBE DESIGN AND SPATIAL RESOLUTION
7.5. MEASURING THERMAL CONDUCTIVITY AND THERMAL FORCE-DISTANCE CURVES
7.6. LOCAL THERMAL ANALYSIS
7.7. PERFORMING A MICRO/NANOSCALE THERMAL ANALYSIS EXPERIMENT
7.8. EXAMPLES OF MICRO/NANOSCALE THERMAL ANALYSIS APPLICATIONS
7.9. OVERVIEW OF LOCAL THERMAL ANALYSIS
ABBREVIATIONS
REFERENCES
INDEX
About the cover: Image of an optoelectronics device in the middle circle on the cover reproduced with permission from CyOptics, Inc., Breinigsville, PA.
Copyright © 2009 by John Wiley & Sons, Inc. All rights reserved
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
Thermal analysis of polymers: fundamentals and applications / edited by Joseph D. Menczel, R. Bruce Prime.
p. cm
Includes bibliographical references and index.
ISBN 978-0-471-76917-0 (cloth)
1. Polymers–Analysis. 2. Thermal analysis. I. Menczel, Joseph D. II. Prime, R. Bruce. QD139.P6.T445 2008
547’.7046—dc22
2008024101
PREFACE
This book is about thermal analysis as applied to polymers. It is organized by thermal analysis techniques and thus contains chapters on the core techniques of differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), thermomechanical analysis (TMA), and dynamic mechanical analysis (DMA). Although it can be argued that dielectric analysis (DEA) is more frequency than temperature oriented, we decided to include it because we believe it is an integral part of the thermal analysis of polymers. We also felt it necessary to include micro/nano-TA (μ/n-TA) because we believe that, with the ever-increasing ability to probe the macromolecular size scale, this field will become increasingly important in the characterization and development of new materials. Each chapter describes the basic principles of the respective techniques, calibration, how to perform an experiment, applications to polymeric materials, and instrumentation, and carries its own list of symbols, acronyms, and abbreviations. Several examples are given where thermal analysis was instrumental in solving industrial problems.
In undertaking this project we wanted to write a book that described the underlying principles of the various thermal analysis techniques in a way that could be easily understood by those new to the field but that was sufficiently comprehensive to be of value to the experienced thermal analyst looking to refresh his or her skills. We also wanted to describe the practical aspects of thermal analysis, for example, how to make proper measurements and how best to analyze and interpret the data. We wrote this book with a broad audience in mind, including all levels of thermal analysts, their supervisors, and those who teach thermal analysis. Our purpose was to create a learning tool for the practitioner of thermal analysis.
We were very fortunate to be able to assemble an international team of distinguished scientists to contribute to this book. These are truly the experts in the field and in some cases the people who invented the techniques. They are scientists and educators with the uncommon ability to explain complex principles in a manner that is thorough but still easy to comprehend. Note that all chapters have multiple authors, illustrating the collaborative nature of this undertaking. We took our jobs as editors seriously by becoming intimately involved in every chapter, and we express our appreciation to each and every contributor not only for their outstanding contributions but also for their understanding and patience with the editors.
We would like to recognize two people who have been role models for us: Professors Bernhard Wunderlich and Edith Turi. Both have been significant influences on our professional careers. As it is our hope that this book will benefit thermal analysis education, it is important to note that both Professors Wunderlich and Turi dedicated much of their professional lives to promoting and furthering education in thermal analysis. Professor Wunderlich was advisor to one of us (RBP) and post-doctoral advisor to the other (JDM) at Rensselaer Polytechnic Institute, giving us a fundamental grounding in the principles of thermal analysis and instilling a lifelong love for the subject. The roots of our understanding of the basics of thermal analysis stem from that time, and they can be noted in his novel teaching efforts and the founding of the ATHAS (Advanced Thermal Analysis System) Research Group. These efforts consisted first of audio tapes, allowing independent study, and then as technology developed, computer-based courses (novel for the time). Professor Turi taught thermal analysis to thousands of scientists and engineers during her renowned short courses at the Polytechnic Institute of New York (Brooklyn Poly) and for the American Chemical Society in addition to several national and international venues. Several of the contributors to this book cut their teeth as instructors in these short courses and/or as contributors to her classic book Thermal Characterization of Polymeric Materials (1981 and 1997).
Many people have contributed to the making of this book, and we thank them all. Special recognition goes to Larry Judovits, who not only led the collaboration on the modulated temperature DSC section but also critically reviewed much of the book. And to Harvey Bair, who contributed several personal examples of the ability of thermal analysis techniques to solve real industrial problems. We want to acknowledge those who read chapters or parts of chapters and offered many helpful comments, including Professor Sue Ann Bidstrup-Allen and Richard Siemens. We express appreciation to Professor Henning Winter for helpful discussions on measurement of the gel point. A huge thank you to our editor at Wiley, Dr. Arza Seidel, who always had the right answer to our many questions and steered us through the maze of transforming a vision into reality. One of us (RBP) would like to acknowledge my long-term collaboration with Professor James Seferis, from whom I have learned so much, and, last but not least, my wife Donna for generously contributing her graphic arts skills and for her patience and encouragement. The other one of us (JDM) would like to express his gratitude to Judit Simon, Editor-in-Chief of the Journal of Thermal Analysis and Calorimetry, who supported him so much when he entered the field of thermal analysis.
Joseph D. Menczel and R. Bruce Prime
June 2008
JOSEPH D. MENCZEL, Alcon Laboratories, Fort Worth, TX
R. BRUCE PRIME, IBM (Retired)/Consultant, San Jose, CA
PATRICK K. GALLAGHER, The Ohio State University (Emeritus), Columbus, OH
Thermal analysis (TA) comprises a family of measuring techniques that share a common feature; they measure a material’s response to being heated or cooled (or, in some cases, held isothermally). The goal is to establish a connection between temperature and specific physical properties of materials. The most popular techniques are those that are the subject of this book, namely differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), thermomechanical analysis (TMA), dynamic mechanical analysis (DMA), dielectric analysis (DEA), and micro/nano-thermal analysis (μ/n-TA).
This book deals almost exclusively with studying polymers, by far the widest application of thermal analysis. In this area, TA is used not only for measuring the actual physical properties of materials but also for clarifying their thermal and mechanical histories, for characterizing and designing processes used in their manufacture, and for estimating their lifetimes in various environments. For these reasons, thermal analysis instruments are routinely used in laboratories of the plastics industry and other industries where polymers and plastics are being manufactured or developed. Thus, thermal analysis is one of the most important research and quality control methods in the development and manufacture of polymeric materials as well as in industries that incorporate these materials into their products.
Notwithstanding its importance, educational programs in thermal analysis at universities and colleges are almost nonexistent; certainly they are not systematic. Thermal analysis training in the United States is for the most part limited to short courses, such as the short course at the annual meeting of the North American Thermal Analysis Society (NATAS) and, earlier, the short course at the Eastern Analytical Symposium. Our goal was to write a book that could be used as a text or reference to accompany thermal analysis courses and that would enable both beginners and experienced practitioners to do some self-education. This is a book for experimenters at all levels, one that addresses both the fundamentals of the thermal analysis techniques and the practical issues associated with running experiments and interpreting the results. Several examples are given where thermal analysis played a key role in solving a practical problem, and they are presented in a manner that will allow readers to apply the lessons to their own problems.
This book is organized by measuring techniques rather than by material classification. These techniques all follow the change of specific physical properties as the temperature and possibly atmosphere are controlled. Table 1.1 indicates the classification of the more common techniques by the physical property measured.
Most thermal analysis studies today are conducted with commercial instruments. Manufacturers have striven to provide complete "systems" capable of a wide range of analyses and frequently sharing modular components. Naturally, this is a market-driven phenomenon, and the current driving forces are speed, miniaturization, and automation. The goals of a modern industrial quality control facility, a state-of-the-art research institution, and a teaching laboratory are quite different, and this difference leads to a broad spectrum of available instrumentation in terms of ultimate capabilities, simplicity, and cost.
Commercial thermal analysis instrumentation is relatively new, a product of the last four decades or so. Mass production of TA instruments started in the early 1950s. From then to the 1970s, several major TA instruments were marketed, and some of them are still manufactured even today. This was the time when the two types of DSC (power compensation and heat flux) appeared, and the principle of the measurements is still the same today.
TABLE 1.1. The Most Important General Methods and Techniques of Thermal Analysis

General Method                                   | Acronym   | Property Measured
-------------------------------------------------|-----------|------------------------------
Differential Scanning Calorimetry                | DSC       | ΔT, differential power input
Differential Thermal Analysis                    | DTA       | ΔT
Thermogravimetry or Thermogravimetric Analysis   | TG or TGA | Mass
Thermomechanical Analysis, Thermodilatometry     | TMA, TD   | Length or volume
Dynamic Mechanical Analysis                      | DMA       | Viscoelastic properties
Dielectric Analysis                              | DEA       | Dielectric properties
Micro/Nano-Thermal Analysis                      | μ/n-TA    | Penetration, ΔT
A gigantic leap forward in the evolution of scientific instrumentation and data analysis took place with the advent of the digital computer. The replacement of such things as chart recorders and analog computers has dramatically improved our ability to control, measure, and evaluate experiments precisely. In thermal analysis this process introduced revolutionary changes: no longer was there a need to rerun a measurement because the sensitivity was not properly adjusted; the accuracy of the measurements increased dramatically; the time necessary for data evaluation decreased significantly; and many measurements became automated. The use of autosamplers doubled or tripled the throughput of the instruments, allowing them to run day and night. To complement the autosamplers, cooling units can be turned on and off at a preprogrammed time in the absence of the operator.
At the same time, problems developed because of certain software issues. When software is automatically capable of performing calculations, the operator often tends to be lazy and fails to learn the theoretical basis of the measurement and the calculation. Although it is certainly not the intention of any instrument manufacturer or software purveyor to deceive or mislead the user, slavish reliance on software without adequate comprehension can be dangerous. As an example, the quest for nice-looking plots can lead to excessive smoothing or dampening of the results, with the possible consequence of missing meaningful, even critical, subtle events. Another negative aspect of blind software use is the question of significant figures. Modern scientific literature is replete with insignificant figures. The ability of a computer to generate an unending series of digits is no indication or justification of their relevance. And although the manufacturers often provide the option to change the number of displayed digits, with the rush in modern laboratories, operators often fail to make this change. But eventually, it is incumbent on investigators to evaluate the results of their analyses, with regard to both the significance of the numbers and their conclusions.
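The significant-figures point is easy to make concrete. A minimal sketch in Python (the heat-of-fusion value is invented for illustration) that trims a software-reported number to a defensible number of significant figures:

```python
import math

def round_sig(x: float, sig: int) -> float:
    """Round x to the given number of significant figures."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, sig - 1 - exponent)

# Hypothetical example: the software displays a heat of fusion with far
# more digits than a typical DSC enthalpy measurement (uncertainty of a
# few percent) can justify.
reported = 28.736492  # J/g, as displayed by the software
print(round_sig(reported, 3))  # -> 28.7
```

Changing the number of displayed digits in the instrument software, as noted above, achieves the same end; the responsibility for choosing how many figures are significant still rests with the investigator.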
The number of instrument manufacturers has decreased somewhat during the last decade or so, but a significant number remain on the market, and today it seems that most of these corporations will survive. Although some are highly specialized, almost all thermal analysis instrument companies produce one or more DSCs. As mentioned, popularity has its disadvantages: DSC and TGA are the two most popular TA techniques, and these are the ones that many operators routinely use, often without the necessary theoretical knowledge. This lack of understanding leads to the absurd situation in which the essence of DSC measurements is reduced to recording a melting peak, whereas in TGA all that is looked for is the start of the mass loss. Similarly, TMA is often reduced to looking for the shift in the slope of the dimension-versus-temperature curve to measure Tg. Interestingly, experimenters who use the relaxation techniques (DMA and DEA) and μ/n-TA tend to rely more on theory and do fewer simple repetitive measurements.
Temperature in thermal analysis is the most important parameter. The strict definition of TA stipulates a programmed (i.e., time- or property-dependent) temperature. From the standpoint of instrumentation and methodology, however, isothermal measurements are included. Some applications concerning kinetics, as an example, involve a series of isothermal measurements at different temperatures or measuring an isothermal induction time to reaction. Other isothermal techniques may involve time to ignition and changes in the measured property with a changing atmosphere or force applied to the sample. Temperature is conveniently and most often measured by thermocouples, either individually or coupled in series as a thermopile to increase the sensitivity and/or to integrate the measurement over a greater volume. Some instruments use platinum resistance thermometers. Optical pyrometry has been applied in rare instances. These latter two methods are the methods of choice, depending on the specific temperature, as set forth in the definition of the International Temperature Scale. Regardless of the particular sensor used, calibration of the temperature is generally dependent on the specific technique and will be discussed with each class of instrumentation. Careful consideration must always be given when equating sensor temperature with actual sample temperature. Depending on the enthalpy of the processes or reactions occurring, thought must also be given to equating a “bulk” sample temperature with that at the interface where the actual reaction may be taking place. Such considerations are particularly important for meaningful kinetic analyses.
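The sensitivity gain from a thermopile is simply additive: N thermocouple junction pairs in series deliver N times the single-junction Seebeck voltage. A small illustration (Python; the 41 µV/K figure is the nominal type-K thermocouple sensitivity near room temperature, used here only as a plausible number):

```python
SEEBECK_TYPE_K = 41e-6  # V/K, nominal type-K sensitivity near room temperature

def thermopile_voltage(delta_t: float, n_junctions: int,
                       sensitivity: float = SEEBECK_TYPE_K) -> float:
    """Output voltage of a thermopile: N junction pairs wired in series
    multiply the single-junction Seebeck voltage by N."""
    return n_junctions * sensitivity * delta_t

# A 0.01 K temperature difference at a single junction gives ~0.4 µV;
# a 10-junction thermopile raises that to ~4 µV, easing detection.
print(thermopile_voltage(0.01, 1))   # ~4.1e-07 V
print(thermopile_voltage(0.01, 10))  # ~4.1e-06 V
```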
Thermal analysis techniques can be used in a variety of combinations. The most common combinations share the same sample as well as the same thermal environment. A key distinction is made between true simultaneous methods like TGA/DTA and TGA/DSC, in which there is no time delay between the measurements, and near-simultaneous measurements like TGA/MS and TGA/FTIR, in which there is a small time delay between the mass loss and the respective gas detector. Such combined techniques not only save time but also help to alleviate or minimize uncertainties in the comparison of results. TGA/MS and TGA/IR can be instrumental in identifying complex processes that involve mass loss. Although combined instruments are not described in this book in detail, some examples are given in the TGA chapter.
Unfortunately, the proper selection of the run parameters is often ignored in thermal analysis measurements, even though it is a critical part of the experiment. Of all the run parameters, the sample mass, the ramp rate, and the purge gas are the most important. The sample size and its physical shape play a significant role in the results. The proper sample size and the heating rate are interconnected because in several techniques faster rates require smaller samples and substantially improved conditions for rapid thermal transport between the sample and its controlled environment. Therefore, compromises are necessary between sample size and heating and cooling rates. Thermal conductivity and the flow of the atmosphere thus become significant factors. Transport conditions may change substantially with temperature as the nature of the thermal path and the relative roles of conduction, convection, and radiation are altered.
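Why faster rates demand smaller samples can be sketched with a lumped thermal model: a sample of mass m, specific heat cp, and effective heat-transfer conductance h·A lags a linear ramp of rate β by roughly β·m·cp/(h·A). The numbers below are invented for illustration and are not taken from this book:

```python
def thermal_lag(mass_mg, cp, h, area_mm2, beta):
    """Steady-state temperature lag (K) of a lumped sample behind a linear ramp.

    mass_mg  : sample mass in mg
    cp       : specific heat capacity in J/(g K)
    h        : effective heat-transfer coefficient in W/(m^2 K)
    area_mm2 : contact/exposed area in mm^2
    beta     : heating rate in K/min
    """
    tau = (mass_mg * 1e-3 * cp) / (h * area_mm2 * 1e-6)  # time constant, s
    return (beta / 60.0) * tau  # lag in kelvin

# Illustrative numbers only: the lag scales linearly with both sample
# mass (at fixed contact area) and heating rate.
for m in (10.0, 1.0):
    for beta in (10.0, 100.0):
        lag = thermal_lag(m, cp=2.0, h=100.0, area_mm2=20.0, beta=beta)
        print(f"{m:4.1f} mg at {beta:5.1f} K/min -> lag ~ {lag:.2f} K")
```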
Traditionally, simple combinations of linear heating or cooling rates and isothermal segments have been employed. Modern methods, however, frequently impose cyclic temperature programs coupled with Fourier analyses to achieve particular advantages and added information. These approaches are referred to as modulated techniques, and temperature is the most commonly modulated parameter. Note that in DMA, stress or strain is the modulated parameter and that in DEA, the electric field is modulated, but in modulated temperature DSC and modulated temperature TMA, it is the temperature that is modulated.
Often interaction of the sample with its atmosphere is important, for example, in oxidation/reduction reactions or catalytic processes. Reversible processes will be influenced by product accumulations, and hence, the ability of the flowing atmosphere to purge these volatile products becomes important. A common example of such a reversible process is dehydration or solvent removal. Clearly the degree of exposure of the sample to its atmosphere then becomes a factor. Deliberate modifications of the sample holder or compartment are made to control these interactions. Maximum exposure can be attained with the sample in a thin bed with the atmosphere flowing over or even percolating through it. Minimum exposure can be achieved using sealed sample containers or ones with a small orifice to alleviate the changes of pressure resulting from temperature changes and/or reactions with the gas phase. Diffusion of species into and out of the sample also needs to be considered. For example, in oxidative processes, diffusion of oxygen into the sample becomes important, and in mass loss processes, the volatile products need to diffuse to the surface where evaporation occurs. In these cases, sample size and shape, e.g., surface-to-volume ratios, may influence the results.
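The degree-of-exposure argument can be quantified with elementary geometry: for the same volume of sample, a thin film exposes far more surface than a compact sphere. A quick comparison (Python; the dimensions are arbitrary illustrative choices):

```python
import math

def sphere_sv(volume_mm3: float) -> float:
    """Surface-to-volume ratio (1/mm) of a sphere of the given volume."""
    r = (3.0 * volume_mm3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 3.0 / r  # S/V of a sphere is 3/r

def film_sv(thickness_mm: float) -> float:
    """S/V (1/mm) of a wide thin film, counting only the two large faces."""
    return 2.0 / thickness_mm

v = 1.0  # mm^3 of sample, an arbitrary illustrative volume
print(f"sphere:       S/V ~ {sphere_sv(v):.1f} per mm")   # ~4.8 per mm
print(f"0.1 mm film:  S/V = {film_sv(0.1):.1f} per mm")   # 20 per mm
```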
The above considerations all dictate that the sample size and form can be a significant factor in the effort to achieve the desired analysis and its reliability. The ability to impose a rapidly changing thermal environment on the sample may be necessary to simulate the true process conditions properly or simply to obtain the results more quickly. As mentioned, this necessity dictates a small sample in order to follow the temperature program, but this in turn demands a representative sample, which may be difficult or impossible to achieve with a very small sample of some materials. Composites, blends, and naturally occurring materials may lack the necessary homogeneity. Reproducibility of the measurements, or of other analytical data, is needed to assure that the sample is indeed representative.
Even though at fast heating rates an acceptable temperature gradient in the sample can be maintained by reducing the sample size, most thermal analysis techniques involve time-dependent phenomena: different physical processes may, and will, take place at different heating rates, as demonstrated in Chapters 2 and 3 (DSC and TGA). Thus the selection of a proper ramp rate is important, and changing the sample size alone cannot compensate for all the variabilities in the sample.
A word about mass is appropriate. The International Union of Pure and Applied Chemistry (IUPAC) and the International Confederation of Thermal Analysis and Calorimetry (ICTAC) have determined that the property measured by TGA should be referred to as “mass” and not as “weight.” Although some figures are reproduced with their original ordinates, which may be weight or weight percent, we adhere to this terminology and refer to sample mass, mass percent, and mass loss. It is still correct to refer to the weighing of samples and to standard reference weights.
JOSEPH D. MENCZEL, Alcon Laboratories, Fort Worth, Texas
LAWRENCE JUDOVITS, Arkema, King of Prussia, Pennsylvania
R. BRUCE PRIME, IBM (Retired)/Consultant, San Jose, California
HARVEY E. BAIR, Bell Laboratories (Retired)/Consultant, Newton, New Jersey
MIKE READING, University of East Anglia, Norwich, United Kingdom
STEVEN SWIER, Dow Corning Corporation, Midland, Michigan
Differential scanning calorimetry (DSC) is the most popular thermal analysis technique, the “workhorse” of thermal analysis. This is a relatively new technique; its name has existed since 1963, when Perkin-Elmer marketed their DSC-1, the first DSC. The term DSC simply implies that during a linear temperature ramp, quantitative calorimetric information can be obtained on the sample. According to the ASTM standard E473, DSC is a technique in which the heat flow rate difference into a substance and a reference is measured as a function of temperature, while the sample is subjected to a controlled temperature program. As will be seen from this chapter, the expression “DSC” refers to two similar but somewhat different thermal analysis techniques. It is a common feature of these techniques that the various characteristic temperatures, the heat capacity, the melting and crystallization temperatures, and the heat of fusion, as well as the various thermal parameters of chemical reactions, can be determined at constant heating or cooling rates. It is important to note that the acronym DSC has two meanings: (1) an abbreviation of the technique (i.e., differential scanning calorimetry) and (2) the measuring device (differential scanning calorimeter).
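The phrase "quantitative calorimetric information" rests on the steady-scan relation Φ = m·cp·β, where Φ is the (baseline-corrected) heat flow into the sample, m the sample mass, and β the heating rate; inverting it yields the specific heat capacity. A minimal sketch (Python; the numerical values are invented for illustration):

```python
def specific_heat(heat_flow_mw: float, mass_mg: float,
                  beta_k_per_min: float) -> float:
    """Apparent specific heat capacity cp = Phi / (m * beta).

    heat_flow_mw   : baseline-corrected heat flow into the sample, mW
    mass_mg        : sample mass, mg
    beta_k_per_min : heating rate, K/min
    Returns cp in J/(g K).
    """
    phi = heat_flow_mw * 1e-3      # convert mW to W (J/s)
    mass = mass_mg * 1e-3          # convert mg to g
    beta = beta_k_per_min / 60.0   # convert K/min to K/s
    return phi / (mass * beta)

# Invented example: a 10 mg sample on a 10 K/min ramp absorbing 3.3 mW.
print(f"{specific_heat(3.3, 10.0, 10.0):.2f} J/(g K)")  # prints 1.98
```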
Since the 1960s the application of DSC has grown considerably, and today the number of publications that report DSC measurements must amount to more than 100,000 annually.
One of these techniques, the one that brought the name DSC into science and is today called power compensation DSC, was created by Gray and O'Neil at the Perkin-Elmer Corporation in 1963. The other technique grew out of differential thermal analysis (DTA) and is called heat flux DSC. Differential thermal analysis itself originates from the works of Le Chatelier (1887), Roberts-Austen (1899), and Kurnakov (1904) (see Wunderlich, 1990). It needs to be emphasized that both of these techniques give similar results, but of course, each has its advantages and disadvantages.
The major applications of the DSC technique are in the polymer and pharmaceutical fields, but inorganic and organic chemistry have also benefited significantly from the existence of DSC. Among the applications of DSC we need to mention the easy and fast determination of the glass transition temperature, the heat capacity jump at the glass transition, melting and crystallization temperatures, heat of fusion, heat of reactions, very fast purity determination, fast heat capacity measurements, characterization of thermosets, and measurements of liquid crystal transitions. Kinetic evaluation of chemical reactions, such as cure, thermal and thermooxidative degradation is often possible. Also, the kinetics of polymer crystallization can be evaluated. Lately, among the newest users of DSC we can list the food industry and biotechnology. Sometimes, specific DSC instruments are developed for these consumers.
DSC is extremely useful when only a limited amount of sample is available, since only milligram quantities are needed for the measurements. As time goes by, newer and newer techniques are introduced within DSC itself, like pressure DSC, fast-scan DSC, and more recently modulated temperature DSC (MTDSC). Also, with the development of powerful mechanical cooling accessories, low-temperature measurements are common these days. DSC helps to follow processing conditions, since it is relatively easy to fingerprint the thermal and mechanical history of polymers. Although computerization enormously accelerated the development of DSC, it has its own negatives: many operators tend to use the software without first understanding the basic principles of the measurements. Nevertheless, newer and more powerful software products have been marketed that increase the productivity of the thermal analyst by significantly reducing the time for calculation of experimental results and, sometimes, for interpretation of the data. It is unfortunate that few of these software applications are open to modification by research personnel for use under special conditions.
Today DSC is a routine technique; a DSC instrument can be found in virtually every chemical characterization laboratory, since the instruments are relatively inexpensive. Unfortunately, this has its drawback: it is a popular misconception that if you recorded a DSC peak, you did your job. In this chapter we would like to show, on the one hand, that this is not true, and on the other hand that, despite this, DSC is still a simple and easily applied technique. Our goal is to present a simple but consistent picture of the present state of this measuring technique.
Thermodynamics studies two forms of energy transfer: heat and work. Heat can be defined as the transfer of energy caused by a difference in temperature between two systems. Heat is transferred spontaneously from hot to cold systems. It is an extensive thermodynamic quantity, meaning that its value is proportional to the mass of the system. The SI (Système International d'Unités) unit of heat is the joule (J); the earlier unit, the calorie, is no longer in use.
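Because older literature still reports heats in calories, converting legacy values to SI uses the fixed factor 1 cal = 4.184 J (the thermochemical calorie). A one-line helper for illustration:

```python
CAL_TO_J = 4.184  # thermochemical calorie, fixed by definition

def cal_per_g_to_j_per_g(value_cal_per_g: float) -> float:
    """Convert a legacy heat value from cal/g to the SI unit J/g."""
    return value_cal_per_g * CAL_TO_J

# e.g. an older table listing a heat of fusion of 10 cal/g:
print(f"{cal_per_g_to_j_per_g(10.0):.2f} J/g")  # prints 41.84 J/g
```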
The goal of thermodynamics is to establish basic functions of state, the most important of which (for differential scanning calorimetry) are U, internal energy; H, enthalpy; p, pressure; V, volume; S, entropy; and Cp, heat capacity at constant pressure.
In thermodynamics, the description of reversible processes is the most widely used. This is called equilibrium thermodynamics, because it deals with equilibrium systems. Nonequilibrium thermodynamics, which deals with irreversible processes and thus has time as an additional variable beyond the basic parameters of state, exists, but is rarely used by chemists. It was Onsager and Prigogine [see, e.g., Onsager (1931a,b), Prigogine (1945, 1954, 1967), and Prigogine and Mayer (1955)] who did the most for the development of this branch of science. We will not describe nonequilibrium thermodynamics in this book, but simply mention that the whole nonequilibrium system can be subdivided into small subsystems that are each in equilibrium, the whole system then being described as the sum of these subsystems. The interested reader can find more information in the book by de Groot and Mazur (1962).
As mentioned above, equilibrium thermodynamics deals with reversible processes, and is based on the following four laws of thermodynamics, which are empirical rather than theoretically deduced laws: the zeroth law (two systems each in thermal equilibrium with a third are in thermal equilibrium with each other), the first law (energy is conserved; the internal energy of a system changes only by the heat and work exchanged with its surroundings), the second law (the entropy of an isolated system cannot decrease), and the third law (the entropy of a perfect crystal approaches zero as the temperature approaches 0 K).
So, the zeroth law says that there is a game (the heat-to-work conversion game), and that you have to play it. The first law says you cannot win; at best, you can only break even. But according to the second law, you can break even only at 0 K. And the third law says you can never reach 0 K (Moore 1972; Wunderlich 2005).
The most important functions of state used in DSC are described in the following subsections.
Temperature is the most important quantity in differential scanning calorimetry; in essence, it is the only quantity a DSC measures. Everything else is calculated from changes of temperature, in particular from the difference between the sample and reference temperatures. We can define temperature as a primary thermodynamic parameter of a system, a measure of the average kinetic energy of the atoms or molecules of the system. In everyday language, we use the words "hot," "warm," and "cold" to characterize the temperature of materials and bodies.
Temperature can be defined for equilibrium systems only, in which the velocities of the particles are described by the Boltzmann distribution. Temperature controls the flow of heat between two thermodynamic systems.
There are two laws of thermodynamics that help define “temperature” as a parameter of the system.
The zeroth law of thermodynamics states that if systems A and B are separately in thermal equilibrium with system C, then they are in thermal equilibrium with each other as well. Since all these systems are in thermal equilibrium with each other, some thermodynamic parameter must exist that has the same value in all of them. This parameter is called temperature. In other words, in the state of equilibrium, all thermodynamic systems have an intensive variable of state called the temperature.
The second law of thermodynamics helps define temperature mathematically as
T = (∂U/∂S)V  (2.2)
in other words, temperature is the rate of increase of internal energy of the system with increasing entropy.
There are three temperature scales in use: the Celsius scale, the Fahrenheit scale, and the thermodynamic (Kelvin) scale.
Temperatures throughout the Universe vary widely. To show this, here are several important temperature values: (1) the average temperature of the Universe is ~−270°C; (2) the temperature in the core of the Sun is ~15 million °C; and (3) several temperatures are important in thermal analysis: the melting point of indium is 156.60°C; that of tin, 231.93°C; that of lead, 327.47°C; and that of mercury, −38.8°C.
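The relationships among the common temperature scales are simple linear conversions. As a quick illustration (the helper functions below are hypothetical, written only for this example), the fixed points above can be expressed on all three scales:

```python
# Hypothetical helper functions for converting among the Celsius,
# Kelvin, and Fahrenheit temperature scales.

def celsius_to_kelvin(t_c):
    return t_c + 273.15

def celsius_to_fahrenheit(t_c):
    return t_c * 9.0 / 5.0 + 32.0

# Reference melting points mentioned in the text (degrees Celsius)
melting_points = {"indium": 156.60, "tin": 231.93, "lead": 327.47, "mercury": -38.8}

for name, t_c in melting_points.items():
    print(f"{name}: {t_c} °C = {celsius_to_kelvin(t_c):.2f} K "
          f"= {celsius_to_fahrenheit(t_c):.2f} °F")
```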
Temperature is measured with thermometers. The first temperature-measuring device was the thermoscope, a forerunner of the liquid thermometer invented by Galileo in the sixteenth century. Today there are gas thermometers, liquid thermometers, infrared thermometers, liquid crystal (LC, cholesteric) thermometers, thermocouples, and resistance thermometers (the last two being the most important in thermal analysis).
Heat is a form of energy that, in spontaneous processes, flows from a higher-temperature body to a lower-temperature body (the second law of thermodynamics). Heat flow can thus be defined as a process in which two thermodynamic systems exchange energy. The flow of heat continues until the temperatures of the two systems or bodies become equal; this state is called thermal equilibrium.
For infinitesimal processes Eq. (2.1) can be rewritten as
dU = δQ + δW  (2.3)
(the quantities δQ and δW are not exact differentials, because Q and W are not, in general, functions of state; heat (Q) becomes a function of state only for reversible processes).
In the case when only volume work takes place, one obtains
δW = −p dV  (2.4)
so
dU = δQ − p dV  (2.5)
and
δQ = dU + p dV  (2.6)
which can be rewritten as
Q = ΔU + ∫ p dV  (2.7)
This equation states that if a process occurs at constant volume, and no other work is taken into account, then
ΔU = QV  (2.8)
Thus, under these conditions the change in internal energy equals the amount of heat added to or extracted from the system, and heat (Q) becomes a function of state.
We need to mention here the flow of heat, which is especially important in calorimetry. There are three major forms of heat flow: conduction (transfer through direct contact, without bulk motion of the material), convection (transfer by the bulk motion of a fluid), and thermal radiation (transfer by electromagnetic waves, requiring no medium).
Latent ("hidden") heat is the amount of heat absorbed or released by a material during a phase transition (it is called "latent" because the temperature of the material does not change during the transition despite the absorption or release of heat). This term is used less frequently now; the current term is heat of transition.
Equation (2.8) indicates that in thermodynamics the internal energy is used as a function of state to characterize the system at constant volume, when no other work is performed on or by the system. But the majority of real processes, especially for polymers, take place at constant pressure, because solids and liquids (the only physical states relevant for polymers) are virtually incompressible. For processes taking place at constant pressure, Gibbs introduced a new function of state, the enthalpy H:
H = U + pV  (2.9)
where p is pressure and V is volume.
Thus, enthalpy as a function of state is similar to internal energy, but it contains a correction for the volume work. However, the change of volume with temperature for solids and liquids is small; therefore the difference between enthalpy and internal energy is also small. Enthalpy is especially useful for processes taking place at constant pressure. For such processes
dH = dU + p dV  (2.10)
and
ΔH = Qp  (2.11)
Therefore, the enthalpy increase of the system in equilibrium processes is identical to the heat added to the system.
In practice, enthalpy differences are generally used. The total absolute enthalpy of the sample cannot be directly measured, but it can be calculated if the heat capacity in the whole temperature range (from absolute zero) is known.
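As a sketch of that calculation, an enthalpy difference between two temperatures follows from numerically integrating the heat capacity. The Cp(T) function below is an arbitrary illustrative expression, not data for any real material:

```python
# Sketch: enthalpy difference from heat capacity, ΔH = ∫ Cp(T) dT,
# evaluated with the trapezoidal rule. The Cp(T) function is an
# arbitrary illustrative choice, not data for a real material.

def enthalpy_change(cp, t1, t2, n=10000):
    """Trapezoidal integration of cp(T) from t1 to t2 (J/g if cp is in J/(g·K))."""
    dt = (t2 - t1) / n
    total = 0.5 * (cp(t1) + cp(t2))
    for i in range(1, n):
        total += cp(t1 + i * dt)
    return total * dt

cp = lambda T: 1.0 + 1.0e-3 * T          # illustrative Cp in J/(g·K), T in K
dH = enthalpy_change(cp, 300.0, 400.0)   # J/g
print(f"ΔH(300 → 400 K) ≈ {dH:.1f} J/g")
```

For this linear Cp the analytic result is 135 J/g, which the trapezoidal rule reproduces exactly.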
In DSC, the enthalpy change is calculated from the temperature difference between the sample and the reference. In endothermic processes (processes with energy absorption, such as melting or evaporation), the enthalpy of the system increases, while in exothermic processes (condensation, crystallization) the enthalpy (and the internal energy) of the system decreases.
Similar to the SI unit for heat, the SI unit for enthalpy is J (joule). Calories are no longer used.
Entropy is probably the most important function of state. Every aspect of our lives is governed by entropy; it is the function of state characterizing the disorder of the system.
Entropy was introduced by Clausius in 1865 (Clausius 1865):
dS = δQrev/T  (2.12)
so
dS ≥ δQ/T  (2.13)
where the greater-than (>) part of the sign refers to irreversible processes, while the equals part refers to reversible processes. Therefore, in a reversible process that proceeds at constant temperature from state A to state B, we have
ΔS = SB − SA = Qrev/T  (2.14)
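A phase transition at its transition temperature is just such a reversible isothermal process, so the entropy of fusion follows directly from the heat of fusion. A worked example for indium, taking the commonly quoted literature value of ≈28.6 J/g for its heat of fusion (an assumed figure, used here only for the arithmetic):

```python
# Entropy of a reversible isothermal transition: ΔS = Q_rev/T = ΔHf/Tm.
# Worked example for indium, with an assumed literature value for ΔHf.

dH_f = 28.6            # J/g, heat of fusion of indium (assumed literature value)
T_m = 156.60 + 273.15  # K, melting point of indium

dS = dH_f / T_m
print(f"ΔS_fusion(In) ≈ {dS:.4f} J/(g·K)")
```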
For cyclic processes, according to the second law of thermodynamics
∮ δQ/T ≤ 0  (2.15)
where the “equal” part of the ≤ sign refers to the reversible processes, and the “less than” part refers to irreversible processes.
From Eq. (2.13) it follows that in an isolated system (where δQ = 0) irreversible processes can proceed spontaneously only if the entropy of the system increases:
ΔS > 0  (2.16)
Two more functions of state play an extremely important role in thermodynamics: the Helmholtz free energy (F) and the Gibbs free energy, or, as it is often called, the free enthalpy (G). These functions indicate what part of the internal energy or enthalpy can be converted into work at constant temperature and volume or pressure, respectively:
F = U − TS  (2.17)
G = H − TS  (2.18)
As can be seen, the usable portion of the internal energy and enthalpy decreases with increasing temperature and increasing entropy.
Extremely important functions of state in differential scanning calorimetry are the heat capacities at constant pressure (Cp) and at constant volume (CV), because in the absence of chemical reactions or phase transitions, the amplitude of the DSC curve is proportional to the heat capacity of the sample at constant pressure.
Heat capacity indicates how much heat is needed to increase the temperature of a sample by 1 K. The heat capacity of a unit mass of material is called the specific heat capacity. The SI units of heat capacity are J/(mol·K) (molar) and J/(kg·K) (specific). There are two major heat capacities:
Heat capacity at constant volume:
CV = (∂U/∂T)V  (2.19)
Heat capacity at constant pressure:
Cp = (∂H/∂T)p  (2.20)
Differential scanning calorimetry always determines Cp, because it is impossible to keep the samples at constant volume when temperature changes. When necessary, Cv can be calculated from Cp using one of the following relationships:
Cp − CV = T(∂V/∂T)p(∂p/∂T)V  (2.21)
Equation (2.21) can be modified to the following equation
Cp − CV = TVγ²/βT  (2.22)
where V is the volume, γ is the coefficient of volumetric thermal expansion, and βT is the isothermal compressibility (the reciprocal of the bulk modulus) (Wunderlich 1997a).
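A quick numeric check of this relationship, using approximate room-temperature values for copper (assumed handbook figures, quoted here only for illustration):

```python
# Sketch of Cp − Cv = T·V·γ²/βT, with approximate room-temperature
# values for copper (assumed handbook figures, for illustration only).

T      = 300.0      # K
V      = 7.11e-6    # molar volume of copper, m³/mol (assumed)
gamma  = 4.95e-5    # volumetric thermal expansion coefficient, 1/K (assumed)
beta_T = 7.3e-12    # isothermal compressibility, 1/Pa (assumed)

cp_minus_cv = T * V * gamma**2 / beta_T
print(f"Cp − Cv ≈ {cp_minus_cv:.2f} J/(mol·K)")
```

The result, roughly 0.7 J/(mol·K), is small compared with the molar heat capacity of copper (≈24 J/(mol·K)), which is why the distinction between Cp and CV matters little for solids.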
The melting point (Tm) is the temperature at which a crystalline solid changes to an isotropic liquid. From a DSC curve, the melting point of a low-molecular-mass, high-purity substance can be determined as the point of intersection of the leading edge of the melting peak with the extrapolated baseline (see Section 2.6 of this chapter). This determination is not suitable for low-molecular-mass substances of low purity or for semicrystalline polymers. In both cases the melting range is somewhat broad, and for semicrystalline polymers it is often extremely broad. In such cases, the melting point is determined as the last, highest-temperature point of the melting endotherm, because this is the temperature at which the most perfect crystallites melt. The melting point determined in this way can also be correlated with the melting point determined by polarized-light optical microscopy. In addition to the melting point of semicrystalline polymers, the peak temperature of melting (Tmp) is often reported as well; for polymers, this temperature corresponds to the maximum rate of the melting process.
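The extrapolated-onset construction described above can be sketched numerically: fit a straight line to the pre-transition baseline, fit another to the leading edge of the peak, and take their intersection. The data below are synthetic (an idealized, indium-like peak), not a real measurement:

```python
# Sketch of the extrapolated-onset construction: intersect a line fitted
# to the baseline with a line fitted to the leading edge of the melting
# peak. The heat-flow curve below is synthetic, not measured data.

def linfit(xs, ys):
    """Least-squares straight line y = m*x + c; returns (m, c)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Synthetic curve: slowly drifting baseline plus a steep linear leading edge
T = [150.0 + 0.02 * i for i in range(501)]                  # 150-160 °C
q = [0.05 + 0.001 * t + (8.0 * (t - 156.6) if t >= 156.6 else 0.0) for t in T]

mb, cb = linfit([t for t in T if t < 155.0],
                [y for t, y in zip(T, q) if t < 155.0])      # baseline fit
me, ce = linfit([t for t in T if 157.0 <= t <= 158.0],
                [y for t, y in zip(T, q) if 157.0 <= t <= 158.0])  # edge fit

T_onset = (cb - ce) / (me - mb)   # intersection of the two fitted lines
print(f"extrapolated onset ≈ {T_onset:.2f} °C")
```

For this idealized curve the construction recovers the built-in onset of 156.6 °C; with real, noisy data the fit windows must be chosen with care.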
The heat of fusion (ΔHf) is the amount of heat that has to be supplied to 1 g of a substance to change it from a crystalline solid to an isotropic liquid.
The equilibrium melting point of a crystalline polymer is the lowest temperature at which macroscopic equilibrium crystals completely melt (Prime and Wunderlich 1969; Prime et al. 1969).
The heat of fusion of a 100% crystalline polymer, sometimes called the equilibrium heat of fusion, is the heat of fusion of the equilibrium polymeric crystals at the equilibrium melting point (the heat of fusion of a 100% crystalline polymer depends somewhat on the melting temperature, which is why it is quoted at the equilibrium melting point).
The crystallization temperature [often called the freezing point (Tc)] is the temperature at which an isotropic liquid becomes a crystalline solid during cooling. As a result of supercooling, the freezing point is almost always lower than the melting point. For low-molecular-mass, pure substances the freezing point is determined as the point of intersection of the leading edge of the crystallization exotherm with the extrapolated baseline. For semicrystalline polymers, the crystallization temperature is the highest temperature of the crystallization exotherm (designated as Tc0). When reporting DSC data, in addition to the freezing point, often the peak temperature of crystallization (indicating the highest rate of crystallization) is also reported (Tcp). Since usually both the melting and crystallization of semicrystalline polymers are far from equilibrium, the heating and cooling rates should be given when reporting data.
The glass transition temperature (Tg) is the temperature above which long-range translational motion of the polymer chain segments becomes active. At this temperature (on heating) the glassy state changes into the rubbery or molten state. Below Tg, the translational motion of the segments is frozen; only vibrational motion is active. Formally, the glass transition resembles a thermodynamic second-order transition because there is a heat capacity jump at the glass transition temperature. But this heat capacity increase does not take place at a single, definite temperature, as equilibrium thermodynamics would require, but rather over a temperature range. Therefore the glass transition is a kinetic transition. When the glass transition temperature is determined by a relaxational technique (DMA, DEA), it is often called the temperature of the α relaxation or α dispersion (or the β relaxation or β dispersion if a crystalline relaxation exists at higher temperatures).
As previously mentioned in Section 2.1, ASTM standard E473 defines differential scanning calorimetry (DSC) as a technique in which the heat flow rate difference into a substance and a reference is measured as a function of temperature while the substance and reference are subjected to a controlled temperature program. It should be noted that the same abbreviation, DSC, is used to denote both the technique (differential scanning calorimetry) and the instrument performing the measurements (differential scanning calorimeter).
As Wunderlich (1990) mentioned, no heat flow meter exists that could directly measure the heat flowing into or out of the sample, so other, indirect techniques must be used to measure the heat. Differential scanning calorimetry is one of these techniques; it uses the temperature difference that develops between the sample and a reference to calculate the heat flow. An exotherm indicates heat flowing out of the sample, while an endotherm indicates heat flowing in.
Two types of DSC instruments exist: heat flux and power compensation. Historically, heat flux DSC evolved from differential thermal analysis (DTA). The basic design of a DTA consists of a furnace adjoined to separate sample and reference holders. A programmer heats the furnace containing the sample and reference holders at a linear heating rate, and the signals from the DTA sensors, usually thermocouples, are fed to an amplifier. Unlike DSC, in DTA the sample and reference holders hold the sample and the reference material directly, without any additional packing (in DSC, this packing is provided by the sample and reference pans); the sample holder is loaded with the sample, while the reference holder is filled with an inert reference material such as aluminum oxide. Since the sample and the reference are heated from the outside, the DTA response is susceptible to heat transport effects through the sample because of the usually large sample size (up to several grams in older DTAs). Such factors include the amount, packing, and thermal conductivity of the sample. These problems are reduced in DSC, where the sample is separated from direct contact with the sensor and encapsulated in a pan made of a high-thermal-conductivity material, typically high-purity aluminum, although other metals, such as copper, gold, or platinum, can also be used. A normally empty sample pan is used as a reference. Newer DTAs include sensors that are separated from the sample and lie outside the container; however, the sample is still packed directly into the holder. An example is the 1600°C DTA attachment to the TA Instruments 2920 module, in which the sample is placed in platinum sample containers (TA Instruments 1993).
Since the sample is heated from one specific source (usually from outside), potentially significant temperature gradients exist within the sample. An important task in thermal analysis is to create conditions under which the temperature gradients within the sample are minimized. A temperature gradient is an unequal distribution of temperature within the sample; it depends on the heating rate, the sample size, and the thermal diffusivity of the sample and the sample holder. Thermal diffusivity (m²/s) is defined as the ratio of the thermal conductivity λ [W/(m·K)] to the volumetric heat capacity [J/(m³·K)]:
a = λ/(ρCp)  (2.23)
where ρCp (the product of the density and the specific heat capacity) is the volumetric heat capacity.
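To put numbers on this ratio, here is a short sketch comparing a metal with a polymer; the material property values are assumed, approximate handbook figures used only for illustration:

```python
# Thermal diffusivity a = λ/(ρ·Cp). The values for aluminum and a
# polyethylene-like polymer are assumed, approximate handbook figures.

def thermal_diffusivity(k, rho, cp):
    """k in W/(m·K), rho in kg/m³, cp in J/(kg·K); returns a in m²/s."""
    return k / (rho * cp)

a_al = thermal_diffusivity(237.0, 2700.0, 900.0)   # aluminum (assumed values)
a_pe = thermal_diffusivity(0.4, 950.0, 1900.0)     # polyethylene-like polymer

print(f"aluminum: {a_al:.2e} m²/s, polymer: {a_pe:.2e} m²/s")
```

The metal's diffusivity is several hundred times larger than the polymer's, which is why metal sample holders and pans equalize temperature quickly while polymeric samples themselves remain the bottleneck.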
This means that sample holders with high thermal diffusivity are desirable in DSC and DTA, because they conduct heat rapidly. Since the design of the DSC cell is fixed and has been optimized by the manufacturer, one can minimize the temperature gradient only by selecting an appropriate sample size and heating rate. The temperature gradient within a sample can be calculated from the heating rate and the sample thickness (Wu et al. 1988); it increases with increasing heating rate and sample thickness.
The temperature gradient should not be confused with thermal lag, which should also be minimized in DSC experiments. Thermal lag is the difference between the average sample temperature and the sensor temperature; it is caused by thermal resistance, which characterizes the ability of a material to hinder the flow of heat. Thermal lag is smaller in DSC than in DTA because of the smaller sample size (milligrams in DSC), but more types of thermal resistance develop in DSC than in DTA. These additional resistances are caused by the introduction of the sample and reference pans into the DSC sample and reference holders. In DTA, thermal resistance develops between the sample holder (in some instruments called the sample pod) and the sample (and, analogously, between the reference holder and the reference material), and within the sample and reference materials themselves. In DSC, on the other hand, thermal resistance develops between the sample holder and the bottom of the sample pan, between the bottom of the sample pan and the sample (these are the external thermal resistances), and within the sample itself (the internal thermal resistance). These thermal resistances must be taken into account, since they determine the thermal lag. Let us suppose that the cell is symmetric with respect to the sample and reference pods or holders, the instrumental thermal resistances are identical for the sample and reference holders, the contact between the pans and the pods is intimate, no crosstalk exists between the sample and reference sensors (i.e., the electrical signals from the sensors do not influence each other), and the temperature distribution in the sample (and reference) is uniform (i.e., there is no temperature gradient in the sample). In such a case, in steady state, the sum of all the thermal lags described can be expressed by the following equation:
Φ = ΔTsbl/R = (λA/ΔX)ΔTsbl  (2.24)
where Φ is the heat flow rate, R is the thermal resistance (including both internal and external contributions), λ is the thermal conductivity, A is the contact area, ΔTsbl is the temperature difference between the sample and the block (which is the heat sink), and ΔX is the linear heat conduction pathway (Hemminger and Höhne 1984). The thermal resistance can be calculated from the slope of the leading edge of the melting peak of a pure, low-molecular-mass substance such as indium. The directional heat pathway formed by the temperature difference is referred to as the heat leak. We note here that in differential scanning calorimetry, steady state is reached when, in nonisothermal mode (i.e., during heating or cooling), the ΔT (= Ts − Tr) signal reaches an essentially constant value; in steady state the ΔT signal may still change slightly because of the slight increase of the heat capacity of the sample with temperature.
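A short numeric sketch of this steady-state relation, with the thermal resistance built from the conduction-path geometry; all of the dimensions and property values below are invented for illustration and do not describe any real DSC cell:

```python
# Steady-state heat flow through a conductive leak: R = ΔX/(λ·A) and
# Φ = ΔTsbl/R. All values below are invented, for illustration only.

dX = 1.0e-3        # m, heat conduction path length (assumed)
lam = 22.0         # W/(m·K), conductivity of a constantan-like alloy (assumed)
A = 2.0e-5         # m², contact area (assumed)
dT_sbl = 2.0       # K, sample-to-block temperature difference (assumed)

R = dX / (lam * A)   # thermal resistance, K/W
phi = dT_sbl / R     # heat flow rate, W
print(f"R ≈ {R:.3f} K/W, Φ ≈ {phi:.3f} W")
```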
If the cell is not symmetric or the thermal resistances are not identical, the contributions measured from the sample and reference sensors will be unequal, which manifests itself in a nonlinear (curved) baseline. This can happen if the cell is not machined for exact symmetry, or even if sample and reference pans of unequal mass are used; software can compensate for these imbalances. Other influences, such as crosstalk between the sensors, will also result in unequal contributions from the sample and reference sensors. The operator can control pan contact to some extent: the pan bottoms should be crimped flat, without any deformities, and should sit steadily on the sensor. Although this is less of a problem for the reference pan, the sample pan may become deformed if a bulky material is encapsulated, such as irregularly shaped hard pieces with sharp edges. How best to encapsulate a bulky sample depends on the situation, but one can use sample pans pressed from thicker sheet, pulverize the sample into a powder, press the sample into a film, or use a specially designed crimper (such as the Tzero™ crimper from TA Instruments). The last factor considered here is temperature uniformity throughout the sample. As mentioned above, a uniform temperature distribution depends on sample size, heating rate, and packing. When preparing the sample, one should use a mass that does not produce a large temperature gradient at the heating (or cooling) rate used, and ensure good contact with the pan. Wunderlich (1990) calculated the maximum sample size for various heating rates such that the maximum temperature gradient in the sample does not exceed ±0.5°C for a disk-shaped sample with a radius of 2.5 mm. His calculations showed that at a heating rate of 10°C/min the maximum sample mass is 20 mg; at 1000°C/min, 2 mg; and at 100,000°C/min, 200 μg.
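The three mass limits quoted above are mutually consistent with an inverse-square-root dependence of allowable sample mass on heating rate. The scaling function below is an inference from those three numbers, not a formula given by Wunderlich:

```python
# Allowable sample mass vs. heating rate. The inverse-square-root scaling
# is inferred from the three quoted limits (20 mg at 10 °C/min, 2 mg at
# 1000 °C/min, 200 µg at 100,000 °C/min); it is not a stated formula.

import math

def max_mass(rate, ref_rate=10.0, ref_mass=20.0):
    """Inferred allowable sample mass (mg) at a given heating rate (°C/min)."""
    return ref_mass * math.sqrt(ref_rate / rate)

for rate in (10.0, 1000.0, 100000.0):
    print(f"{rate:>8.0f} °C/min → {max_mass(rate):.3g} mg")
```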
Heat flux DSC usually consists of a cell containing reference and sample holders separated by a bridge that acts as a heat leak, surrounded by a block that serves as a constant-temperature body (see Fig. 2.1). The block is the housing that contains the heater, sensors, and holders. The holders are raised platforms on which the sample and reference pans are placed. The heat leak permits a fast transfer of heat, allowing a reasonable time to steady state. The differential behavior of the sample and reference is used to determine the thermal properties of the sample. A temperature sensor is located at the base of each platform. Associated with the cell are a furnace and a furnace sensor. The furnace is designed to supply heating at a linear rate; however, the cooling rate during cooling experiments must also be linear. This can be accomplished by cooling the block or housing of the instrument to a low temperature, so that the heater works against a cold block, or by nebulizing a coolant into the block. Finally, an inert gas, called the purge gas, flows through the cell.
Figure 2.1. Cross section of a Du Pont (now TA Instruments) 910, 2910, and 2920 DSC heat flux cell (Blaine, Du Pont Instruments bulletin; courtesy of TA Instruments).
The operation of the heat flux DSC is based on a thermal equivalent of Ohm's law. Ohm's law states that the current equals the voltage divided by the resistance; for the thermal analog one obtains
Φ = ΔT/R  (2.25)
where Φ is the heat flow rate, ΔT is the temperature difference between the sample and reference sensors, and R is the thermal resistance of the heat leak disk. This equation can also be derived from Newton's law of cooling when the heat flow rate is substituted for the slope of the cooling curve (Wunderlich 1990). Newton's law of cooling can be written as
−dT/dt = K(T − Tsur)  (2.26)
where the slope of the cooling curve, in the absence of transitions, is equal to the temperature difference multiplied by a constant K. The negative sign indicates that this is a cooling rate; T is the temperature at any time t, and Tsur is the temperature of the heat sink (or surroundings).
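Newton's law of cooling integrates to an exponential decay toward the surroundings. The sketch below compares the analytic solution with a simple Euler integration of the same law; the rate constant and starting temperature are arbitrary illustrative values:

```python
# Newton's law of cooling, -dT/dt = K(T - Tsur): analytic solution
# vs. a simple Euler integration. K and T0 are arbitrary illustrative values.

import math

def temperature(t, T0=500.0, T_sur=300.0, K=0.05):
    """Analytic solution T(t) = T_sur + (T0 - T_sur)·exp(-K·t)."""
    return T_sur + (T0 - T_sur) * math.exp(-K * t)

# Euler integration of the same law as a cross-check
T, dt = 500.0, 0.01
for _ in range(6000):                    # 6000 steps of dt = 0.01 → t = 60
    T += -0.05 * (T - 300.0) * dt

print(f"analytic T(60) = {temperature(60.0):.2f}, Euler T(60) = {T:.2f}")
```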
The TA Instruments Q-series DSCs evolved from the 910, 2910, and 2920 modules. The DSC 910, 2910, and 2920 cells use a thermoelectric heat leak made of constantan (a copper–nickel alloy), as shown in Fig. 2.1. The sample and reference pans sit on raised platforms, or pods, with the constantan disk at their base. The temperature sensors are disk-shaped chromel/constantan "area" thermocouples and chromel/alumel thermocouples; the thermocouple disk sensors sit on the underside of each platform. The ΔT output from the sample and reference thermocouples is fed into an amplifier to increase the signal strength. The heating block is made of silver for good thermal conductivity, and it also provides some reflectivity for emissive heat.
Figure 2.2. DSC sensor assembly for the TA Instrument Q10, Q20, Q100, Q200, Q1000, and Q2000 modules. Note the three thermocouple heat flow sensor design as compared to the two thermocouple heat flow sensor design as seen in Fig. 2.1. [From Danley (2003a); reprinted with permission of Elsevier and TA Instruments.]
The TA Instruments cell for the Q series DSC utilizes three thermocouples (see Fig. 2.2) and the associated Tzero technology (Danley 2003, 2004). In addition to the sample and reference sensors, an additional center thermocouple, denoted T0 (Tzero), is utilized for the heat flow measurements. Similar to the 910, 2910, and 2920 modules, there are two raised platforms for the sample and the reference on a constantan disk, which acts as a heat leak. The sample and reference disk thermocouples are attached to the underside of each platform. Two ΔT measurements are made. The first is taken between the chromel wires that are attached to the chromel disk area detectors. In addition, ΔT0 is measured between chromel wires attached to the sample chromel disk and the T0 sensor. A chromel wire is looped between the sample chromel disk and the T0 sensor, which measures the sample temperature at the raised pod. The T0 sensor temperature is measured at the junction of the constantan and chromel wires attached at the center of the heat leak base.
The operation of the Tzero (T0
