Explores the benefits and limitations of the latest high-throughput screening methods

With its expert coverage of high-throughput in vitro screening methods for toxicity testing, this book makes it possible for researchers to accelerate and streamline the evaluation and risk assessment of chemicals and drugs for toxicity. Moreover, it enables them to comply with the latest standards set forth by the U.S. National Research Council's "Toxicity Testing in the 21st Century: A Vision and a Strategy" and the E.U.'s REACH legislation.

Readers will discover a variety of state-of-the-science, high-throughput screening methods presented by a group of leading authorities in toxicology and toxicity testing. High-Throughput Screening Methods in Toxicity Testing is divided into five parts:

* General aspects, including predicting the toxicity potential of chemicals and drugs via high-throughput bioactivity profiling
* Assessing different cytotoxicity endpoints
* Assessing DNA damage and carcinogenesis
* Assessing reproductive toxicity, cardiotoxicity, and haematotoxicity
* Assessing drug metabolism and receptor-related toxicity

Each chapter describes method principles and includes detailed information about data generation, data analysis, and applications in risk assessment. The authors not only enumerate the advantages of each high-throughput method over comparable conventional methods, but also point out each method's limitations and potential pitfalls. In addition, the authors describe current research efforts to make high-throughput toxicity screening even more cost-effective and streamlined. Throughout the book, readers will find plenty of figures and illustrations to help them understand and perform the latest high-throughput toxicity screening methods.
This book is ideal for toxicologists and other researchers who need to implement high-throughput screening methods for toxicity testing in their laboratories as well as for researchers who need to evaluate the data generated by these methods.
Page count: 1020
Year of publication: 2013
Contents
Cover
Title Page
Copyright
Preface
Contributors
Part I: General Aspects
Chapter 1: ToxCast: Predicting Toxicity Potential Through High-Throughput Bioactivity Profiling
1.1 INTRODUCTION
1.2 CHEMICAL LANDSCAPE
1.3 THE CHEMICAL LIBRARIES
1.4 THE BIOLOGICAL ASSAYS
1.5 IN VIVO TOXICITY DATABASE
1.6 PREDICTIVE MODELS
1.7 CHEMICAL PRIORITIZATION
1.8 TARGETED TESTING
1.9 CONCLUSION
DISCLAIMERS
ACKNOWLEDGMENTS
REFERENCES
Chapter 2: High-Throughput Toxicity Testing in Drug Development: Aims, Strategies, and Novel Trends
2.1 INTRODUCTION
2.2 DRUG TOXICITY FAILURE IN (PRE)CLINICAL DEVELOPMENT
2.3 STRATEGY FOR IMPLEMENTATION OF IN VITRO TOXICITY TESTING
2.4 ASSAYS FOR IN VITRO GENOTOXICITY SCREENING
2.5 ASSAYS FOR IN VITRO CYTOTOXICITY SCREENING
2.6 ASSAYS FOR NUCLEAR RECEPTOR-INDUCED ACTIVATION OF PHASE I AND II ENZYMES
2.7 ASSAYS FOR NUCLEAR STEROID RECEPTOR ACTIVATION AND ENDOCRINE DISRUPTION
2.8 ASSAYS FOR NUCLEAR RECEPTOR ACTIVATION AND NON-GENOTOXIC CARCINOGENICITY
2.9 ASSAYS FOR NUCLEAR RECEPTOR ACTIVATION AND EMBRYOTOXICITY
2.10 TOXICOGENOMICS: A DIFFERENT MULTIPLE SIZE HTS FORMAT TO STUDY MECHANISMS OF ACTION
2.11 CONCLUSIONS
REFERENCES
Chapter 3: Incorporating Human Dosimetry and Exposure Information with High-Throughput Screening Data in Chemical Toxicity Assessment
3.1 INTRODUCTION
3.2 EXPERIMENTAL APPROACH
3.3 FINDINGS
3.4 DISCUSSION
3.5 CONCLUSION
FUNDING
ACKNOWLEDGMENT
REFERENCES
Chapter 4: The Use of Human Embryonic Stem Cells in High-Throughput Toxicity Assays
4.1 INTRODUCTION
4.2 POTENTIAL ROLE OF hESCs IN HIGH-THROUGHPUT SCREENING STRATEGIES FOR TOXICITY
4.3 ENDPOINT-BASED HTS IN TOXICITY ASSAYS
ACKNOWLEDGMENTS
REFERENCES
Part II: High-Throughput Assays to Assess Different Cytotoxicity Endpoints
Chapter 5: High-Throughput Screening Assays for the Assessment of Cytotoxicity
5.1 INTRODUCTION
5.2 BASIC CONSIDERATIONS FOR MEASUREMENTS OF CELLULAR HEALTH
5.3 SINGLE-PARAMETER ASSAYS (METABOLISM- AND NONMETABOLISM-BASED)
5.4 MULTIPARAMETRIC METHODS
5.5 SUMMARY
REFERENCES
Chapter 6: High-Throughput Flow Cytometry Analysis of Apoptosis
ACKNOWLEDGMENTS
REFERENCES
Chapter 7: High Content Imaging-Based Screening for Cellular Toxicity Pathways
7.1 INTRODUCTION
7.2 AUTOMATED IMAGING: PREDEFINED UNBIASED MICROSCOPY FIT FOR HCS
7.3 END-POINT HCS ASSAYS
7.4 TIME-LAPSE MICROSCOPY OF APOPTOSIS
7.5 APPLICATION OF LIVE APOPTOSIS IMAGING TO FUNCTIONAL GENOMICS SCREENING
7.6 TIME-LAPSE MICROSCOPY OF CELL STRESS DYNAMICS
7.7 HIGH CONTENT ANALYSIS
REFERENCES
Chapter 8: The KeratinoSens Assay: A High-Throughput Screening Assay to Assess Chemical Skin Sensitization
8.1 SKIN SENSITIZATION AND THE NEED TO SCREEN NOVEL CHEMICALS
8.2 THE MECHANISTIC BACKGROUND OF THE KERATINOSENS ASSAY
8.3 THE CREATION OF THE KERATINOSENS REPORTER CELL LINE
8.4 THE STANDARD OPERATING PROCEDURE OF THE KERATINOSENS ASSAY
8.5 DATA ANALYSIS AND PREDICTION MODEL
8.6 THE INTRA- AND INTERLABORATORY REPRODUCIBILITY AND PREVALIDATION STUDIES
8.7 THE PREDICTIVITY FOR STANDARD LISTS OF REFERENCE CHEMICALS
8.8 THE APPLICABILITY DOMAIN AND LIMITATIONS OF THE ASSAYS AS DETERMINED THROUGH THE SCREENING
8.9 CASE STUDIES ON SPECIFIC CHEMICAL CLASSES
8.10 THE USE OF THE ASSAY IN AN INTEGRATED TESTING STRATEGY (ITS) FOR A COMPLETE REPLACEMENT OF ANIMAL TESTING
REFERENCES
Chapter 9: High-Throughput Screening Assays to Assess Chemical Phototoxicity
9.1 INTRODUCTION
9.2 PHOTOTOXIC PATHWAYS
9.3 SCREENING SYSTEMS FOR PHOTOSAFETY ASSESSMENT
9.4 3T3 NRU PHOTOTOXICITY TEST
9.5 ROS ASSAY
9.6 CONCLUSION AND FUTURE OUTLOOK
REFERENCES
Part III: High-Throughput Assays to Assess DNA Damage and Carcinogenesis
Chapter 10: Ames II™ and Ames Liquid Format Mutagenicity Screening Assays
10.1 INTRODUCTION
10.2 AMES II™ ASSAY
10.3 AMES LIQUID FORMAT ASSAY
REFERENCES
Chapter 11: High-Throughput Bacterial Mutagenicity Testing: Vitotox™ Assay
11.1 INTRODUCTION
11.2 PURPOSE OF THE VITOTOX TEST
11.3 PRINCIPLE OF THE VITOTOX TEST
11.4 CONSTRUCTION OF recN-luxCDABE FUSIONS
11.5 CONSTRUCTION OF pr1-luxCDABE FUSION
11.6 VITOTOX® TEST PROCEDURE
11.7 EXAMPLES OF VITOTOX TEST RESULTS
11.8 APPLICATIONS OF THE VITOTOX TEST
11.9 CONCLUSIONS
REFERENCES
Chapter 12: Genotoxicity and Carcinogenicity: Regulatory and Novel Test Methods
12.1 INTRODUCTION
12.2 GENOTOXICITY
12.3 CARCINOGENICITY
12.4 REGULATORY TESTS TO DETECT PHARMACEUTICALS WITH GENOTOXIC AND CARCINOGENIC POTENTIAL
12.5 SCREENING FOR COMPOUNDS WITH GENOTOXIC AND CARCINOGENIC POTENTIAL WITHIN THE LEAD OPTIMIZATION PHASE OF DRUG DEVELOPMENT
12.6 COMPARISON OF THE SENSITIVITY AND SPECIFICITY OF THE REGULATORY AND NOVEL HIGHER THROUGHPUT IN VITRO GENOTOXICITY ASSAY
12.7 CONCLUSION
REFERENCES
Chapter 13: High-Throughput Genotoxicity Testing: The GreenScreen Assay
REFERENCES
Chapter 14: High-Throughput Assays to Quantify the Formation of DNA Strand Breaks
14.1 SINGLE CELL GEL ELECTROPHORESIS ASSAY
14.2 FLUORIMETRIC DETECTION OF ALKALINE DNA UNWINDING
14.3 IMMUNOFLUORESCENCE STAINING OF γ-H2AX FOCI
14.4 CONCLUDING REMARKS
REFERENCES
Chapter 15: High-Throughput Versions of the Comet Assay
15.1 INTRODUCTION
15.2 DESCRIPTION OF THE DIFFERENT VERSIONS OF THE HIGH-THROUGHPUT COMET ASSAY
15.3 VALIDATION OF THE HIGH-THROUGHPUT ASSAYS
15.4 ADVANTAGES OF THE HIGH-THROUGHPUT COMET ASSAYS COMPARED TO THE CONVENTIONAL VERSION
15.5 APPLICATION AREAS
15.6 COMPARISON OF THE HIGH-THROUGHPUT COMET ASSAY AND THE HIGH-THROUGHPUT MICRONUCLEUS TEST
REFERENCES
Chapter 16: Automated Soft Agar Colony Formation Assay for the High-Throughput Screening of Malignant Cell Transformation
16.1 INTRODUCTION
16.2 96-WELL SOFT AGAR COLONY FORMATION ASSAY
16.3 384-WELL SOFT AGAR COLONY FORMATION ASSAY
16.4 AUTOMATED 96-WELL SOFT AGAR COLONY FORMATION ASSAY
REFERENCES
Chapter 17: High-Throughput Quantification of Morphologically Transformed Foci in Bhas 42 Cells (v-Ha-ras Transfected BALB/c 3T3) Using Spectrophotometry
17.1 INTRODUCTION
17.2 MATERIALS
17.3 REAGENT SETUP
17.4 PROCEDURES
17.5 OPTION: CONFIRMATION OF CORRESPONDENCE BETWEEN HIGH ABSORBANCE VALUES AND TRANSFORMED FOCI
17.6 DISCUSSION
ACKNOWLEDGMENTS
REFERENCES
Part IV: High-Throughput Assays to Assess Reproductive Toxicity, Cardiotoxicity, and Haematotoxicity
Chapter 18: ReProGlo: A New Stem-Cell-Based High-Throughput Assay to Predict the Embryotoxic Potential of Chemicals
18.1 INTRODUCTION
18.2 ESTABLISHING THE ReProGlo ASSAY PROTOCOL
18.3 PARTICIPATION IN THE ReProTect FEASIBILITY STUDY
18.4 AUTOMATION OF THE ASSAY
18.5 INTEGRATION OF A METABOLIC ACTIVATION SYSTEM
18.6 CONCLUSIONS
REFERENCES
Chapter 19: Embryonic Stem Cell Test (EST): Molecular Endpoints Toward High-Throughput Analysis of Chemical Embryotoxic Potential
19.1 INTRODUCTION
19.2 THE IMPLEMENTATION AND ASSESSMENT OF FOCUSED MOLECULAR-BASED ENDPOINTS IN THE EST
19.3 IMPLEMENTING GLOBAL MOLECULAR-BASED ENDPOINTS IN THE EST
19.4 COMPARISON OF EST TO TRADITIONAL DEVELOPMENTAL TOXICITY IN VIVO DATA
19.5 COMPARISONS ACROSS IN VITRO AND IN VIVO MODELS USING OMIC APPROACHES
19.6 CONCLUSION
REFERENCES
Chapter 20: Zebrafish Development: High-Throughput Test Systems to Assess Developmental Toxicity
20.1 INTRODUCTION
20.2 ZEBRAFISH BACKGROUND
20.3 DEVELOPMENTAL CONCORDANCE
20.4 EXAMPLES OF SCREENS AND VALIDATION APPROACHES
20.5 THE QUESTION OF BIOAVAILABILITY
ACKNOWLEDGMENTS
REFERENCES
Chapter 21: Single Cell Imaging Cytometry-Based High-Throughput Analysis of Drug-Induced Cardiotoxicity
21.1 INTRODUCTION
21.2 SINGLE CELL IMAGING CYTOMETRY
21.3 QUANTITATIVE HIGH-THROUGHPUT ANALYSIS OF DRUG-INDUCED CARDIOTOXICITY
21.4 APPLICATION OF HCS TO ANALYZE DRUG-INDUCED CARDIOTOXICITY BY USING A SINGLE CELL IMAGING CYTOMETRY SYSTEM AND FLUORESCENT INDICATORS
REFERENCES
Chapter 22: High-Throughput Screening Assays to Evaluate the Cardiotoxic Potential of Drugs
22.1 INTRODUCTION
22.2 THE MICROELECTRODE ARRAY SYSTEMS
22.3 THE USE OF CELLULAR OXYGEN UPTAKE RATES FOR MONITORING THE STATE OF CARDIOMYOCYTES
22.4 SURFACE PLASMON RESONANCE FOR TESTING OF TROPONIN
22.5 CONCLUSIONS
ACKNOWLEDGMENTS
REFERENCES
Chapter 23: High-Throughput Screening Assays to Evaluate the Hematotoxic Potential of Drugs
23.1 INTRODUCTION
23.2 HEMATOPOIESIS
23.3 COLONY FORMING UNIT ASSAYS
23.4 APPLICATIONS OF THE COLONY FORMING UNIT ASSAYS
23.5 LIQUID CULTURE ASSAYS
23.6 HIGH-THROUGHPUT METHODS FOR THE HEMATOTOXIC EVALUATION OF COMPOUNDS
23.7 THE FMCA-GM SYSTEM
23.8 THE HALO® SYSTEM
23.9 ALTERNATIVE SOURCES OF HEMATOPOIETIC CELLS
23.10 APPLICATIONS OF HIGH-THROUGHPUT ASSAYS TO ESTIMATE THE HEMATOTOXIC POTENTIAL OF DRUGS
23.11 LIMITATIONS OF IN VITRO ASSAYS TO ESTIMATE THE HEMATOTOXIC POTENTIAL OF DRUGS
23.12 SUMMARY
REFERENCES
Part V: High-Throughput Assays to Assess Drug Metabolism and Receptor-Related Toxicity
Chapter 24: High-Throughput Enzyme Biocolloid Systems for Drug Metabolism and Genotoxicity Profiling Using LC–MS/MS
24.1 INTRODUCTION TO METABOLITE-BASED TOXICITY TESTING
24.2 BIOCOLLOID REACTOR PARTICLES
24.3 HIGH-THROUGHPUT ANALYSIS WITH BIOCOLLOID REACTORS
24.4 SUMMARY AND FUTURE PROSPECTS
ACKNOWLEDGMENT
REFERENCES
Chapter 25: Higher-Throughput Screening Methods to Identify Cytochrome P450 Inhibitors and Inducers: Current Applications and Practice
25.1 INTRODUCTION
25.2 CYTOCHROME P450 INHIBITION METHODS
25.3 CYTOCHROME P450 INDUCTION METHODS
25.4 CYTOCHROME P450 INHIBITION AND INDUCTION BY THERAPEUTIC PROTEINS
25.5 CONCLUSIONS AND CONSIDERATIONS FOR FUTURE DIRECTIONS
ACKNOWLEDGMENTS
REFERENCES
Chapter 26: High-Throughput Yeast-Based Assays to Study Receptor-Mediated Toxicity
26.1 INTRODUCTION
26.2 YEAST AS A NUCLEAR RECEPTOR STUDY ORGANISM
26.3 PRINCIPLE OF NUCLEAR RECEPTOR YEAST ASSAYS
26.4 CURRENT NUCLEAR RECEPTOR YEAST ASSAYS
26.5 YEAST NUCLEAR RECEPTOR ASSAYS AND HIGH-THROUGHPUT SCREENING
REFERENCES
Chapter 27: Evaluating the Peroxisomal Phenotype in High Content Toxicity Profiling
27.1 INTRODUCTION
27.2 EVALUATING PEROXISOME PHENOTYPE IN CYTOTOXICITY TESTING
27.3 ASSAY ASSEMBLY AND CONSIDERATIONS AFTER A PRODUCTION SCREEN: HepG2 PEROXISOME BIOGENESIS ASSAY
27.4 IMAGE PROCESSING STRATEGY
27.5 ASSAY VALIDATION
27.6 IMPLEMENTATION OF NOVEL PHOTOSWITCHABLE/ PHOTOBLEACHABLE FLUORESCENT REPORTERS FOR HIGH CONTENT INVESTIGATION OF PROTEIN DYNAMICS
27.7 COMPOUND LIBRARIES FOR ASSAY DEVELOPMENT
27.8 EXPLORATORY MULTIVARIATE DATA ANALYSIS
27.9 DISCUSSION AND SUMMARY
ACKNOWLEDGMENTS
REFERENCES
Chapter 28: A Panel of Quantitative CALUX® Reporter Gene Assays for Reliable High-Throughput Toxicity Screening of Chemicals and Complex Mixtures
28.1 BACKGROUND
28.2 GENERAL APPROACH
28.3 PRINCIPLE OF CALUX® REPORTER GENE ASSAYS TO MEASURE MAJOR TOXICITY PATHWAYS
28.4 SELECTIVE REPORTER GENE ASSAYS IN THE CALUX® PANEL
28.5 GENERATION OF A PANEL OF CALUX® REPORTER GENE ASSAYS
28.6 METABOLISM
28.7 APPLICATIONS OF PANELS OF CALUX® REPORTER GENE ASSAYS
28.8 HIGH-THROUGHPUT (HTP) SCREENING
28.9 QUALITY CONTROL AND VALIDATION OF HIGH-THROUGHPUT METHODS
ACKNOWLEDGMENTS
REFERENCES
Chapter 29: DR-CALUX®: A High-Throughput Screening Assay for the Detection of Dioxin and Dioxin-Like Compounds in Food and Feed
29.1 DIOXINS
29.2 DETECTION OF DIOXINS
29.3 CONCLUSIONS AND OUTLOOK
ACKNOWLEDGMENTS
REFERENCES
Index
Copyright © 2013 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
High-throughput screening methods in toxicity testing / edited by Pablo Steinberg. pages cm Includes index. ISBN 978-1-118-06563-1 (hardback) 1. High throughput screening (Drug development) 2. Toxicity testing. I. Steinberg, Pablo, editor of compilation. RS419.5.H542 2013 615.1′9--dc23 2012035755
ISBN: 9781118065631
PREFACE
Conventional approaches to the toxicity testing of chemicals and drugs are often decades old, are costly, do not allow high-throughput testing, and are of questionable value for estimating human risk. The publication of the document entitled "Toxicity Testing in the 21st Century: A Vision and a Strategy" by the US National Research Council and the implementation of the European legislation on the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) have led to a paradigm shift in the strategy to be pursued when evaluating the toxic potential of chemicals and drugs: toxicity evaluation should be performed predominantly with high-throughput in vitro methods, while toxicity testing in animals should play, if at all, a minimal role.
The book gives an overview of a variety of high-throughput screening methods currently used in toxicity testing and should be of help to all scientists working in the field of toxicity evaluation and risk assessment of chemicals and drugs, whether in chemical, pharmaceutical, and biotechnology companies, contract laboratories, academia, or regulatory agencies. The chapters are written in such a way that they lend support both to those wanting to establish these methods in their laboratories and to those having to evaluate the data generated. Each chapter describes the principle of the method and includes detailed information on data generation, data analysis, and the application(s) in risk assessment. Moreover, the chapters not only list the advantages of each high-throughput method over the "conventional" methods used up to now in the safety evaluation of chemicals and drugs but also point out limitations and pitfalls.
The book is divided into five parts. Part I covers the strategies currently pursued to predict the toxicity potential of chemicals and drugs through high-throughput bioactivity profiling, the incorporation of human dosimetry and exposure data into high-throughput in vitro toxicity screening, and the use of human embryonic stem cells in high-throughput toxicity assays. Part II presents a variety of high-throughput assays to assess different cytotoxicity endpoints; Part III describes high-throughput assays to assess DNA damage and carcinogenesis; Part IV includes high-throughput assays to assess reproductive toxicity, cardiotoxicity, and hematotoxicity; and Part V presents high-throughput assays to assess drug metabolism and receptor-related toxicity. By covering all of these aspects, the book should be of great value to toxicologists, pharmacologists, analytical chemists, and pharmaceutical scientists in academic institutions, industry, and regulatory agencies who are involved in the safety evaluation and risk assessment of chemicals and drugs, and an excellent complement to the current literature on toxicology in general and on safety evaluation/risk assessment in particular. Because of the test systems and toxicity endpoints described, this book should also be of great interest to scientists working in the fields of biochemistry, cell biology, molecular biology, systems biology, and computational toxicology.
I would hereby like to thank all authors for their excellent contributions; only because of them was it possible to compile a book covering such a broad spectrum of toxicity testing methods. The development of high-throughput methods to screen the toxic potential of drugs and chemicals is a rapidly evolving field. If one or another method has been missed, the omission was not intentional and will serve as an incentive to update this book in the future.
PABLO STEINBERG
CONTRIBUTORS
Harrie T. Besselink, BioDetection Systems BV, Amsterdam, The Netherlands
Jörg Blümel, MedImmune, Gaithersburg, MD, USA
Abraham Brouwer, BioDetection Systems BV, Amsterdam, The Netherlands, and Department of Animal Ecology, VU University Amsterdam, Amsterdam, The Netherlands
Bart van der Burg, BioDetection Systems BV, Amsterdam, The Netherlands
Alexander Bürkle, Molecular Toxicology Group, Department of Biology, University of Konstanz, Konstanz, Germany
Nathan J. Evans, Research Department, Promega Corporation, Madison, WI, USA
Francesca de Giorgi, FluoFarma, Pessac, France
Caroline Haglund, Division of Clinical Pharmacology, Department of Medical Sciences, Uppsala University, Uppsala, Sweden
Bram Herpers, Division of Toxicology, The Leiden Amsterdam Center for Drug Research, Leiden University, Leiden, The Netherlands
Martin Höglund, Division of Hematology, Department of Medical Sciences, Uppsala University, Uppsala, Sweden
G. Jean Horbach, Department of Toxicology and Drug Disposition, Merck Sharp & Dohme, Oss, The Netherlands
Keith A. Houck, National Center for Computational Toxicology, Office of Research and Development, US Environmental Protection Agency, Research Triangle Park, NC, USA
Xin Huang, Cardiovascular Institute, Clinical Research Center, 2nd Affiliated Hospital at School of Medicine, Zhejiang University, Hangzhou, People's Republic of China
François Ichas, FluoFarma, Pessac, France
Esther de Jong, Laboratory for Health Protection Research, National Institute for Public Health and the Environment (RIVM), Bilthoven, The Netherlands, and Institute for Risk Assessment Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
Lydia Jonker, BioDetection Systems BV, Amsterdam, The Netherlands
Richard S. Judson, National Center for Computational Toxicology, Office of Research and Development, US Environmental Protection Agency, Research Triangle Park, NC, USA
Min Jung Kim, College of Pharmacy, Seoul National University, Seoul, South Korea
Nadine Krause, Nonclinical Safety, Merz Pharmaceuticals GmbH, Frankfurt (Main), Germany
Rolf Larsson, Division of Clinical Pharmacology, Department of Medical Sciences, Uppsala University, Uppsala, Sweden
Sander van der Linden, BioDetection Systems BV, Amsterdam, The Netherlands
Yi-jia Lou, Institute of Pharmacology, Toxicology and Biochemical Pharmaceutics, Zhejiang University, Hangzhou, People's Republic of China
Hai-yen Man, BioDetection Systems BV, Amsterdam, The Netherlands
Carl-Fredrik Mandenius, Division of Biotechnology, Department of Physics, Chemistry and Biology, Linköping University, Linköping, Sweden
Matthew T. Martin, National Center for Computational Toxicology, Office of Research and Development, US Environmental Protection Agency, Research Triangle Park, NC, USA
Thomas Meyer, Multi Channel Systems GmbH, Reutlingen, Germany
Richard A. Moravec, Research Department, Promega Corporation, Madison, WI, USA
María Moreno-Villanueva, Molecular Toxicology Group, Department of Biology, University of Konstanz, Konstanz, Germany
Andreas Natsch, Givaudan Schweiz AG, Duebendorf, Switzerland
Andrew L. Niles, Research Department, Promega Corporation, Madison, WI, USA
Satomi Onoue, Department of Pharmacokinetics and Pharmacodynamics, School of Pharmaceutical Sciences, University of Shizuoka, Shizuoka, Japan
Stephanie Padilla, Integrated Systems Toxicology Division, US Environmental Protection Agency, Research Triangle Park, NC, USA
Kamala Pant, BioReliance Corporation, Rockville, MD, USA
Aldert H. Piersma, Laboratory for Health Protection Research, National Institute for Public Health and the Environment (RIVM), Bilthoven, The Netherlands, and Institute for Risk Assessment Sciences, Faculty of Veterinary Medicine, Utrecht University, Utrecht, The Netherlands
Johanna Rajasärkkä, Department of Food and Environmental Sciences, University of Helsinki, Helsinki, Finland
David M. Reif, National Center for Computational Toxicology, Office of Research and Development, US Environmental Protection Agency, Research Triangle Park, NC, USA
Ann M. Richard, National Center for Computational Toxicology, Office of Research and Development, US Environmental Protection Agency, Research Triangle Park, NC, USA
Terry L. Riss, Research Department, Promega Corporation, Madison, WI, USA
Joshua F. Robinson, Laboratory for Health Protection Research, National Institute for Public Health and the Environment (RIVM), Bilthoven, The Netherlands, and Department of Toxicogenomics, Maastricht University, Maastricht, The Netherlands
James F. Rusling, Department of Chemistry, University of Connecticut, Storrs, CT, USA, and Department of Cell Biology, University of Connecticut Health Center, Farmington, CT, USA
Ayako Sakai, Laboratory of Cell Carcinogenesis, Division of Alternative Toxicology Test, Hatano Research Institute, Food and Drug Safety Center, Hadano, Kanagawa, Japan
Kiyoshi Sasaki, Laboratory of Cell Carcinogenesis, Division of Alternative Toxicology Test, Hatano Research Institute, Food and Drug Safety Center, Hadano, Kanagawa, Japan
John Schenkman, Department of Cell Biology, University of Connecticut Health Center, Farmington, CT, USA
Willem G.E.J. Schoonen, Department of Toxicology and Drug Disposition, Merck Sharp & Dohme, Oss, The Netherlands
Michael Schwarz, Department of Toxicology, Institute of Experimental and Clinical Pharmacology and Toxicology, University of Tübingen, Tübingen, Germany
Yoshiki Seto, Department of Pharmacokinetics and Pharmacodynamics, School of Pharmaceutical Sciences, University of Shizuoka, Shizuoka, Japan
Jonathan Z. Sexton, Biomanufacturing Research Institute and Technology Enterprise, North Carolina Central University, Durham, NC, USA
Imran Shah, National Center for Computational Toxicology, Office of Research and Development, US Environmental Protection Agency, Research Triangle Park, NC, USA
Joon Myong Song, College of Pharmacy, Seoul National University, Seoul, South Korea
André Stang, Institut für Biologie und Umweltwissenschaften, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
Pablo Steinberg, Institute for Food Toxicology and Analytical Chemistry, University of Veterinary Medicine Hannover, Hannover, Germany
Joe C.R. Stevenson, Department of Toxicology and Drug Disposition, Merck Sharp & Dohme, The Netherlands
David M. Stresser, Corning GentestSM Contract Research Services, Corning Life Sciences - Discovery Labware, Woburn, MA, USA
Noriho Tanaka, Laboratory of Cell Carcinogenesis, Division of Alternative Toxicology Test, Hatano Research Institute, Food and Drug Safety Center, Hadano, Kanagawa, Japan
Peter T. Theunissen, Laboratory for Health Protection Research, National Institute for Public Health and the Environment (RIVM), Bilthoven, The Netherlands, and Department of Toxicogenomics, Maastricht University, Maastricht, The Netherlands
Russell S. Thomas, The Hamner Institutes for Health Sciences, Research Triangle Park, NC, USA
Frederik Uibel, Department of Toxicology, Institute of Experimental and Clinical Pharmacology and Toxicology, University of Tübingen, Tübingen, Germany
Luc Verschaeve, Scientific Institute of Public Health, Operational Direction Public Health & Surveillance, Laboratory of Toxicology, Brussels, Belgium
Marko Virta, Department of Food and Environmental Sciences, University of Helsinki, Helsinki, Finland
Barbara van Vugt-Lussenburg, BioDetection Systems BV, Amsterdam, The Netherlands
Beppy van de Waart, Department of In Vitro & Environmental Toxicology, WIL Research, ’s-Hertogenbosch, The Netherlands
Bob van de Water, Division of Toxicology, The Leiden Amsterdam Center for Drug Research, Leiden University, The Netherlands
Femke M. van de Water, Department of Toxicology and Drug Disposition, Merck Sharp & Dohme, Oss, The Netherlands
Walter M.A. Westerink, Department of In Vitro and Environmental Toxicology, WIL Research, ’s-Hertogenbosch, The Netherlands
Barbara A. Wetmore, The Hamner Institutes for Health Sciences, Research Triangle Park, NC, USA
Kevin P. Williams, Biomanufacturing Research Institute and Technology Enterprise, North Carolina Central University, Durham, NC, USA
Roos Winter, BioDetection Systems BV, Amsterdam, The Netherlands
Irene Witte, Institut für Biologie und Umweltwissenschaften, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
Tracy J. Worzella, Research Department, Promega Corporation, Madison, WI, USA
Shizuo Yamada, Department of Pharmacokinetics and Pharmacodynamics, School of Pharmaceutical Sciences, University of Shizuoka, Shizuoka, Japan
George Zhang, Corning GentestSM Contract Research Services, Corning Life Sciences -- Discovery Labware, Woburn, MA, USA
Dan-yan Zhu, Institute of Pharmacology, Toxicology and Biochemical Pharmaceutics, Zhejiang University, Hangzhou, People's Republic of China
PART I
GENERAL ASPECTS
1
ToxCast: PREDICTING TOXICITY POTENTIAL THROUGH HIGH-THROUGHPUT BIOACTIVITY PROFILING
KEITH A. HOUCK, ANN M. RICHARD, RICHARD S. JUDSON, MATTHEW T. MARTIN, DAVID M. REIF, AND IMRAN SHAH
1.1 INTRODUCTION
Chemical safety assessment has long relied on exposing a few species of laboratory animals to high doses of chemicals and observing adverse effects. These results are extrapolated to humans by applying safety factors (uncertainty factors) that account for species differences, susceptible subpopulations, derivation of no observed adverse effect levels (NOAELs) from lowest observed adverse effect levels, and data gaps, thereby yielding theoretically safe exposure limits. This approach is often criticized for a lack of relevance to human health effects because of the many demonstrated differences in physiology, metabolism, and toxicological effects between humans and rodents or other laboratory animals [1]. Such criticism exists mainly because the specific mechanisms of toxicity, and whether they are relevant to humans, are usually unknown. Toxicological modes of action (MOA) have been elucidated for only a limited number of chemicals; even fewer chemicals have had their specific molecular mechanisms of action determined. Such detailed knowledge would support greater confidence in species extrapolation and in the setting of exposure limits. However, tens of thousands of chemicals currently in commerce and with some potential for human exposure lack even traditional toxicity testing, let alone elucidated modes or mechanisms of toxicity [2]. Understanding mechanisms of toxicity usually results from decades-long research dedicated to single chemicals of interest, a model unsuitable for such vast numbers of chemicals. Even with dedicated research, such efforts are not guaranteed to succeed; the extended focus on understanding the mechanism of toxicity of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) is an example [3]. Traditional animal testing, in addition to the criticisms discussed above, is not appropriate or feasible for the large numbers of untested chemicals because of the high costs and number of animals required [1].
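The safety-factor arithmetic described above can be sketched in a few lines. The factor values below (10-fold each for interspecies extrapolation, intraspecies variability, and LOAEL-to-NOAEL extrapolation) are conventional defaults, and the NOAEL is hypothetical, not taken from any real study:

```python
# Illustrative sketch only: deriving a theoretically safe exposure limit
# (a reference dose) from an animal NOAEL by dividing out uncertainty
# factors. The NOAEL and factor values are hypothetical defaults.

def reference_dose(noael_mg_per_kg_day, uncertainty_factors):
    """Divide the NOAEL by the product of all applicable uncertainty factors."""
    product = 1
    for factor in uncertainty_factors.values():
        product *= factor
    return noael_mg_per_kg_day / product

factors = {
    "interspecies (animal -> human)": 10,
    "intraspecies (susceptible subpopulations)": 10,
    "LOAEL -> NOAEL extrapolation": 10,  # applied only when no NOAEL exists
}

rfd = reference_dose(50.0, factors)  # hypothetical NOAEL of 50 mg/kg/day
print(f"Reference dose: {rfd:.3f} mg/kg/day")  # prints 0.050 mg/kg/day
```

The point of the sketch is how quickly the combined factors (here 1000-fold) shrink the NOAEL, which is precisely why the choice of factors is contentious when mechanistic knowledge is lacking.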
One major effort to address this dilemma by providing a high-capacity alternative is underway, facilitated by the integration of the fields of computational toxicology and high-throughput in vitro testing [4,5]. The ultimate goals of this approach are to screen and prioritize thousands of chemicals, to predict the potential for human health effects, and to derive safe exposure levels for the myriad chemicals to which we are exposed. This approach relies on a shift in toxicology research away from "black-box" testing on whole animals and toward an understanding of the direct interactions of chemicals with a broad spectrum of potential toxicity targets comprising specific molecular entities and cellular phenotypes. Such bioactivity profiling of chemicals, generated through the use of high-throughput approaches, produces characteristic chemical response profiles, or signatures, which may describe the toxicity potential of a chemical [6].
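The notion of a chemical response "signature" can be made concrete: each chemical becomes a vector of per-assay readouts, and signatures can then be compared numerically, with similar profiles hinting at a shared mode of action. The chemical names, assay names, and AC50 values below are invented purely for illustration:

```python
# Minimal sketch of bioactivity profiling: each chemical is a vector of
# per-assay potencies (AC50 in uM; None = inactive in that assay). All
# chemical names, assay names, and values are hypothetical.
import math

ASSAYS = ["ER_agonist", "AR_antagonist", "CYP1A1_induction", "p53_activation"]

profiles = {
    "chemical_A": [0.5, None, 12.0, 3.0],
    "chemical_B": [0.8, None, 15.0, 2.5],
    "chemical_C": [None, 1.2, None, None],
}

def to_feature(ac50, inactive_value=1000.0):
    """Map an AC50 to a -log10 potency feature; inactives get a floor value."""
    return -math.log10(ac50 if ac50 is not None else inactive_value)

def distance(p, q):
    """Euclidean distance between two chemicals' potency signatures."""
    return math.sqrt(sum((to_feature(a) - to_feature(b)) ** 2
                         for a, b in zip(p, q)))

# Chemicals with similar signatures may share a mode of action.
d_ab = distance(profiles["chemical_A"], profiles["chemical_B"])
d_ac = distance(profiles["chemical_A"], profiles["chemical_C"])
print(f"A vs B: {d_ab:.2f}   A vs C: {d_ac:.2f}")  # A is far closer to B
```

Real programs use hundreds of assays and more careful handling of inactives and assay noise, but the representation, one feature vector per chemical, is the same.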
Computational analysis and modeling of the results are required to provide insight into these complex datasets and to support the development of predictive toxicity algorithms that ultimately may serve as the foundation of an in vitro toxicity testing approach replacing most or all animal testing. The groundwork required for a computational toxicology approach is the generation of datasets comprising the quantitative effects of chemicals on biological targets. Two types of data are required. The first comprises the test results from in vitro and/or in silico assays that can be run in high-throughput mode and provide bioactivity profiles for hundreds to thousands of chemicals. The second is a dataset that details the effects of these chemicals on whole organisms, ideally the species of interest. These data are used to anchor and build predictive models that can then be applied to chemicals that lack in vivo testing. Generation of the in vitro dataset has become feasible as high-throughput in vitro screening technology, developed in support of the drug discovery community, has become widely available. The selection and use of these assays for computational toxicology will be discussed further in Section 1.4. Obtaining the latter dataset of in vivo effects necessary to build the computational models presents unique challenges. Although thousands of chemicals have been tested using in vivo approaches, only a limited amount of this information has been readily available. Much of it exists in formats not readily conducive to computational analysis (e.g., paper records), resides in the data stores of private corporations, or is protected by confidentiality clauses [7], and the generation of extensive new in vivo data to support the approach is cost prohibitive. The access to and collation of these data into a relational database useful for computational toxicology will be discussed in Section 1.5.
Beyond the technical aspects of generating the data, assembling the collection of required datasets to support computational approaches is a challenging task in itself. Robust, efficient, and accurate knowledge discovery from large datasets requires a robust data infrastructure. There are a number of critical steps in the process, beginning with designing an underlying architecture to manage the data. Appropriate data must be selected and preprocessed into common formats usable by computer programs (e.g., standardized field names for the types of attributes being measured, standardized chemical names, and links to other data sources). The use of standardized ontologies can be particularly useful in the sharing of information across organizations [8]. Because of the complexities of achieving this on a large scale, these approaches are perhaps best conducted by large organizations with access to computational scientists in addition to experts in chemistry, toxicology, statistics, and high-throughput screening (HTS). Examples of integration of these diverse areas of expertise include the U.S. Environmental Protection Agency's (EPA) ToxCast program [4] and the Tox21 collaboration between the EPA, the National Toxicology Program, the National Institutes of Health Center for Translational Therapeutics (NCTT; formerly the NIH Chemical Genomics Center [NCGC]), and the U.S. Food and Drug Administration [9,10]. In addition, a number of large pharmaceutical companies have internal programs in this area relying on their own extensive in-house expertise [11,12].
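The preprocessing step of mapping heterogeneous source data into common formats might look like the following sketch; the source names, field names, and sample records are invented for illustration.

```python
# Hypothetical sketch of the preprocessing described above: records
# arriving from different sources with different native field names are
# mapped onto one standardized schema keyed by CASRN. The source names,
# field names, and sample records are invented.

# Per-source mapping from native field names to standardized ones.
FIELD_MAPS = {
    "source_1": {"CAS No.": "casrn", "Chemical": "name", "LD50 (mg/kg)": "ld50_mg_kg"},
    "source_2": {"cas_number": "casrn", "substance_name": "name", "oral_ld50": "ld50_mg_kg"},
}

def standardize(record, source):
    """Rename a raw record's fields according to the source's field map,
    dropping fields with no standardized equivalent."""
    fmap = FIELD_MAPS[source]
    return {fmap[k]: v for k, v in record.items() if k in fmap}

raw_1 = {"CAS No.": "80-05-7", "Chemical": "Bisphenol A", "LD50 (mg/kg)": 3250}
raw_2 = {"cas_number": "80-05-7", "substance_name": "BPA", "oral_ld50": 3250, "lab": "X"}

print(standardize(raw_1, "source_1"))
print(standardize(raw_2, "source_2"))
# Both records now share the same keys: casrn, name, ld50_mg_kg
```

Once records share a schema and a common chemical key, linking across sources and feeding the data to modeling code becomes straightforward.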
As described, the ultimate goal is to use high-throughput in vitro assays to rapidly and inexpensively profile the bioactivity of chemicals of unknown toxicity and make predictions about their potential for causing various adverse endpoints [4]. Achieving a robust, predictive toxicology testing program is a long-range goal that will need to proceed through a number of systematic stages including proof-of-concept, extension of chemical and bioassay diversity, refinement, and ultimately, supplementation or replacement of existing methods. The initial stage involves multiple steps: (1) selecting an appropriate chemical test set for which in vivo data are available; (2) selecting high-throughput biological assays for screening the chemicals; (3) generating the screening data on the chemicals; (4) collating the in vivo anchoring data for the chemicals; and (5) building predictive models. Such models can then be validated through testing of additional chemicals with known toxicity endpoints to determine the robustness of the models. The development of the test systems, as well as the computational models, is likely to be an iterative process: new biological assays and statistical approaches will be evaluated for potential inclusion in the program, while assays and models not producing useful results will be dropped.
Success at this stage of the process would be measured by models judged useful for prioritizing chemicals according to their potential to cause specific toxicity endpoints. This prioritization will be valuable in the short term by allowing focused use of limited in vivo testing resources on the chemicals most likely to be of concern. The results of targeted testing of designated chemicals for specific endpoints should ensure a reduced use of test animals, as only limited endpoints would need to be evaluated. This targeted testing will also provide an additional validation method for the testing program, that is, do the adverse endpoints predicted by the models occur to a significant extent in the tested chemicals? Ultimately, refinement of the testing and modeling approaches should allow high-confidence prediction of the likelihood for toxicity, thereby avoiding animal testing altogether for many chemicals. The remainder of this chapter will focus more specifically on providing background on the steps undertaken in developing the initial stages of the ToxCast testing program at EPA, as well as examples of applications of the program in prioritizing environmental chemicals for multiple toxicity endpoints.
1.2 CHEMICAL LANDSCAPE
A major driver of the development and use of HTS methods in toxicology is the scope of the chemical problem, that is, tens of thousands of chemicals to which individuals are potentially exposed, the majority of which have never been tested in any significant way [2]. Which chemicals are of interest, and what kind of data are likely to be available, depend on the use of the chemical, which in turn is related to the regulations to which the chemical is subject. To understand the world of chemicals that are of concern for potential toxicity and candidates for testing, it is useful to discuss a set of chemical inventories, some of which are overlapping.
1.2.1 Pesticide Active Ingredients
These are typically the active compounds in pesticide formulations, which are designed to be toxic against select types of organisms. A related category of compounds falling under this general label are antimicrobials, which are also designed to be toxic to certain organisms, in this case targeting fungi or bacteria. These groups of chemicals are further divided into food-use and nonfood-use actives for the purpose of regulation. EPA sets tolerance levels for pesticides that may be used in specific foods, for particular reasons, and at particular exposure levels. Thus, EPA regulates the maximum amount of pesticide residue permitted to remain on a food approved for pesticide application. FDA, in contrast, has the authority to monitor and enforce levels of food-use pesticides and ensure that they comply with EPA regulations. FDA has additional authority regarding the use of antimicrobials in food packaging [13]. Food-use pesticide actives have the highest data requirements, and for these a company will typically generate data from 2-year chronic/cancer bioassays in rats and mice, developmental toxicity studies in rats and rabbits, multigenerational reproductive toxicity studies in rats, and other specialized in vivo studies [14]. These are similar to the complete set of preclinical studies required for human pharmaceuticals. Because of this large data requirement, these chemicals are ideal for use in building toxicity prediction models, since one will have near-complete in vitro and in vivo datasets. It is not surprising that pesticide actives have some of the same features and chemical properties as pharmaceutical products, given that they are often designed to interact with a specific molecular target.
1.2.2 Pesticidal Inerts
These are all of the ingredients in a pesticide product or formulation other than the active ingredients. Although they are labeled as “inert”, there is no requirement that they be nontoxic. These can range from solvents (e.g., benzene) to animal attractants, such as peanut butter or rancid milk. As with the actives, inerts are classified as food-use and nonfood-use. Regulatory data requirements are, in general, limited, thus resulting in the availability of little in vivo data [15].
1.2.3 Industrial Chemicals
This is an extremely broad class of chemicals including solvents, detergents, plastic monomers and polymers, fuels, synthesis intermediates, and dyes. As such, they are typically not designed to be bioactive, although many do have bioactivity, sometimes through interaction with enzymes and receptors, by chemically reacting with biomolecules, or via physical interactions (e.g., by disrupting cell membranes). Many of these compounds are manufactured in very large quantities, posing greater potential risks. Such chemicals typically have less stringent regulatory oversight and toxicity testing requirements but are subject to reporting rules under the Toxic Substances Control Act (TSCA). Under TSCA, different reporting requirements and regulatory scrutiny are applied depending on production volume levels (MPV, medium production volume chemicals, >25 K tons/year; HPV, high production volume chemicals, >1 M tons/year). On average, these industrial compounds have lower molecular weight than pesticidal actives or pharmaceuticals, and include many more volatile and semivolatile compounds.
1.2.4 Pharmaceuticals
These are the active ingredients in drugs and, hence, are designed to have specific bioactivity. It is well known that many drugs have toxic side effects, often through unexpected off-target interactions, and that this is a major economic concern for the pharmaceutical industry, driving up the costs of drug development. In addition, there is increasing concern for toxicity, not only for patients directly taking the drug, but also for ecological species exposed to these compounds in wastewater [16]. Despite the large amounts of toxicity data submitted to the FDA during the drug-approval process, including clinical data on humans if the drug reaches clinical trials, as well as additional preclinical toxicity data generated within the pharmaceutical industry, little of these data see the light of day due to confidentiality concerns. As a result, public availability of toxicity data on pharmaceuticals is generally limited to what is available in the open literature.
1.2.5 Food Additives/Ingredients
This category includes both natural and synthetic small molecules that are intentionally added to food, often to enhance nutritional value (e.g., vitamins), to act as preservatives, such as in food packaging, or to enhance color or texture. FDA regulates allowed tolerances for such chemicals and has the authority to require a battery of in vitro (primarily genotoxicity) and in vivo toxicity testing to support such reviews within the Center for Food Safety and Nutrition (CFSAN) [17]. Such data can be made publicly available, hence providing a potentially rich source of additional in vivo data for computational toxicology modeling.
1.2.6 Water Contaminants
EPA regulates chemicals in surface and drinking water, and the relevant chemicals include any of the above categories that enter the water system, as well as metabolites or degradation products. One example of the latter is disinfection byproducts that can result from reactions of chlorine with organic molecules in a drinking water system to produce polychlorinated organic compounds. The regulatory authority in this instance is reactive. First, a chemical has to be detected in water at sufficient levels to cause some concern, and then sufficient scientific justification must be provided to warrant regulatory action. As a result, toxicity data is generally lacking for many of these chemicals, similar to the situation for industrial chemicals.
Because there are so many chemicals to which humans and ecological species are potentially exposed, it is necessary to prioritize among them when setting up a large-scale screening program such as ToxCast or Tox21. The potential for exposure is one critical aspect of this prioritization, and these and further chemical use-categories are important indicators of the potential for exposure. For instance, any chemical that is directly in food or water (e.g., food additives or pesticides that leave residues on crops or chemicals found in drinking water) would have extra weight in a prioritization scheme. More detailed “use-categories” are also available to help refine estimates of potential exposure routes. For instance, if a chemical is found in products to which children are exposed (e.g., baby bottles, clothing), that chemical would have a heightened priority for screening. There is no general mapping of chemicals to use-categories that is publicly available, but the ExpoCast project, affiliated with the ToxCast project within EPA, is currently developing such a mapping based on merging data from many different sources [18].
The lack of data availability on chemicals, whether it is use-category, exposure potential, or toxicity data, is one of the major drivers of EPA's HTS computational toxicology program [4]. However, the success of this effort also relies upon the ability to collate as much available data as possible and to systematize and format these data into computable forms to enable modeling efforts to proceed. To provide a central resource to support this effort, a large-scale database is being created to gather all publicly available data on chemicals in the environment through the Aggregated Computational Toxicology Resource (ACToR) effort [19]. Thus far, varying amounts and types of data have been compiled on several hundred thousand chemicals collected from over 1,000 different sources, including, for example, information on hazard (i.e., in vitro and in vivo toxicity data), exposure, use, and production.
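The kind of relational aggregation ACToR performs can be illustrated, on a vastly smaller scale, with Python's built-in sqlite3 module; the table layout and sample rows below are hypothetical and not ACToR's actual schema.

```python
# A minimal relational sketch of aggregating chemical data from multiple
# sources into one queryable store, in the spirit of (but far simpler than)
# ACToR. Table, column, and source names are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE chemical (
    casrn TEXT PRIMARY KEY, name TEXT)""")
con.execute("""CREATE TABLE data_point (
    casrn TEXT REFERENCES chemical(casrn),
    source TEXT, data_type TEXT, value TEXT)""")

con.execute("INSERT INTO chemical VALUES ('80-05-7', 'Bisphenol A')")
rows = [
    ("80-05-7", "source_A", "hazard",     "rat oral LD50 (hypothetical entry)"),
    ("80-05-7", "source_B", "use",        "polycarbonate monomer"),
    ("80-05-7", "source_C", "production", "HPV"),
]
con.executemany("INSERT INTO data_point VALUES (?, ?, ?, ?)", rows)

# One chemical, data of several types pulled from several sources:
for source, dtype, value in con.execute(
        "SELECT source, data_type, value FROM data_point WHERE casrn = '80-05-7'"):
    print(source, dtype, value)
```

Keying every record to a curated CASRN is what makes cross-source collation, and later computational modeling, tractable.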
The above discussion focuses on the chemical landscape of concern for testing from a regulatory and use or exposure perspective, but an equally important consideration for our long-range purposes is providing adequate coverage of the chemical feature and property landscape spanned by the various use-category inventories of chemicals. Given the intimate relationship between a chemical's structure and its biological activity, a computational toxicology approach capable of predicting toxicity from HTS bioactivity profiles must provide sufficient coverage of biological pathways and toxicity mechanisms across the chemical landscape of interest. This means that a chemical testing library must also provide sufficient coverage of the diverse chemical property and feature space capable of adequately probing this biological mechanism diversity.
1.3 THE CHEMICAL LIBRARIES
To generate the in vitro dataset required for the computational toxicology approach, a chemical library was assembled, with initial and later testing candidates largely drawn from the chemical inventories described above. Meeting the initial objective of providing proof-of-concept of the HTS computational toxicology approach required a strong anchoring to in vivo animal toxicity studies. Hence, selection of the initial testing set for ToxCast, which we refer to as the Phase I chemical library, was primarily driven by the availability of detailed in vivo toxicity data. The existence of high-quality regulatory guideline studies required by EPA for chemical safety evaluation of pesticide active ingredients motivated the selection of these compounds to fulfill these data requirement needs. Thus, the Phase I library consisted of 309 unique chemical substances, more than 90% of which were pesticide actives, with the rest a mixture of in vivo data-rich industrial chemicals such as bisphenol A (BPA) and perfluorooctanoic acid (PFOA).
In vitro HTS testing procedures additionally have a number of practical requirements that affect the types of chemicals that can be tested using current technologies. Obvious concerns are the solubility of the chemical in aqueous buffer, which is the medium used to conduct HTS testing, as well as in dimethyl sulfoxide (DMSO), which is the near-universal solvent used to solubilize test chemicals. Additionally, volatility is a concern, since the chemicals are run in batch mode and attention cannot be paid to special handling requirements for volatile or semivolatile chemicals. A few physical–chemical property filters, primarily molecular weight (MW) and octanol/water partition coefficient (logP), were used to choose the Phase I chemicals, but the structures of pesticides are such that most met the criteria for inclusion and were soluble in DMSO. The ToxCast Phase I chemical solutions that underwent the initial round of HTS testing were also post-analyzed by analytical quality control (QC) methods amenable to high-throughput application (primarily liquid chromatography–mass spectrometry [LC/MS], with gas chromatography–mass spectrometry [GC/MS] follow-up for compounds not suitable for LC/MS analysis). Identity and purity were confirmed for over 80% of the Phase I library, with the majority of the remaining compounds deemed unsuitable for analysis because they were metal containing or of low MW. One class of pesticides, consisting of 14 sulfurons, was found to dissociate significantly in DMSO over time, motivating the removal of these compounds from further ToxCast testing.
The ToxCast Phase I chemical library, despite its relatively small size, contained a significant amount of chemical and functional diversity, spanning over 40 chemical functional classes (e.g., pyrazoles, sulfonamides, organochlorines, pyrethroids, carbamates, organophosphates) and 24 known pesticidal functional classes (e.g., phenylurea herbicides, organophosphate insecticides, dinitroaniline herbicides). The implication is that although the particular compounds included in this Phase I test set may not be representative of the larger chemical universe of potential interest such as antimicrobials, food-additives, drugs, and industrial compounds, the constituent features of these chemicals are potentially capable of representing a much broader set of chemicals from a wide range of use-categories.
Clearly, however, in order to meet the larger objectives of the ToxCast program for modeling in vivo toxicity, it is necessary to test larger chemical inventories that include greater representation of the various use-categories of high interest, as well as the more varied chemical and biological interactions that must be probed and characterized in order to build general models for predicting toxicity. Following the testing of the Phase I library, a much larger chemical collection was assembled based on these considerations for the dual purposes of expanding the ToxCast test library and constructing the EPA contribution to the Tox21 library. Nominations for this library were broadly drawn from the previously described inventories and initially exceeded 9,000 compounds. Given the much larger structural diversity of the chemicals nominated, a greater number of compounds were excluded from consideration on the basis of calculated physical–chemical properties, such as MW, vapor pressure, boiling point, solubility, and logP. Finally, practical considerations pertaining to physical samples, such as cost, availability, actual solubility in DMSO, and confirmed volatility, determined whether or not a compound was included in the final EPA Tox21 inventory, consisting of more than 3,700 unique chemical substances.
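A property-based exclusion filter of the kind described could be sketched as follows; the cutoff values are illustrative assumptions, not the thresholds actually used to assemble the ToxCast/Tox21 libraries.

```python
# Hedged sketch of physical-chemical property filtering for library
# candidates. The cutoff values are illustrative placeholders only.

# Illustrative cutoffs (assumptions): a tractable MW range and logP range
# for DMSO-solubilized, nonvolatile HTS test chemicals.
MW_RANGE = (100.0, 1000.0)   # g/mol
LOGP_RANGE = (-2.0, 8.0)     # octanol/water partition coefficient

def passes_filters(mw, logp):
    """Return True if a candidate's calculated properties fall inside
    the (illustrative) acceptable ranges."""
    return MW_RANGE[0] <= mw <= MW_RANGE[1] and LOGP_RANGE[0] <= logp <= LOGP_RANGE[1]

# Hypothetical candidates: (MW, logP)
candidates = {
    "chem_X": (228.3, 3.3),   # typical small organic: passes
    "chem_Y": (44.1, 1.8),    # very low MW, likely volatile: excluded
    "chem_Z": (350.0, 9.5),   # extremely lipophilic: excluded
}
kept = [name for name, (mw, logp) in candidates.items() if passes_filters(mw, logp)]
print(kept)  # -> ['chem_X']
```

In the real selection process such calculated filters (including vapor pressure, boiling point, and solubility) only narrowed the pool; cost, availability, and confirmed DMSO solubility made the final call.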
The ToxCast Phase II chemical library, currently undergoing testing, consists of 776 unique chemical substances, including nine Phase I compounds used as testing replicates, drawn from the expanded EPA Tox21 chemical inventory and spanning a much broader range of use-cases and chemical structures than in Phase I. For the selection of Phase II compounds, significant weight was given to those substances with extensive in vivo data available, as well as to toxicity reference substances with well-defined activities and mechanisms of action. Pursuant to this goal, approximately 30% of the Phase II compounds have in vivo data available from the National Toxicology Program or data generated to meet EPA or FDA regulatory requirements for pesticides or food additives. However, due to the relative paucity of data for many of the use-categories described previously, many of the chemicals in this expanded collection had relatively little or no such data available. In addition, higher weight was given to chemicals on high-interest EPA inventories (such as those listed above), as well as to chemicals that appeared on multiple inventories or use-categories. The Phase II inventory also benefitted from an unprecedented collaboration between EPA and the pharmaceutical industry, whereby 135 “failed drugs” were donated by six pharmaceutical companies (Pfizer, Merck, GlaxoSmithKline, Sanofi, Roche, and Astellas), along with preclinical and, in some cases, human clinical data reporting adverse effects. The value of these data in extending findings made on chemicals tested only in laboratory animals to those tested in humans may be significant.
The ToxCast Phase I and Phase II inventories total 1,060 unique compounds. These compounds are being run in the full suite of more than 500 ToxCast assays. Both of these chemical inventories are fully contained within the EPA Tox21 chemical inventory, which in turn is a subset of the complete Tox21 collection, totaling approximately 8,200 unique chemical structures. In addition to the failed pharmaceuticals, the Tox21 library contains an extensive collection of human pharmaceuticals [20]. Although the Tox21 inventory is much larger and spans much greater chemical diversity, this library will only be tested in HTS assays being run at the NCTT and, thus, will have more limited bioactivity profiling data available. On the other hand, the smaller ToxCast Phase I and II chemical inventories will be run in the full suite of ToxCast assays, as well as in the additional Tox21 assays, providing a rich chemical and biological context for the interpretation of these data. Details of the chemical libraries can be accessed at http://www.epa.gov/ncct/toxcast/chemicals.html.
An expanded analytical quality control process, to ensure that the tested chemicals are indeed what they are intended to be, accompanies the full Tox21 effort. Chemical identifiers, including names and Chemical Abstracts Service Registry Numbers (CASRN), as well as reported purity, were carefully reviewed and curated from the Certificates of Analysis obtained at the time of procurement. Further review and chemical structure annotation of the full Tox21 inventory and component ToxCast inventories were carried out within EPA's Distributed Structure-Searchable Toxicity (DSSTox) project (see http://www.epa.gov/ncct/dsstox/ for access to downloadable structure files). Following solubilization in DMSO, the chemical identity, purity, and concentration are determined by appropriate analytical techniques, including LC/MS and follow-up GC/MS. This analysis will be repeated over the course of the use of the library to assess compound stability during testing. While complex and costly, such efforts ensure that biological activity measured in an assay is associated with the appropriate chemical structure and, conversely, that negative results are associated with a chemical structure only if that chemical was indeed present.
1.4 THE BIOLOGICAL ASSAYS
Selection of in vitro assays for toxicity testing would be relatively straightforward if the molecular targets underlying mechanisms of toxicity were known. Advances in HTS technologies to support the drug discovery industry have provided the tools to develop assays for large numbers of biological targets, ranging from receptors to enzymes to ion channels and more. If a protein has a defined function, it is safe to say that an in vitro assay can be built to measure effects of chemicals on that function. Techniques such as surface plasmon resonance or LC–MS–MS exist that measure chemical–protein interactions even when the function is unknown [21]. Beyond assays focusing on specific molecular targets, many assays are available to probe phenotypic changes induced in cells by chemical exposure including effects on organelles and cellular structures such as mitochondria, nuclei, cytoskeleton, and cell membrane. Again, with advances in automated fluorescent microscopy screening platforms and associated imaging algorithms, the ability to measure altered cellular phenotypes is almost unlimited. However, assays targeting specific proteins or cellular phenotypes suffer from our lack of detailed knowledge with respect to mechanisms of toxicity that would guide high-confidence assay selection. Exceptions to this, while clear, are relatively few and include molecular targets such as the potassium ion channel hERG [22], acetylcholinesterase [23], cytochrome P450s [24], drug transporters [25], nuclear receptors including the androgen, estrogen, and aryl hydrocarbon receptor (AhR) [26], as well as the 5-HT2b G-protein-coupled receptor (GPCR) [27]. In addition, cellular phenotypic assays for genotoxicity, oxidative stress, mitochondria energy homeostasis, calcium release from intracellular stores, and necrotic and apoptotic cell death can be used to determine toxicity, although with less specificity with respect to molecular target. 
Acceptance of these as valid toxicity targets usually resulted from many years of research, sometimes combined with serendipitous findings. Continuing with this model to complete our understanding of toxicology would be a long, expensive, and arduous route.
As an alternative approach, a broadly based interrogation of important families of biological targets and cellular phenotypes can be conducted efficiently using high-throughput in vitro screens, probing them with large chemical libraries with known animal and human health effects. The reference in vivo toxicity data for these chemicals are needed to correlate the in vitro findings with in vivo endpoints. The tools of computational toxicology can then be applied to analyze, interpret, and model the results, ultimately generating predictive signatures of toxicity compatible with cost-efficient, high-throughput assays conducive to screening unknown chemicals.
Defined toxicity targets are usually members of large protein families such as enzymes (e.g., acetylcholinesterase), receptors (e.g., estrogen receptor), and ion channels (e.g., voltage-gated sodium channels). These protein families make up the majority of what is called the “druggable genome”, molecular targets thought to provide an opportunity for therapeutic intervention and of high interest to the pharmaceutical industry [28]. As a result, hundreds of HTS assays have been developed to support this drug discovery research. Since the vast majority of these potential drug targets have been selected based on believed critical roles in various pathological processes, extension of this thinking suggests that such targets could also be involved in toxicity when inappropriately perturbed by xenobiotic chemicals. This served as the impetus for developing a diverse suite of HTS assays to use for profiling the biological activity of chemical libraries by several groups including ourselves through the ToxCast program [4,11,12].
In vitro HTS assays facilitate the rapid, parallel generation of large numbers of individual assay data points through the use of miniaturized assay formats, automated liquid dispensers, and high-speed plate readers. The miniaturized assay formats are usually multi-well plates with densities of 96, 384, or 1536 wells per plate in a single, standardized plate footprint, and use total assay volumes ranging from 200 μL down to 5 nL. The assay components can be highly varied and depend to a large degree on the biological target being measured. For instance, an assay measuring kinase activity would have a purified kinase, required cofactors, required substrates, appropriate buffer, and the chemical to be tested. In addition, a means of measuring the assay endpoint, here the phosphorylation of the substrate, is required. This could be a radioactive or fluorescent technique, a means to detect the loss of ATP or the increase in ADP, or a separation of the phosphopeptide from the nonphosphorylated one by means of mobility-shift microfluidics assay technology. Cellular phenotypic assays use in vitro cultured cells and automated fluorescence microscopy to image chemical perturbations of cellular structures, organelles, and functions, followed by specific imaging algorithms to quantitate results [29]. Examples of this are assays using fluorescently tagged antibodies to actin microfilaments to monitor chemical effects on the stabilization or destabilization of the cytoskeleton [30].
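As one concrete example of how raw plate-reader signals become usable assay data, the sketch below normalizes a test well to percent inhibition against the plate's own controls; the control values and raw reading are invented for illustration.

```python
# A common HTS calculation, sketched for illustration: normalize a raw
# well signal to percent inhibition using in-plate controls. All numbers
# below are hypothetical.

def percent_inhibition(raw, neg_ctrl_mean, pos_ctrl_mean):
    """Percent inhibition of a test well, where the negative (vehicle)
    controls define 0% and the positive (fully inhibited) controls 100%."""
    return 100.0 * (neg_ctrl_mean - raw) / (neg_ctrl_mean - pos_ctrl_mean)

neg_mean = 50000.0   # vehicle-only wells: full enzyme activity
pos_mean = 2000.0    # reference-inhibitor wells: background signal
print(percent_inhibition(26000.0, neg_mean, pos_mean))  # -> 50.0
```

Normalizing to in-plate controls corrects for plate-to-plate drift in absolute signal, which matters when thousands of plates are read over the course of a campaign.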
The diversity of techniques used to quantitate HTS results underscores a critical point about HTS assays: all assays are susceptible to artifacts, and different assay formats are susceptible to different types of artifacts [31]. Assay artifacts are defined as test chemical-induced events that interfere with the ability to measure an accurate assay result, such as chemical-induced fluorescence quenching, precipitation of the biological target by chemical aggregation, and inherent chemical fluorescence, among others. Thus, an underlying caveat of any HTS assay is that all results must be interpreted with caution. In addition to artifacts induced by specific test chemicals, there are also experimental errors and normal assay variabilities that can affect the results. While HTS assays should be validated according to industry standards (http://spotlite.nih.gov/assay/index.php/), it is inherent in testing large numbers of chemicals that some results will not be accurate. Inaccurate results can generate both false-positive and false-negative findings, and each has its own issues.
For toxicity testing, it is strongly desirable to avoid false-negative results, which could miss important activities and thereby endanger the health of exposed populations. Too many false-positives, however, can undermine the utility of the screening by requiring extensive measures to follow up on the large number of active chemicals. Unfortunately, decreasing the false-negative rate usually comes at the expense of increasing the false-positive rate; thus, finding the right balance with a robust HTS assay is of high importance. Two methods of utility in providing high-confidence results for HTS toxicity testing are to use a concentration–response format for testing all chemicals and to have multiple assays using different assay technologies for important targets. Concentration–response testing allows the use of test concentrations high enough to detect the activity of weakly active chemicals, while minimizing concern for high-concentration-induced artifacts such as cytotoxicity, which can mimic inhibition of functional activity in a cell-based assay. In addition, knowledge about the types of response expected for specific biological targets can help discriminate chemicals affecting the target from those active by artifact. Receptor binding assays, for example, should follow the law of mass action, and the resulting concentration–response curves should display sigmoidal behavior with a slope near one on a semi-log plot [32]. A slope of 10, for example, should flag the response as potentially suspect. Orthogonal assays are particularly useful, for example, the use of a radioligand receptor binding assay and a cellular transactivation assay for the estrogen receptor. Chemicals active in both can be regarded with a high degree of confidence as truly active at the receptor and likely active in vivo, assuming the chemical reaches its receptor target.
The efficiency of HTS supports both of these approaches by providing inexpensive screening methods with sufficient capacity to screen both large numbers of chemicals and at multiple concentrations [33].
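The concentration–response sanity checks described above can be made concrete with a small sketch of the Hill (sigmoidal) model and a flag for implausibly steep fitted slopes; the EC50, slope values, and the cutoff of 3 are illustrative assumptions, not program criteria.

```python
# Sketch of concentration-response checks: a Hill (sigmoidal) response
# model and a simple flag for implausibly steep fitted slopes. Parameter
# values and the slope cutoff are illustrative assumptions.

def hill(conc, top, ec50, slope):
    """Response at a given concentration under the Hill model;
    slope ~1 is expected for simple mass-action binding."""
    return top * conc ** slope / (ec50 ** slope + conc ** slope)

def suspect_slope(slope, cutoff=3.0):
    """Flag fitted Hill slopes far above 1 as potential assay artifacts."""
    return slope > cutoff

# At conc == EC50 the response is half-maximal regardless of slope:
print(hill(1.0, 100.0, 1.0, 1.0))   # -> 50.0
print(suspect_slope(10.0))          # -> True
print(suspect_slope(1.1))           # -> False
```

In a real pipeline the slope comes from nonlinear curve fitting over the tested concentration series; the point here is only that mechanistic expectations (a slope near one) provide a cheap automated plausibility check.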
However, given the sheer numbers of possibilities, testing of all potential toxicity targets is not feasible even with HTS technologies. Selection of the assays for testing within the ToxCast program followed a strategy of selecting targets with known links to toxicity, for which assays were available, combined with widely sampling potential targets from the large protein superfamilies including GPCRs, kinases, phosphatases, nuclear receptors, chromatin-modifying enzymes, CYP P450s, ion channels, and transporters [34]. A list of the families and numbers of assays targeting specific molecular targets is shown in Figure 1.1. Sampling of these families may provide a window into potential chemical activity even when the specific target of a chemical-induced toxicity is not included in the assay suite, because testing in a concentration–response format may allow the detection of chemical promiscuity at higher concentrations. Due to conservation of protein structure within families, it is somewhat more likely that a chemical will affect other closely related family members, although with different affinities. These may serve as assay surrogates for the actual target and may still be useful in developing signatures of toxicity.
FIGURE 1.1 Distribution of assay categories in the ToxCast Phase I testing battery.
The use of cellular assays provides a means to include large numbers of potential targets concurrently in a more physiologically relevant format. Such assays usually rely on coordinated signaling networks to carry out the downstream function being measured, for example, cell proliferation. The pathways regulating cell proliferation contain many nodes potentially susceptible to chemical perturbation: growth factor receptors on the plasma membrane, kinase second messengers transmitting the growth signal to the nucleus, the transcriptional regulatory and protein synthesis machinery, the mitotic spindle apparatus, cytoskeletal components, and associated regulatory enzymes. It is because of this complexity that cell proliferation has been described as one of the most sensitive endpoints for in vitro toxicology [35]. Endpoints may be more narrowly defined, such as mitochondrial function or DNA damage, but these too have many potential upstream targets. Thus, cellular assays in general lack the ability to clearly identify molecular mechanisms of action. They do, however, put molecular targets in a more physiologically relevant context than is generally found in cell-free biochemical assays. A valuable strategy combines a biochemical assay, in which chemical specificity can be defined, with a cellular assay, in which functional efficacy can be demonstrated. The assays used in the ToxCast program provide both broad coverage of toxicity targets and opportunities to define cellular efficacy for chemicals active in the biochemical screens.
Choosing the appropriate cell type for HTS assays supporting predictive toxicology is important to the success of the approach. Many factors need to be considered, and these vary depending on the goal of the assay. For measuring the ability of a chemical to perturb a specific molecular target such as a kinase or a nuclear receptor, standard immortalized cell lines that give robust, highly reproducible results may be appropriate. For determining the effects of chemicals on complex signaling pathways, however, such cells may be of little value if those pathways were altered during immortalization and adaptation to growth under standard culture conditions. In that case, primary cells may offer distinct advantages and provide more physiologically relevant information for predicting in vivo effects [36,37]. Primary cells have their own limitations, however: limited passage numbers, batch-to-batch variation, difficulty of engineering to introduce reporter genes, and smaller signal-to-background ratios for the endpoints being measured.
To use HTS data effectively in computational toxicology, it is very useful to acquire complete datasets, meaning that all chemicals are tested against all assays in the set, and to define standard data-handling and analysis procedures. The ToxCast project tested the defined chemical libraries described earlier against suites of in vitro assays in a concentration–response format. All chemicals were run in all assays, because minimizing missing values in the data matrix greatly enhances the value of the dataset for subsequent analysis. Screening results were used to generate an AC50 value, the concentration at which an assay is activated or inhibited by 50% relative to control values, for each chemical–assay pair. The AC50 is somewhat arbitrary in that it often has no direct toxicological interpretation. However, it provides a means of comparing chemicals within an assay, serves as a flag for activity of a chemical in a given assay, and indicates the chemical's general potency range.
The concentration–response curves for ToxCast are modeled with the four-parameter Hill equation [38] implemented in the R programming language [39]. Heuristics are employed to handle assay results for which the Hill fit fails. The reasons underlying curve-fitting failures may apply to all assays or be specific to a given assay or platform. For example, results showing no concentration-dependent increase in activity but rather maximal activity at all concentrations tested must be flagged with an AC50 below the lowest concentration tested. Assays susceptible to cytotoxicity at high concentrations often need the responses obtained at those cytotoxic concentrations removed before curve fitting. Since no single curve-fitting method will be universally accepted across such a wide variety of biological assays, it is important to make the process transparent and to provide access to the underlying unprocessed data so that others can apply their own techniques.
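A minimal sketch of a four-parameter Hill fit with a flat-response heuristic is shown below, here in Python with SciPy rather than the R implementation the ToxCast pipeline actually uses. The activity thresholds and the specific heuristic are illustrative assumptions, not ToxCast's actual rules:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, log_ac50, slope):
    """Four-parameter Hill model, parameterized on log10(AC50)."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ac50 - np.log10(conc)) * slope))

def fit_ac50(conc, resp):
    """Fit the Hill model and return (status, AC50), with a simple heuristic.

    If the response is near-maximal at every concentration tested, the
    curve cannot be fit meaningfully, so the AC50 is flagged as below
    the lowest tested concentration. Thresholds here are illustrative.
    """
    if resp.min() > 0.8 * resp.max() and resp.max() > 50:
        return "below_lowest_conc", conc.min()
    p0 = [resp.min(), resp.max(), np.log10(np.median(conc)), 1.0]
    try:
        popt, _ = curve_fit(hill, conc, resp, p0=p0, maxfev=5000)
        return "fit", 10 ** popt[2]
    except RuntimeError:
        return "fit_failed", None

conc = np.logspace(-2, 2, 8)                      # eight test concentrations, e.g. uM
rng = np.random.default_rng(0)
resp = hill(conc, 0.0, 100.0, 0.0, 1.0) + rng.normal(0, 2, conc.size)
status, ac50 = fit_ac50(conc, resp)
print(status, round(ac50, 2))                     # AC50 recovered near 1.0

flat = np.full_like(conc, 100.0)                  # maximal activity everywhere
print(fit_ac50(conc, flat)[0])                    # below_lowest_conc
```

Fitting log10(AC50) rather than AC50 itself keeps the parameter well scaled across the several orders of magnitude spanned by typical test concentrations.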
The combination of chemicals and assay AC50 values defines the basic data matrix required as input for computational toxicology. Value can be added to the matrix through additional metadata. One very useful type of metadata maps each assay to its target gene, which in turn is tied to biological pathways annotated in databases such as GO or KEGG [40]. This bioinformatics approach links chemical effects to biological pathways, providing an additional connection to toxicity endpoints. The annotation is relatively straightforward for most biochemical assays targeting single proteins. Annotating cellular assays properly is more challenging, however, since the specific molecular target of the assay is often not known. In some cases, specific biological pathways can be used to annotate the assay endpoint. This approach will be illustrated in Sections 6 and 7.
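The assay-to-gene-to-pathway mapping can be sketched as follows. The assay names, annotations, and AC50 values below are hypothetical placeholders loosely modeled on ToxCast naming conventions, not actual data:

```python
# Hypothetical assay-to-gene annotations, for illustration only; real
# ToxCast assays are mapped via curated gene and pathway databases.
assay_to_gene = {
    "NVS_NR_hER": "ESR1",        # receptor binding assay (placeholder)
    "ATG_ERE_CIS": "ESR1",       # transactivation reporter assay (placeholder)
    "NVS_ENZ_hPDE4A1": "PDE4A",  # enzyme inhibition assay (placeholder)
}
gene_to_pathway = {
    "ESR1": "estrogen signaling",   # stand-in for a GO/KEGG annotation
    "PDE4A": "cAMP signaling",
}

# AC50 data matrix: chemical -> {assay: AC50 in uM, None = inactive}.
# Values are invented for illustration.
ac50_matrix = {
    "chemical A": {"NVS_NR_hER": 3.2, "ATG_ERE_CIS": 5.1, "NVS_ENZ_hPDE4A1": None},
}

def pathways_hit(chemical):
    """Pathways linked to assays in which the chemical was active."""
    hits = ac50_matrix[chemical]
    return sorted({gene_to_pathway[assay_to_gene[a]]
                   for a, ac50 in hits.items() if ac50 is not None})

print(pathways_hit("chemical A"))  # ['estrogen signaling']
```

Collapsing assay-level hits into pathway-level annotations in this way is what lets activity patterns, rather than single assays, serve as candidate signatures of toxicity.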
Development of a complete, well-annotated data matrix consisting of curated chemical structures and their activity against well-characterized biological targets is the core component of a computational toxicology approach. Such a dataset could be the final product for a predictive toxicology effort, if the biological assays were all highly validated surrogates for in vivo toxicology. However, as previously discussed, few such validated targets exist. Therefore, one needs to identify which assays or groups of assays are linked to toxicity endpoints and can serve as signatures for in vivo toxicity. We thus focused much of our early ToxCast screening on chemicals with rich in vivo toxicity information to use as an anchor for our in vitro results. The development of the in vivo database to support this effort will be described next.
1.5 IN VIVO TOXICITY DATABASE
