Poet, philosopher, novelist and former physician, Raymond Tallis is one of the world's foremost scientific philosophers. In this book, he brings together his diverse intellectual interests to address profoundly important questions about our well-being. Hippocratic Oaths blends philosophy with public opinion, polemic and personal experience to bridge the disjunction between the health care we believe we are entitled to expect, and the difficult realities of what is possible. In a series of fiercely stimulating and impassioned arguments, Tallis looks at the truth behind public health scares; why we continue, incorrectly, to treat our bodies as if they were machines, separate from ourselves; and why the popularity of alternative therapies is bad for doctors and patients alike. Hippocratic Oaths is the summation of a lifetime's thought and medical practice, by one of the most singular stars in the British scientific firmament. It will, quite simply, change forever the way you think about yourself and your health.
Hippocratic Oaths
First published in trade paperback in Great Britain in 2004 by Atlantic Books, an imprint of Grove Atlantic Ltd.
This edition published in Great Britain in 2014 by Atlantic Books Ltd.
Copyright © Raymond Tallis, 2004
The moral right of Raymond Tallis to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior permission both of the copyright owner and the above publisher of this book.
Every effort has been made to contact copyright holders.
The publishers will be pleased to make good any omissions or rectify any mistakes brought to their attention at the earliest opportunity.
ISBN 9781782396512
A CIP catalogue record for this book is available from the British Library.
Atlantic Books Ltd.
Ormond House
26–27 Boswell Street
London WC1N 3JZ
www.atlantic-books.co.uk
For Mahendra Gonsalkorale and Bill Sang
in friendship, gratitude and admiration.
Contents
Acknowledgements
A (Very) Personal Introduction
PART ONE Origins
1 The Medicine-taking Animal: a Philosophical Overture
2 The Miracle of Scientific Medicine
3 The Coming of Age of the Youngest Science
PART TWO Contemporary Discontents
4 Communication, Time, Waiting
5 Power and Trust
6 Enemies of Progress
7 Representations and Reality
PART THREE Destinations
8 ‘Meagre Increments’: the Supposed Failure of Success
9 The End of Medicine as a Profession?
10 ‘Everyone Has To Die Sometime’
Envoi
Notes
Index
Acknowledgements
This book owes its existence to the enthusiasm and encouragement of Jacqueline Korn of David Higham Associates, my agent, and of Toby Mundy of Atlantic Books. Toby it was who suggested the title of the book. Many thanks to both of you.
I am even more indebted to Louisa Joyner for her brilliant editorial work. Her eye for detail, combined with her clear understanding of the big picture, has resulted in countless suggestions that have dramatically improved Hippocratic Oaths from the original manuscript. Louisa, I can’t thank you enough. Louisa’s work has been complemented by the superb copy-editing of Jane Robertson, to whom I am enormously grateful for much judicious textual liposuction, vital structural changes, an intelligent scepticism that has tempered some of my more passionate outbursts, and an unremitting attention to important minutiae.
Finally, thanks are due to my secretary, Penny Essex: (a) for putting up with me not only during the gestation of Hippocratic Oaths but also throughout the preceding decade and a half, in which she has given me wonderful support; and (b) for chasing up many elusive references, often on the basis of vague and/or misleading information.
A (Very) Personal Introduction
Nothing could be more serious than the care of ill people, nor more deserving of intelligent discussion. Few topics attract such media coverage; the National Health Service is never far from the top of the political agenda; and most people regard good health – and access to first-class care when they fall ill – as supremely important. It is, therefore, regrettable that discussion of medicine – of medical science, of clinical practice, of the profession itself – is frequently ill-informed. Comment is often shallow, even when it is not riddled with errors of fact, interpretation or emphasis. Reactive, piecemeal and disconnected from the big picture, much analysis lacks historical perspective and ignores the complex reality of medical care.
Notwithstanding all the books, column inches, air-time and screen-time devoted to it, therefore, the practice of medicine remains virtually invisible. Hippocratic Oaths, which contemplates the art of medicine from a broad perspective while not losing sight of the details, aims at making medicine more visible. This is worthwhile not only because scientific medicine is one of the greatest triumphs of humankind, but also because illness is potentially a mirror, albeit a dark one, in which we may see something of what we are, at the deepest level. Making medicine truly visible may cast some light on the greater mystery of what it is to be a human being. That mystery is the starting point of this book.
Medicine, objectively, has never been in better shape. Its scientific basis, the application of this science in clinical practice, the processes by which health care is delivered, the outcomes for patients, the accountability of professionals, and the way doctors and their patients interact with each other – all have improved enormously even during my thirty years as a practitioner. Yet the talk is all of doom and gloom: short memories have hidden the extraordinary advances of the last century. The danger is that endless predictions of crisis may become self-fulfilling by making the key roles of doctor and nurse deeply unattractive. This would be a disaster, given that further progress will require more, not less, medical and nursing time.
The curious dissociation between what medicine has achieved and the way in which it is perceived originates outside of medicine itself. While medical practice is continuously improving, it has not kept up with patients’ rising expectations. Many things are much better than they were, but few things are as good as people have been led to expect. Changes in patients’ expectations reflect changes in the world at large. What is more, there is a tension between the consumerist values of society and the values that have hitherto informed medicine at its best; values that have driven its gradual transformation from a system beleaguered by fraud, venality and abuse of power1 to a genuinely caring profession whose practices are informed by biological science and underpinned by clinical evidence.
Hippocratic Oaths does not pretend to be a comprehensive account of medicine or even of its current troubles. I have aimed at depth rather than breadth. I examine the institutions of medicine and their present discontents in a series of essays – in some cases prompted by particular events or personal experiences. The book is a triptych: the large middle section deals with present discontents. It is flanked by panels that deal, respectively, with the origins and the destination of the art of medicine.
Though many of its reflections are cast in an impersonal form and address matters of public interest, Hippocratic Oaths is deeply personal. I believe that medicine is in danger of being irreversibly corrupted. This threat comes not from within (where its values are struggling to survive) but from society at large. The most serious dangers emanate from those for whom the moral high ground is a platform for self-advancement, many of whom have never borne, nor ever been willing to bear, the responsibilities that weigh on the daily life of practitioners. The unthinking voices of those who have a shallow understanding of the real challenges of medicine (and an even shallower appreciation of its achievements) will make patient care worse, not better. Their influence already threatens to bring about a disastrous revolution in the values and attitudes of health-care professionals: if we are not careful, the patient-as-client will receive service-with-a-smile from a ‘customer-aware’, self-protecting doctor delivering strictly on contract. If the current debased public perception is not challenged, medicine may become the first blue-collar profession, delivered by supine, sessional functionaries. This will not serve the longer-term interests of people who fall ill.
Everyone agrees that we need to rethink medicine, in particular its relationship to society at large. This book offers an introduction to that rethink. We need to take a long view and to unpeel the layers of second-order discussion that takes so much for granted and has hidden the reality of a deeply human, and humane, profession. Only on the basis of an appreciation of what has been achieved, and a better understanding of the ends, aims and ultimate limitations of medical care, shall we be able to begin an intelligent examination of the present discontents and the future path; and arrive at a clearer understanding of what might be expected of medicine and of those who deliver medical care.
This book is dedicated to two of the many admirable people I have worked with in my thirty-two years in the NHS. Mahendra Gonsalkorale has been a consultant colleague for sixteen years. His many patients and colleagues, including myself, have benefited from his energy, cheerfulness, clinical expertise and wisdom, moral support and conscientiousness. Bill Sang is a manager whose ability to keep the larger vision in view while attending to the small details has been an inspiration. It is such people who keep the NHS afloat despite the misguided interventions of those many ill-informed individuals who wish to ‘save it’. However, neither Mahendra nor Bill would agree with everything in the pages that follow: both would be more philosophical about many of the things that cause me to bite the carpet. But they share my passion for public service and for the supremely serious calling of medicine – a passion which, over the years, has prompted them to work all the hours God made and some He has not thought of yet.
Perhaps this book should have another dedicatee: the students of Manchester Medical School, which still attracts the best and brightest. It is upon such people that the future of medicine will rest. If their sense of medicine as a calling is not destroyed, they will be doing their best for sick people in the dark hours when the hostile critics of the profession are chattering away at their dinner parties or safely tucked up in bed.
PART ONE
Origins
Nor dread nor hope attend
A dying animal;
A man awaits his end
Dreading and hoping all;
Man has created death1
From ‘Death’ by W.B. Yeats
A crushed beetle pedals the air for a while before expiring. A wounded snake slithers to a dark place and dies. A sick dog mopes, eats grass, vomits, and waits. A cat with a damaged paw licks it incessantly. Chimps are a little more sophisticated: they sometimes dab leaves on a bloody wound. This is as far as ‘animal medicine’ goes. If medicine is ‘the provision of special care to a sick individual by others’,2 there are no examples in the animal kingdom. The closest that non-human creatures get to physicianly attention is picking ticks off each other’s backs.
William Osler’s ironical definition of man as ‘the medicine-taking animal’ is therefore justified inasmuch as it captures something distinctive about humans. It is, however, inaccurate for interesting reasons. First, taking medicine is only a recent characteristic of the species. While hominids started parting company from the beasts several million years ago, taking medicine might not have begun until 10,000 years ago. We cannot be sure of this, of course. The behaviour and institutions of our ancestors prior to the invention of writing can only be guessed at. Evidence about the beginning of medical care is bound to be tenuously inferential. For the taking of medicines is not just a matter of ingesting material of therapeutic benefit as when a dog eats grass. And this is the second reason for qualifying Osler’s assertion: there is a vast cultural hinterland to the popping of the most ordinary pill.
1
The Medicine-taking Animal: a Philosophical Overture
Medicine-taking has roots in many different quarters of individual and collective human consciousness. Swallowing a safe pill makes sense only in the context of a recognized system of knowledge and belief, which encompasses many things: the significance of the suffering that prompts the search for relief; the structure and function of the human body and the means by which they may be changed; and numerous sciences, such as organic chemistry, pharmacology and industrial chemistry. What is more, it is part of a tapestry of social arrangements ensuring the dissemination of expertise in the prescription and administration of medicines, involving the division and sub-division of labour, the development of institutions, the creation of numerous forms of material infrastructure, and networks of agreements based on trust or contracts. The least-considered therapeutic action draws on fathomless aquifers of implicit knowledge, understanding, custom and practice.
In the next chapter, I will sketch the long journey that led us to this point where humanity began to pop its pills. For the present, I want to focus on the beginning of the journey. While whatever it was that made us takers of medicine sits at the heart of the difference between ourselves and animals, the science which gives scientific medicine its efficacy comes from seeing sick persons as if they were stricken animals. There is therefore a paradox: as medicine-takers we are not organisms but complex selves; but the effectiveness of the medicine we take is owed to a view of ourselves as organisms.1 If we are to place medicine and its present discontents in perspective, and understand both its achievements and its limitations, we must bear this paradox in mind: medicine’s triumphs are rooted in a biological understanding of sickness while the science, the art, the humanity of medical care is a supreme expression of the distance of humans from their biology.
Humane, scientific medicine is a (very) recent manifestation of the special nature of a creature who, uniquely among sentient beings, has knowledge. Knowledge – articulated or propositional awareness formulated into factual information and abstract general principles – is utterly different from the sentience that all conscious animals (including human beings) possess. Medical expertise is a peculiar development of knowledge: it is directed upon the body of the knower, who is in the grip of the least mediated form of awareness, namely bodily suffering. It is hardly surprising, then, that truly scientific medicine is less than a hundred years old. It has taken a long time for ‘the knowing animal’ to look dispassionately at his own body, the place where knowledge first awoke.
No one, I expect, will count it a revelation that the practice of medicine is a manifestation of the special consciousness of human beings; that the reason sick ducks don’t go to quacks is that they have a fundamentally different relationship to the world in which they live. This special consciousness, however, is worth examining because it contains not only the seeds of medicine but also the origin of the tensions that have always beset medical practice.2
At the root of the innumerable differences between animals that merely live (and suffer illness) and human beings who lead their lives (and, for example, seek help from doctors) is a difference in their relationship to their own bodies. The animal lives its body; the human being not only lives its body but also explicitly and deliberately utilizes it, possesses it and exists it. (The awkward transitive is intended to reflect the active expropriation of the human body by its ‘owner’.) This difference originates in the emergence, several million years ago, of the full-blown hand which acquired the status of a tool. This proto-tool has a wider instrumentalizing effect: it makes both the hominid’s body and the surrounding world into a potential tool kit. Several other consequences follow. The hand tool, which instrumentalizes the body and its world, not only awakens a sense of agency but also suffuses the organism with a sense of ‘am’. The consciousness of the organism is transformed into a subject: the subject is ‘within’ the body, not entirely merged with it (as in the case of a sentient animal) but, in a sense, ‘owning’, ‘having’, ‘possessing’, ‘utilizing’ it. As subjects, we experience our bodies as objects as well as suffering them as more or less invisible destiny. This is not to imply that we are separate from our bodies as Descartes imagined. Human consciousness can never be entirely liberated from the flesh; on the contrary, humans assume their bodies, or parts of them, as themselves and use other parts of their bodies to serve their purposes as tools, as means to action.
Within the human body there are many layers of subjects and objects, of agents and tools. These primordial corporeal tools are, of course, supplemented by extra-corporeal tools of ever greater complexity, requisitioned for a variety of purposes. The proto-tool that is the hand instrumentalizes not only the body but the world outside of it. As the philosopher Martin Heidegger said, the world with which the human subject engages in busy everyday life is, to a greater or lesser degree, a nexus of tools or of potential tools – what he called ‘the ready-to-hand’.3
The sense that those tools are ‘objects’, that they have properties in themselves that are not entirely dissolved into their relationships with the user, lies at the root of science. The fundamental intuition of science is that the things that lie around us are only partially open to direct scrutiny: they have something ‘in themselves’ that is beyond the direct deliverances of our senses; and there is, therefore, more sense to be made, more to be known. What John Dewey called the ‘active uncertainty’ of human enquiry – systematized in the multifarious enterprises of science – owes its origin to this feeling that objects have a reality beyond their immediate appearance.
The world experienced by the merely sentient animal has no objects (objects in themselves) because the creature is not fully developed as a subject. Consciousness of self – which is not present, except perhaps fleetingly, in other animals4 – makes apparent to the human creature the incomplete transparency of its own body. A human being’s encounter with its own body as an object lies at the origin of object knowledge: the intuition of one’s own body as being only partially available to oneself, intensified by an increasing awareness of oneself as a subject, awakens the uniquely human sense of living in a world comprised of objects of incomplete scrutability. Incomplete identification with one’s own body lies at the basis of the intuition that eventually gives rise to objective or factual knowledge.
As knowledge grows, its relationship to sense experience becomes less direct. This is in part because knowledge is a collective or collectivized form of awareness: whereas sentience is solitary, knowledge is always actually or potentially shared. The collectivization of awareness is most obviously underpinned by language. Language, however, is a relative newcomer: the socialization of awareness, and the transformation of the spatial cohabitation of beasts into the more complex modes of togetherness of human societies, was originally mediated by the tools that were suggested by, or extensions of, the proto-tool that is the hand.
There are obvious ways in which tools might facilitate socialization, indeed collectivization, of human consciousness; for example, they are held in common and they are publicly visible. More fundamentally, they symbolize the needs they serve, making problems and solutions visible in shared space. More fundamentally still, they embody and signify those needs in a generalized way. Tools are consequently proto-linguistic; forerunners (by several million years) of the signs of language. It is no coincidence that the demands made on the brain by tool use are similar to those that are required for language.
This philosophical excursus is meant to underline the wide, deep gap between man, the medicine-taking non-animal, and non-medicine-taking animals – between leaf-dabbing chimps and pill-popping humans. While medicine has much in common with many other complex human practices, it is rather special. Although the body apprehended by the human subject may have been the primordial object, or the primordial bearer of object-sense, treating the body itself as an object among objects, an object like any other – the necessary precursor of systematic medicine – was a late development. The collectivization and intellectualization of human consciousness was well advanced before there arose the fully developed notion of the human body as an object – and subsequently as an object of care to which abstract knowledge might be applied.
The transition from sentience to self-awareness, from sense experience to object knowledge, is the ultimate source of the medical gaze in which our bodies are objects of knowledgeable care. It seems doubtful that any animal ‘worries’ about falling ill or interprets abnormal sensations or bodily failings, with or without an evident external cause, as ‘symptoms’. Animal suffering is present experience and not a sign of possible future experiences, or future bodily states. Conceiving of her body as a vulnerable organism, with an endangered future as well as an uncomfortable present, requires an individual human being to be at once outside of her body and identified with it; to be its subject and at the same time see it objectively; to suffer it as her being and know it as an object.
This is what lies at the bottom of Yeats’ seemingly paradoxical assertion that ‘Man has created death’. The animal who created death also invented disease, labelling decay, or the heightened possibility of it, with the names of sicknesses, and invented medicine to postpone the one and ward off and treat the other.
The cognitive pre-history of medicine is, of course, unwritten. The written record shows how long and difficult was the subsequent journey to scientific medicine. We shall examine this journey very briefly in the next chapter, with the primary purpose of demonstrating that it was by no means inevitable that it should have reached its present remarkable destination. If the phenomenon of human knowledge is ‘the greatest miracle in the universe’,5 medical knowledge – pre-scientific and scientific – is one of the most extraordinary manifestations of that great miracle. It required much cognitive ‘self-overcoming’ on the part of humanity.
2
The Miracle of Scientific Medicine
These conquests have been made possible only by a never-ending struggle against entrenched error, and by an unflagging recognition that the accepted methods and philosophical principles underlying basic research must be constantly revised… Disease is as old as life, but the science of medicine is still young.
Jean Starobinski1
The long journey to biomedical science
I have described some conditions necessary for the emergence of Homo therapeuticus. They are not, of course, sufficient in themselves nor are they specific to medicine. Indeed, the process of placing medicine on an objective basis is not complete even today.2 While we do not know how recent medicine-taking is, we do know that scientific therapeutics is little more than a century old.
It is hardly surprising that the objective inquiries of Homo scientificus should have been directed rather late to the human body – to the body of the inquirer. Since it is out of our special relationship to our bodies that knowledge has grown, the pursuit of objective knowledge about the body and its illnesses requires a return to the very place where knowledge first awoke. Somewhat less esoterically, we may anticipate that the body ‘we look out of’ should be the kind of object we are most likely to ‘look past’. It is something that we are as well as something we know or use; mired in subjectivity, it was a late focus for systematic objective inquiry. Humans found it easier to assume an objective attitude towards the stars than towards their own inner organs: scientific astronomy antedated scientific cardiology by thousands of years.
Scientific medicine required the assumption of an attitude to the human body similar to that which physical scientists had adopted towards other objects in the world: a ‘depersonalization’ and ultimately ‘dehumanization’ of the human body. (None of these terms is meant pejoratively: they are all necessary conditions of effective – humane and non-fraudulent – medical care.) Progress was neither smooth nor swift. Even less was it inevitable. ‘Physic’ had to extricate itself from a multitude of pre-scientific world-views. Other sciences had had to negotiate such obstacles: the heliocentric theory and the notion of the elliptical orbits of the planets, for example, faced opposition from theologically based ideas about the proper order of things, and how, consequently, God would order them. They had to displace more intuitively attractive notions of the principles governing the movement of objects. In the case of knowledge of the body and its illnesses, resistance to objective understanding was particularly intimate and adherent. The brief observations that follow are not intended even to outline all the steps leading to the forms of medicine we know today. Their purpose is solely to emphasize what had to be overcome during the passage from the first therapeutic intuitions to scientific practice.
In the earliest recorded phase of medicine, sickness was attributed to ill will, malevolent spirits, sorcery, witchcraft and diabolical and divine interventions. Illness and recovery were interpreted in providential and supernatural terms.3 Illness was about persons rather than bodies and was often seen as punishment.
The secular world-views postulated in early Greek science opened up the possibility of a naturalistic understanding of illness. ‘Natural causation theories which view illness as a result of ordinary activities that have gone wrong – for example the effects of climate, hunger, fatigue, accidents, wounds, or parasites’4 began to displace ‘personal or supernatural causation beliefs, which regarded illness as harm wreaked by a human or superhuman agency’. The so-called ‘sacred disease’ – epilepsy – was, the Hippocratic writers argued, nothing of the kind: it was caused by phlegm blocking the airways, and the convulsions were the body’s attempt to clear the blockage.5 Crucially, the body was seen to be subject to the same laws as the world around it: it was a piece of nature. The theory of the four humours (blood, phlegm, choler and black bile), which corresponded directly with the four elements of nature (fire, water, air and earth), and dominated thinking from Hippocrates in the fifth century BCE to Galen in the second century CE, expressed this naturalistic approach. The aim of the doctor was to restore the balance of humours when it was disturbed. Analogous ideas held sway in Indian and traditional Chinese medicine.
The replacement of transcendental by naturalistic (though still intuitive) ideas of illness was an enormous step. It did not, however, bring real progress, except in so far as it removed a justification for inhumane attitudes to sick people. The step from intuitive theories of illness to science-based ones was as great as that from transcendental to naturalistic accounts of disease. It built on the Hippocratic denial of the ‘sacred’ nature of disease – and of the body that suffered from it – and allowed a new conception of illness, upon which European medicine was founded. In the sixteenth century we see this new conception active in the pursuit of the anatomical, physiological and pathological knowledge which eventually led to European medicine becoming, on account of its singular efficacy, world medicine.
Two events are crucial: the publication of Vesalius’ great anatomical textbook, De Humani Corporis Fabrica (1543) and William Harvey’s De Motu Cordis (1628). Both authors described how the body looked when exposed to the unprejudiced, undazzled gaze, what its structure was and how it, or part of it, might function. Cartesian dualism, which separated the spiritual from the natural in the human person, endorsed the mechanistic view of the body that was implicit in the work of proto-biomedical scientists such as Vesalius and Harvey. The idea of the body as a carnal machine emerged as an intellectual framework for a systematic investigation of its component mechanisms. The development of physics and chemistry from the seventeenth century onwards furnished the concepts, insights and facts necessary to translate general ideas about bodily mechanisms into specific accounts of how various parts – organs, systems, cellular components – worked. (The verb is itself illuminating.) Metaphors from the technology of the time – mechanical, hydrodynamic, and later electrical – fed into the modelling process.
This desacralization, which permitted the body to be examined as a set of mechanisms and illness to be understood in terms of disorders of those mechanisms, was supported by another, not entirely distinct, intellectual trend: that of de-animation. Underpinning de-animation was the discrediting of vitalism – the assumption that living tissues and non-living matter belonged to irreducibly different orders of being. The demonstration that organic substances such as urea (the end-product of protein metabolism in many species), hitherto obtained only from living creatures, could also be synthesized out of inorganic substances was a crucial step in the development of organic chemistry (a revealing hybrid) and eventually its mighty offshoot, biochemistry. The examination of non-living components of living tissues (isolated organs, cells, individual chemical substances) emerged as the high road to understanding health and disease.
While it was accepted long before Darwin that human health and disease could be illuminated by studies and experiments performed on animals, The Origin of Species provided blanket justification, if it was needed, of extrapolation from animals to humans. Since Homo sapiens was the product of the same processes as other species, there could be no principled limit to the applicability of animal research to human beings. While there were differences between species, similarities were more important. Biomedical sciences, which could progress faster on the basis of animal experiments, envisaged human beings as organisms like any other. The physiological or biochemical parameters that signified sickness or health were similar in monkeys and monarchs.
The sick body, a damaged carnal machine operating in accordance with the laws of physics and chemistry, is a far cry from the man or woman punished by the gods for some private peccadillo or ancestral wrong. Scientific medicine minimized the personal element in illness: disease was a manifestation of general biological processes. Illness, which could ultimately be understood in biochemical, chemical or even physical terms, was not only impersonal but in a sense inanimate. The component mechanisms were remote from the living, breathing, animate whole organism, and even more remote from the suffering endured by the whole person.
Each of these steps – desacralization, de-animation, dehumanization, and depersonalization of illness – which of course overlapped both conceptually and temporally, represents a huge collective leap of understanding. The consequences have been entirely benign: not only treatments that are effective to a degree unimaginable by our predecessors but also humanization of medical care. Priestly authorities, supposed representatives of vast invisible forces, and bearers of terror, were banished from the sickbed. Gratuitous cruelty inflicted by those pretending to intercede on behalf of the sick, often justified by the ill person’s supposed responsibility for her illness, had no place in scientific medicine. Healing (notwithstanding the complaints that will be discussed in later chapters) was separated from amorphous or pervasive power – the power of priests and shamans and of the social order they support. The obverse of this was the increasing accountability of healers – a trend which led to the establishment of regulatory authorities which policed the behaviour of healers and monitored their procedures and outcomes against collectively agreed professional and ethical standards.
One of the healthiest features of scientific medicine was the separation in time between the acquisition of knowledge (of the body and its ailments) and the ability to use such knowledge to effect cures. Biomedical science did not at once translate into science-based medical practice. It was recognized that true science was full of disappointments while only charlatans hit the jackpot every time. The disappointments were salutary: they undermined the intuitive certainties that had arrested progress. Uncertainty as to whether even robust knowledge would lead to effective treatments dissolved the priestly ‘knowledge-healer-authority’ complex. There was also disciplinary separation: the rise of the non-clinical biomedical scientist meant that those who generated the knowledge were not necessarily those who applied it.
Medicine, as Jean Starobinski pointed out, is still a young science. The dissolution of the ‘knowledge-healer-authority’ complex is not yet complete. Even now, effective practitioners have something of the charismatic healer mixed with the scientific doctor. A doctor brings personality as well as knowledge to the bedside. The rise of scientific medicine, however, put the instilling of confidence on the basis of personal authority in its proper place. This diminishing reliance on personal authority is healthy – and unique to modern Western medicine.
Another, equally profound, consequence of the rise of scientific medicine was the increasing distance between knowledge of the body and of sickness and intuitive or common-sense understanding of disease. Science, as Lewis Wolpert has pointed out, is deeply counterintuitive, to the point of being unnatural.6 To import that ‘unnatural’ standpoint into the body, where knowledge and understanding began, was an extraordinary achievement. A striking example is the understanding of the circulation of the blood. The beating of the heart is something we all experience; whereas the surprising fact that the blood circulates around the arteries and veins and through the capillaries had to be realized by an individual of genius. For less than a ten-thousandth part of the millions of years that hominids have been aware of the beating of their hearts have they known that the blood that is set in motion by these pulsations is circulating around their bodies.
From modest counter-intuitive beginnings such as this, a vast continent of knowledge about the body and its blood has grown. The dependence of my well-being upon, for example, my blood pressure or the level of potassium in my serum will not be something I can perceive by means of introspection. Biomedical science knows things about me in general that I could not directly intuit. ‘The heart’, Pascal said, ‘has reasons that reason knows not.’ Scientific medicine has taught us that the body has mechanisms that the embodied know not. It undermines both personal and socially mediated preconceptions.
The discrediting of common sense as a guide to understanding ill-health has profound connections with one of the most impressive and powerful engines of knowledge acquisition: scepticism and a willingness to live with, indeed to prolong, uncertainty. The sceptical physician is no less passionate about bringing the quest for cures to a successful conclusion than the traditional healer, but he is able to separate his passion from his procedures and his conclusions. This preparedness to expose ideas and claims to objective testing gradually permeated clinical medicine. (Though, as we shall see, only recently has it become ubiquitous.) Nietzsche’s aphorism that ‘convictions are greater enemies of truth than lies’ identifies by default the drivers of true progress. At its edges, scientific medicine is in constant quarrel with itself. Unlike traditional medicine, it does not take the antiquity of its ideas as independent evidence of their truth and efficacy; on the contrary, every assumption and assertion is to be tested and re-tested using ever more ingenious methodologies. Its cumulative body of reliable knowledge is the product of permanent civil war.
While scientific medicine had to advance in the teeth of prior (theological and other) convictions, it had also to overturn immediate (‘common sense’) and mediated (‘cultural’) intuitions about the nature of health and disease. What is more, these intuitions were often supported by systems of thought, themselves backed up by institutions with authority, power and menaces, and by the less organized forces of deception and self-deception. On top of all this, it had to insert longer and longer chains of argument, knowledge, and expertise between the body and its care for itself. Medical science has transformed the self-consciousness of the hominid body into a vast corpus of mediated understanding. Let me illustrate this with a personal example.
A little while back, I came to believe that I had dyspepsia due either to a stomach ulcer or to a reflux of acid into my oesophagus. I arrived at this seemingly straightforward conclusion as a result of accessing a body of knowledge and understanding that had taken many centuries to assemble. The first intimation that this might be my problem was noticing that my recurrent discomfort had a certain pattern. I was able to match this pattern against a variety of conditions whose naming has been the outcome of a vast effort of conceptualization and empirical research. My interrupted interior monologue as to what the pain might mean drew on facts and concepts emerging out of the cooperative effort of many thousands of people scattered over widely disparate times and places.
In order to test my diagnosis, I undertook a therapeutic trial of lansoprazole, a drug for dyspepsia. This seemingly simple act was not, of course, at all simple. Inserting the pill into my mouth was an act whose rationale drew on many disparate realms of intellectual achievement and human endeavour and indirectly involved many institutions, professions and trades. The manufacture, packaging and transport of the pills (which, I see, have been imported from Italy) engage many kinds of expertise, each of which incorporates and presupposes other forms of expertise. Some of these lie outside of strictly medical knowledge: the technologies of invoicing, lorry manufacture, the synthesis of plastic capsules, automated packaging, quality control in mass production, all meet in this tablet. James Buchan reflects that a banknote is ‘an outcrop of some vast mountain of social arrangements, rather as the little peaks called nunataks that I later marvelled at in Antarctica, are the tips of Everest buried under miles of ice’.7 This applies a thousand times over to the capsule that I swallowed in the hope of curing my discomfort. While it is true of any manufactured item, as Adam Smith pointed out,8 the distinctive miracle of this example of science-based technology deserves more attention.
Lansoprazole belongs to a class of drugs called ‘proton pump inhibitors’. They prevent the active transport of hydrogen ions (that is to say, atoms of hydrogen minus their electrons) across the semi-permeable membrane that constitutes the lining of some of the cells that coat the stomach wall. The point of proton pump inhibition is to switch off the secretion of hydrochloric acid. While hydrochloric acid has a role in creating an environment favourable to the first stage of digestion of food, it may also attack the lining of the very organ from which it is secreted, causing peptic ulcers, or alternatively wash up into the oesophagus, causing reflux oesophagitis. Each of these terms – proton, active transport, semi-permeable membrane, hydrochloric acid, digestion, reflux oesophagitis – is a node in a web of countless concepts, and the product of discussion spread over vast numbers of papers and presented in numerous scientific meetings and letters and corridor conversations. The pill is a meeting point of many hundreds of nunataks, the tips of Everests of discovery and their technological application.
In order to appreciate the complexity of the scientific discourse I have glanced at, consider some of the terms I have employed. For example, the notion of ‘a proton’ comes from fundamental physics; the concept of a semi-permeable membrane from physical chemistry; that of active transport from biochemistry; that of acid secretion from physiology (and some famous experiments); and the esoteric idea of proton pump inhibition from the pharmacological application of biochemistry. I have not even considered the many layers of the drug delivery system which ensure that it arrives in the right quantity and in good condition at the places in my body where it does its work. Nor have I examined the dovetailing of the different components of the system – the capsule, the blister pack, the cardboard box, the pharmacist, the prescription, the educational institutions that enabled me to prescribe the right tablet – necessary to present the drug to my acid-scorched mucosa.
Scientific medicine delivers – life expectancy
While it is entirely proper to be impressed by the science, technology and sociology of medicine, it is equally proper to ask what it has done for mankind. An account that did any kind of justice to the achievements of science-based medicine would occupy many volumes. I shall settle for a few observations.
The most direct measure of success is postponement of death, and on this medicine has delivered handsomely. Global life expectancy has more than doubled over the last 140 years.9 Nearly two thirds of the increase in longevity in the entire history of the human race has occurred since 1900.10 If we narrow our gaze for a while and look simply at the data for England and Wales in the first fifty years of the NHS,11 the news remains pretty extraordinary. Infant mortality fell from 39/1000 to 7/1000 for girls and from 30/1000 to 5/1000 for boys; and the proportion of people dying before reaching 65 from 40 per cent to 7 per cent. Life expectancy at birth increased by nearly a decade – from 66 to 74.5 for men and from 70.5 to just under 80 for women – during the second half of the twentieth century. If we look at the last century as a whole, the changes are even more amazing. Whereas the proportion of deaths that occurred between 0 and 4 years of age was 37 per cent in 1901, it was 0.8 per cent in 1999; and while only 12 per cent of deaths in 1901 were in people above 75, 64 per cent of all deaths in England and Wales in 1999 were among people over the age of 75.12
Much of this may be attributed to factors beyond medicine narrowly understood. Increasing prosperity, better nutrition, education, public hygiene, housing, health and safety at work, the emergence of liberal democracies protecting individuals against exploitation and abuse, and social welfare policies have all played their part. It is easy, however, to underestimate the contribution of medical science.13 Admittedly, much of the fall in mortality at all ages during the first half of the last century was due to declining death rates from infectious diseases, especially at younger ages, to which the contribution of specific treatments was relatively small. For example, reduced mortality from respiratory tuberculosis (which alone accounted for 20 per cent of the increase in life expectancy of the UK population between 1871 and 1911) occurred before effective treatments and specific preventative measures such as BCG immunization had been discovered. Perhaps as little as two of the twenty-five years of increased life expectancy between 1900 and 1950 in the USA and the UK were directly due to medical treatments.14
It is easy to misunderstand the significance of these facts. The public health measures that reduced premature deaths from infectious diseases were shaped by the rationalistic understanding of disease that owed its origin to an emergent medical science: successful public health is informed by medical as well as social science. It is no coincidence that the steepest declines in deaths from infectious diseases came in the wake of the final decades of the nineteenth century in which Robert Koch and Louis Pasteur raced each other to the identification of the micro-organisms causing tuberculosis, cholera and other decimators of humanity. They placed the germ theory on a firm footing and created a scientific framework for public health measures. The importance of the scientific approach to public hygiene is dramatically illustrated by the contrast between the success of Western attitudes to infectious diseases and the catastrophic and continuing failure of traditional approaches based upon theological and moralistic notions of purity and impurity (for example those that underpin the caste system in India).15
In recent decades, moreover, when public health infrastructure has been a constant and reliable factor, the proportional contribution of specific medical treatments to improved life expectancy in developed countries has risen. The American physician and commentator J. P. Bunker has estimated that about half of the gains in the UK since the inception of the NHS have been due to medical treatments.16 The absolute increase in life expectancy in the developing world and the contribution to this of medicine in the widest sense – medical science, medical practice and what we may call ‘scientific medical intelligence’ or ‘a medically informed outlook’ – is even greater.
More telling still is the fact that life expectancy has continued to rise sharply in older people long after public health measures and social policies have been fully in place: life expectancy at birth increased in the last two decades of the twentieth century by 4.7 years for men and 3.5 years for women;17 male life expectancy at age 65 in England and Wales increased from just under 12 years to just under 15 years between 1970 and 1995 – a little more than in the whole period from 1900 to 1970. (Similarly encouraging figures apply to females.) During this period, estimates indicated that the percentage of people in England and Wales surviving to 85 rose steeply: in males it more than doubled, from 11.4 to 24.2, while in females it increased from 27 to 41. Perhaps most telling of all, life expectancy in the UK in both males and females increased by nearly a year in the first half of the 1990s, when the malign impact of a decade of Tory assault on state welfare was at its height.18
So while scientific medicine is not acting alone, its contribution – once the foundation stones of public hygiene and a welfare state are in place – is proportionately greater. Unless a new politician with Mrs Thatcher’s destructive fervour comes along, this proportionate contribution of medicine to health gains will continue to rise.
Scientific medicine delivers – quality of life
If medicine prolonged life without alleviating suffering, this would scarcely be cause for congratulation. However, the impact of medicine on the traditional sources of discomfort and misery – pain, itch, nausea, immobilization, decay – is even more impressive than its impact on life expectancy. This is true even in old age in developed countries where, as we shall see in Chapter 8, many have (incorrectly) suggested that increased life expectancy has been bought at the cost of increased suffering. The example I want to give here, however, is a recent (and not atypical) triumph in a developing country.
In 1999, the World Health Organization announced the virtual elimination of onchocerciasis, ‘River Blindness’, in much of its West African home.19 This was the result of a programme of control inaugurated in 1974. Blindness is caused by the dead microfilarial larvae of Onchocerca volvulus, produced for up to 15–18 years by the adult worms; when larvae that have migrated into the eye die, their disintegrating bodies damage the cornea. The beneficial results of the elimination programme are both immediate and long-term. It has saved 100,000 people at immediate risk of contracting the disease and prevented the potential infection of nearly 12 million children. In addition, 1.25 million people have lost their onchocercal infection through the programme. Removing the threat of infection has allowed people to farm the 25 million hectares of fertile land, capable of feeding 17 million people a year, that had been abandoned as a result of infestations of the black fly that carried the filaria.
Thus was eliminated a scourge that had literally darkened the lives of many millions of Africans since time immemorial and had had catastrophic effects on the economic well-being of entire villages, indeed, entire populations. This triumph was the outcome of advances in a multitude of medical sciences, of dozens of cognitive nunataks, all of which are taken so much for granted that they are almost invisible.
First, recognition of the disease as a specific entity, distinct from other conditions causing blindness, such as trachoma and vitamin deficiency, was a tour de force of descriptive clinical science. Identification of the correct insect carrier (the black fly) in a habitat teeming with thousands of candidates was equally remarkable. It built on the notion of insects as ‘disease vectors’. This presupposed a grasp of insect anatomy and physiology, speciation and parasitology.
The incrimination of the filarial worm as the cause of the disease also required knowledge and wisdom to observe, identify, interpret and inculpate micro-organisms in the blood of sufferers. Further, stringent criteria for separating innocent bystanders and secondary opportunist infections from primary causative organisms had to be applied with unremitting vigour. The identification of the dead larvae as the cause of the blindness – as the result of an immune, inflammatory reaction – required a further leap of understanding.
Equally, the development of drugs to treat the organisms without causing harm to the sufferer relied upon numerous bodies of knowledge: not only of the physiology of filarial worms and the kinetics, dynamics, hepatotoxicity and nephrotoxicity of potential pharmaceuticals, but also of the appropriate way to monitor the clinical, physiological and biochemical impact of the chosen drugs. The invention and assessment of ivermectin – the cure developed in the late 1980s and the first appropriate drug that could be dispensed widely without fear of serious side effects – was an extraordinary achievement, even if one overlooks the organic chemistry, chemical engineering and analytical techniques necessary to ensure mass production to a high standard of purity. On top of this, there was the practical knowledge necessary to overcome the obstacles that stand in the way of a rational public health initiative in a terrain where adverse climatic conditions (a hell of moist heat), disease, corruption, the threat of war, endemic poverty and malnutrition and, above all, the application of traditional magic and superstition to thinking about disease and its causes, predominated.
Scientific medicine delivers – but only so much
The terrible story of AIDS occasioned another striking triumph of recent medicine. The recognition of AIDS as a specific disease, the identification of the Human Immunodeficiency Virus (HIV) as its cause and of the synergistic effects of other sexually transmitted diseases, malnutrition and other factors promoting the transition from HIV carriage to the full-blown disease; the development of rational policies to reduce spread through the population, and of drugs to prevent transmission of HIV and to treat AIDS – all this took a mere fifteen years. One only has to compare the 600 years it took for the cause of the Black Death to be understood to appreciate how scientific medicine has transformed our ability to respond to new diseases.
In many places, however, scientific triumphs have not translated into the alleviation of human suffering. The unremitting catastrophe of AIDS in sub-Saharan Africa is due not to the deficiencies of medical science, but to the failure to apply it because of irrationality and a misplaced national amour-propre which has delayed acknowledgement of the problem until it is too late. The effects of this delay have been compounded by the endemic sexual abuse of women, war, famine, poverty, and corrupt governments headed by murderous kleptocrats.
By contrast, quality-controlled medicine is effective because it is based upon a rational, though counter-intuitive, understanding of the pathophysiology of disease. This is remarkable not only on account of the science but for the way in which the community that has generated the science has been able collectively to overcome a multitude of weaknesses, frailties and temptations. Wishful thinking, superstition, corruption and deception are ubiquitous features of human life, and while biomedical scientists, clinical scientists and clinicians are still prey to them, they are nevertheless able to resist them collectively to a degree that is unique in human affairs.
The rewards for this have been immeasurable. Unlike most human beings in history (indeed, unlike most organisms), I did not die before reaching adult life. My survival to what is now called ‘late middle age’ makes me part of an even luckier minority. I have not brought to my middle years the heritage of chronic childhood disease. My own children are not the survivors of a permanent natural massacre of the innocents. I am not riddled with numerous undiagnosed and incurable infestations. Even if the illnesses that I eventually develop prove to be incurable, they will be significantly alleviated or at least palliated.
In this, and many other respects, I am privileged beyond the wildest dreams of my ancestors – thanks to those of my predecessors who were able to see past their dreams and look dispassionately at the object that, more than any other, is infused with subjectivity: the human body. Scientific medicine has made the human body more human by acknowledging its intrinsic animality, its lack of divinity. It is this that has made it possible to displace the inhumanity of the body to the edges of human life.
3
The Coming of Age of the Youngest Science
It is hardly surprising that medicine is ‘the Youngest Science’.1 The furthering of our understanding of the human body had to await the maturation of natural sciences such as chemistry and physics. Scientific medicine also requires us to adopt an objective approach to the material basis of our existence and to treat dispassionately the horrors which engulf the part of the world with which we are most closely identified. That the body – the object at the root of objective knowledge – should have been the last to be illuminated by objective knowledge is precisely what might be expected. We humans have to make sense of disease from within the bituminous darkness of the relationship we have with our own bodies.
Between the sick human body and the gaze of the scientist, there have intervened many distorting lenses – authorities, prejudices, and preconceptions, both theological and secular. Scientific medicine is a triumph of human knowledge over superstition and irrationality, over intuitively attractive ideas, and over the temptation to exploit the fears of the sick and abuse the trust of the vulnerable.
The great sociologist, philosopher and anthropologist Ernest Gellner spoke of reason as ‘a Cosmic Exile’.2 Rationality has the weight of culture, custom, tradition and traditional authority lined up against it:
Reason is a foundling, not an heir of the old line, and its identity or justification, such as it is, is forged without the benefit of ancient lineage. A bastard of nature cannot be vindicated by ancestry but only, at best, by achievement. (p. 160)
And when reason is applied to something as terrifying, intimate and engulfing as illness, even ‘achievement’ is sometimes insufficient vindication.
There are inescapable tensions built into the very nature of science-based medicine. Unresolved and probably unresolvable, they bubble just under the surface, feeding the discontents that will be the central theme of this book. But before I discuss them, let us look at the steps that had to be taken by medicine before it could become a fully ‘grown up’ science.
The permanent self-criticism of scientific medicine
One of the arguments mobilized by alternative medicine practitioners (of whom more later) against orthodox medicine is that the latter is constantly changing while alternative medicine has remained largely unaltered for hundreds, even thousands, of years. This decade’s favourite orthodox remedy, they point out, is next decade’s also-ran.
This is true; there is a fringe of development at which medicine is indeed changing, but this is not the result merely of the fluctuation of fashion but a consequence of the discovery of new treatments that are more effective and have fewer side-effects than remedies previously on offer. Change reflects strengths rather than weaknesses in conventional medicine: it is not a question of replacing one useless drug with another but of replacing a useful drug with one that is more useful. This places the claim that alternative medicine remedies belong to ‘an ancient tradition’ in an interesting light. The lack of development in 5,000 years can be a good thing only if 5,000 years ago alternative practitioners already knew of entirely satisfactory treatments for conditions that orthodox medicine has only recently started to be able to cure or improve, or cannot yet cure or improve. (If they did, they have kept remarkably quiet about them.) The argument that venerability makes evaluation unnecessary3 is based on a confusion between 5,000 years of use and 5,000 years of accumulated evidence of usefulness.
Unlike traditional medicine, which is deeply self-satisfied with its knowledge and what it believes to be its effectiveness, scientific medicine is driven by an active uncertainty that is sceptical of received ideas and of authority and is continually seeking to improve on the status quo. The contrast between the stagnation of alternative medicine and scientific medicine’s dissatisfaction and constant transformation – resulting in ever more effective and, for the most part, less unpleasant treatments – is often misunderstood. A discipline which is marked by a carefully nurtured scepticism towards itself is sometimes seen as arrogant or in disarray.
The greatest of all the obstacles medicine has had to face in its journey towards a fully developed clinical science has been the overthrowing of its own authority. In an act of collective humility, it has cultivated a routine distrust of its own practices. This humility has been almost as important in the development of effective therapies as biomedical science. Let us examine its evolution.
The first step towards genuine evidence-based medical practice was the formal clinical trial. It is not enough for me to say of a treatment that ‘I know it works’ or (even) that ‘My patients know it works’. It is not even good enough that those in whom I have faith – Aristotle or the Queen’s physician – believe or assert that it works. The duration of the faith and belief makes no difference, either: the fact that a treatment has been prescribed with enthusiasm for many centuries does not prove that it is effective. Tried doesn’t mean tested. The humility of shaping clinical practice in accordance with the unmanipulated outcomes of therapeutic trials is also connected with another layer of scepticism: that what looks good in theory (irrespective of how good the theory is) may not benefit patients in the real world.
Perhaps the most remarkable facet of this humility is the willingness of doctors to enter patients (with the latter’s consent) into trials run by other clinicians. Submission to the authority of such trials means subordinating one’s own personal authority to that of other professionals, many of whom (such as statisticians and biomedical scientists) do not even belong to the medical profession. As Marc Daniels pointed out:
[for clinicians] to be willing to merge their individuality sufficiently to take part in group investigations, to accept only patients approved by an independent team, and to submit results for analysis by an outside investigator involves considerable sacrifice.4
The design of the modern clinical trial assumes that, unless numerous safeguards are put in place, results will be distorted by wishful thinking. Clinical trials are ‘double-blind’: neither the patient nor the doctor knows whether the patient is receiving the new treatment or the old (where two treatments are being compared) or the new treatment or a placebo (where the new treatment is being compared with a dummy) until completion. In order to avoid bias that might come from entering patients with a better prognosis in the group that is receiving the new treatment, patients have to be allocated randomly, for example by means of computer-generated random numbers. This ensures that like is being compared with like.
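To make the allocation step concrete, here is a minimal sketch in Python (mine, not the author's; the function name, arm labels and patient identifiers are illustrative assumptions) of randomization by computer-generated random numbers, with the arms hidden behind coded labels so that neither patient nor doctor knows who is receiving what until the code is broken at completion:

import random

def randomize(patient_ids, seed=None):
    """Assign each patient to a coded arm ('A' or 'B') at random."""
    rng = random.Random(seed)  # computer-generated random numbers
    return {pid: rng.choice(['A', 'B']) for pid in patient_ids}

# The key (e.g. A = new treatment, B = placebo) is held separately and
# revealed only when the trial is complete, preserving the double blind.
allocation = randomize([f'patient-{i:03d}' for i in range(1, 11)], seed=42)
print(allocation)

Because each assignment is made by chance rather than by the doctor's judgement, patients with a better prognosis are no more likely to end up in one arm than the other – which is what 'comparing like with like' amounts to in practice.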
The danger of bias was strikingly illustrated when I was in Nigeria. There I observed an involuntary trial comparing snake-bite treatment by the local hospital with that of the community’s traditional healer. Most snake bites are unpleasant but not life-threatening, and the majority of patients either seek no treatment or go to the traditional healer. Where snake bites settle, no more is heard of them: they are therapeutic successes. A small number of patients develop serious, and sometimes fatal, adverse consequences; they are the ones that go to hospital, usually after an unsuccessful visit to the traditional healer. The traditional healer, unsurprisingly, had much better results in treating snake bites than the hospital did. Many of our patients died, whereas the healer’s outcomes were almost invariably excellent. The greater apparent success of the healer was manifestly due to case selection: cases that went well remained on his list; those that went badly ended up in hospital.5
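The arithmetic of that case selection is easy to reproduce. Below is a hypothetical toy simulation in Python (my illustration, not the author's data; the 5 per cent severity rate and the survival figures are invented assumptions) showing that even if neither healer nor hospital changes the outcome at all, the healer's apparent success rate dwarfs the hospital's simply because only the severe cases reach hospital:

import random

random.seed(0)
healer, hospital = [], []
for _ in range(10_000):
    severe = random.random() < 0.05  # assume 5% of bites are life-threatening
    survives = random.random() < (0.50 if severe else 0.99)  # outcome depends on severity alone
    (hospital if severe else healer).append(survives)  # selection: severe cases go to hospital

print(f'healer apparent survival:   {sum(healer)/len(healer):.1%}')      # roughly 99%
print(f'hospital apparent survival: {sum(hospital)/len(hospital):.1%}')  # roughly 50%

Randomized allocation removes exactly this distortion, because severity is then spread evenly across the groups being compared.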
