'An excellent book' – Ted Honderich, Emeritus Professor of Philosophy of Mind and Logic at University College London (UCL)
Introducing Consciousness provides a comprehensive guide to the current state of consciousness studies. It starts with the history of the philosophical relation between mind and matter, and proceeds to scientific attempts to explain consciousness in terms of neural mechanisms, cerebral computation and quantum mechanics. Along the way, readers will be introduced to zombies and Chinese Rooms, ghosts in machines and Erwin Schrödinger's cat.
Published by Icon Books Ltd, Omnibus Business Centre, 39–41 North Road, London N7 9DP Email: [email protected]
ISBN: 978-1-84831-171-8
Text copyright © 2012 Icon Books Ltd
Illustrations copyright © 2012 Icon Books Ltd
The author and illustrator have asserted their moral rights
Originating editor: Richard Appignanesi
No part of this book may be reproduced in any form, or by any means, without prior permission in writing from the publisher.
Cover
Title Page
Copyright
What is Consciousness?
The Indefinability of Consciousness
What is it Like to be a Bat?
Experience and Scientific Description
How Does Consciousness Fit In?
The First Option: Dualist
The Second Option: Materialist
The Third Option: Mysterian
Hard and Easy Problems
The Explanatory Gap
Creature Consciousness
The Hard Problem is New
René Descartes’ Dualism
Matter in Motion
Mind Separate From Matter
The Pineal Gland
Berkeley’s World of Ideas
The Idealist Tradition
Idealism in Britain
The Scientific Reaction to Idealism
Behaviourist Psychology
The Skinner Box
The Ghost in the Machine
The Beetle in the Box
Psychological Functionalists
Structure Versus Physiology
The Mind as the Brain’s Software
Variable Realization
A Physical Basis for Mind
A Modern Dualist Revival
A Dualism of Properties
Descartes’ Argument from Possibility
A Zombie Duplicate
Leibniz’s Argument from Knowledge
The Modern Argument from Knowledge
A Dualist Science of Consciousness
Arguments Against Dualism
Causal Completeness
The Demise of Mental Forces
Newtonian Physics
Back to Descartes
Materialist Physiology
No Separate Mental Causes
What About Quantum Indeterminism?
Causal Impotence
Pre-established Harmony
Modern Epiphenomenalism
The Oddity of Epiphenomenalism
The Materialist Alternative
Materialism is not Elimination
The Example from Temperature
Functionalist Materialism
Making a Computer Conscious?
The Turing Test
The Chinese Room
Language and Consciousness
Functionalist Epiphobia
Mental States are “Wetware”
Human Chauvinism
Facing up to the Dualist Arguments
Zombies are Impossible
Mysteries of Consciousness
The Mysterian Position
A Mysterian Speculation
Special Concepts of Consciousness
Everybody Wants a Theory
Neural Oscillations
Neural Darwinism
Re-entrant Loops
Evolution and Consciousness
The Purpose of Consciousness
Quantum Collapses
How Quantum Physics Differs
Schrödinger’s Cat
Quantum Consciousness
Another Link to Quantum Mechanics
Quantum Collapses and Gödel’s Theorem
The Global Workspace Theory
CAS Information-Processing
Equal Rights for Extra-Terrestrials
Intentionality and Consciousness
Consciousness and Representation
Explaining Intentionality
Can We Crack Intentionality?
Non-Representational Consciousness
In Defence of Representation
Non-Conscious Representation
Panpsychist Representation
Behaviour without Consciousness
What versus Where
The Problem of Blindsight
HOT Theories
Criticism of HOT Theories
Self-Consciousness and Theory of Mind
The False-Belief Test
Conscious or Not?
Cultural Training
Sentience and Self-Consciousness
Future Scientific Prospects
PET and MRI
A Signature of Consciousness
The Fly and the Fly-Bottle
The Dualist Option
The Materialist Option
A Question of Moral Concern
Is There a Final Answer?
Further Reading
Bibliography
Index
The best way to begin is with examples rather than definitions.
Imagine the difference between having a tooth drilled without a local anaesthetic…
The difference is that the anaesthetic removes the conscious pain… Assuming the anaesthetic works!
Again, think of the difference between having your eyes open and having them shut…
When you shut your eyes, what disappears is your conscious visual experience.
Sometimes consciousness is explained as the difference between being awake and being asleep. But this is not quite right.
Dreams are conscious too.
Dreams are sequences of conscious experiences, even if these experiences are normally less coherent than waking experiences.
Indeed, dream experiences, especially in nightmares or fantasies, can be consciously very intense, despite their lack of coherence – or sometimes because of this lack.
Consciousness is what we lose when we fall into a dreamless sleep or undergo a total anaesthetic.
The reason for starting with examples rather than definitions is that no objective, scientific definition seems able to capture the essence of consciousness.
For example, suppose we try to define consciousness in terms of some characteristic psychological role that all conscious states play – in influencing decisions, perhaps, or in conveying information about our surroundings.
Or we might try to pick out conscious states directly in physical terms, as involving the presence of certain kinds of chemicals in the brain, say.
Any such attempted objective definition seems to leave out the essential ingredient: it fails to explain why conscious states feel a certain way.
Couldn’t we in principle build a robot which satisfied any such scientific definition, but which had no real feelings?
Imagine a computer-brained robot whose internal states register “information” about the world and influence the robot’s “decisions”. Such design specifications alone don’t seem to guarantee that the robot will have any real feelings.
The lights may be on, but is anyone at home?
The same point applies even if we specify precise chemical and physical ingredients for making the robot.
Why should an android become conscious, just because it is made of one kind of material rather than another?
There is something ineffable about the felt nature of consciousness. We can point to this subjective element with the help of examples. But it seems to escape any attempt at objective definition.
Louis Armstrong (some say it was Fats Waller) was once asked to define jazz.
“Man, if you gotta ask, you’re never gonna know.” We can say the same about attempts to define consciousness.
When we talk about conscious mental states, like pains, or visual experiences, or dreams, we often run together subjective and objective conceptions of these states. We don’t stop to specify whether we mean to be talking about the subjective feelings – what it is like to have the experience – or the objective features of psychological role and physical make-up.
It usually doesn’t matter, given that the two sides always go together in humans – if not in robots.
Even so, these two sides can always be distinguished. This is the point of the American philosopher Thomas Nagel’s famous question: “What is it like to be a bat?”
Most bats find their way about by echo-location. They emit bursts of high-pitched sound and use the echoes to figure out the location of physical objects. So the intent of Nagel’s question is: “What is it like for bats to sense objects by echo-location?”
You might suppose it must be like living in the dark, spending a lot of time hanging upside down, and hearing a barrage of high-pitched noises. But this is unlikely. That is perhaps what it would be like for humans to live as bats do.
But for bats, to whom echo-location comes naturally, it is presumably not sounds they are aware of, but physical objects – just as vision makes humans aware of physical objects, not light waves.
But still, what is it like for bats to sense physical objects? Do they sense them as being bright or dark or coloured? Or do they rather sense them as having some kind of sonic texture? Do they even sense shapes as we do?
We can’t answer these questions. We don’t have a clue about what it is like to be a bat.
We have no conception of the subjective side of bat experience.
In raising his question, Nagel does not want to suggest that bats lack consciousness. He takes bats to be normal mammals, and as such just as likely to be conscious as cats and dogs. Rather, he wants to force us to distinguish between the two conceptions of conscious experiences, objective and subjective.
When we think about humans, we don’t normally bother about Nagel’s distinction. We usually think of human consciousness simultaneously in subjective and objective terms – both in terms of how it feels and in terms of objectively identifiable goings-on in the brain.
The bats, however, force us to notice the distinction, precisely because we don’t have any subjective grasp of bat sensations, despite having plenty of objective information about them.
Science tells us a great deal about the bat’s brain. But not what it is like to be a bat.
Nagel thus identifies something about experience that escapes scientific description. With bats, we lack this subjective something even after learning everything science can tell us about them.
The moral then applies to conscious experiences in general.
Even though we normally run subjective and objective together, we should never forget that these can be distinguished. And no amount of scientific description will convey a subjective grasp of conscious experiences.
The central problem of consciousness relates to mental states with a subjective aspect. In Nagel’s words, these are states that are “like something”. They are also sometimes called phenomenally conscious to emphasize their distinctive “what-it’s-likeness”.
The big challenge is to explain how subjective or phenomenal consciousness fits into the objective world. And in particular how it relates to scientific goings-on in the brain.
We face a number of choices at this point. Let’s look at the three options that will emerge: dualist, materialist and mysterian.
Are the subjective features of conscious experience genuinely distinct from brain activities? It is natural to assume so. But this is a dualist line, and it raises further questions.
If the world contains such subjective elements, how do they interact with the normal physical entities that seem to fill up space and time? And what as-yet-unknown principles govern the emergence of these subjective elements?
An alternative is to deny that subjective mind and objective brain are as distinct as they appear to be. This materialist option is suspicious of the divergence between subjective and objective conceptions of the mind-brain. It insists on a unity behind the appearances.
The problem for materialism is to explain how mind and brain can possibly be identical, when they appear so different.
Yet others despair of the problem and settle for the “mysterian” view that consciousness is a complete mystery.
The understanding of phenomenal consciousness is beyond human beings at present… And perhaps forever.
We will examine these options more closely later. For the moment let us simply agree, in the terminology of the Australian philosopher David Chalmers, that explaining phenomenal consciousness is the “hard problem” of consciousness.
Chalmers distinguishes between the “hard problem” and “easy problems” of consciousness. According to Chalmers, the easy problems concern the objective study of the brain.
At this level, we can ask about the causal roles played by different kinds of psychological states. And about how these roles are implemented in the brains of different creatures.
Of course, these problems are only “easy” in a relative sense. They can pose real challenges to psychologists and physiologists. But they are “easy” in that they seem soluble by straightforward scientific methods, without raising any insurmountable philosophical obstacles.
So, for example, we might analyse pain as a state that is typically caused by bodily damage, and which typically causes a desire to avoid further damage.
Then we can investigate how pain is realized in humans by a system of A-fibre and C-fibre transmissions, and by different physiological systems in other animals.
Similar objective studies can be carried out for other psychological processes like vision, hearing, memory, and so on.
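Purely as an illustration (this sketch is not from the book, and all its names are invented), the causal-role analysis of pain above can be written out as a minimal Python program: an internal state defined only by its typical cause (a damage signal) and its typical effect (avoidance behaviour).

```python
# A toy causal-role ("functional") analysis of pain, for illustration only.
# The state is defined purely by what typically causes it and what it
# typically causes -- nothing here is about how the state feels.

class Creature:
    def __init__(self):
        self.in_pain = False          # the functional "pain" state

    def sense(self, damage_signal: bool):
        # Typical cause: bodily damage switches the state on.
        self.in_pain = damage_signal

    def act(self) -> str:
        # Typical effect: the state produces damage-avoidance behaviour.
        return "withdraw and protect" if self.in_pain else "carry on"

# The same causal role could be realized by A-fibre and C-fibre
# transmissions in humans, by different physiology in other animals,
# or by this software -- the "variable realization" point above.
robot = Creature()
robot.sense(damage_signal=True)
print(robot.act())   # -> "withdraw and protect"
```

Notice that the sketch says nothing about whether the state feels like anything – which is exactly the point Chalmers makes next.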
But none of this “easy” stuff, Chalmers points out, tells us anything at all about the feelings involved. Stories about causal roles and physical realizations will apply just as much to unfeeling robots as to throbbing, excited, itching human beings. The “hard problem” is to explain where the feelings come from – to explain phenomenal consciousness.
Can we explain why it is “like something” to be us?
Another philosopher, the American Joseph Levine, calls this problem “the explanatory gap”. Objective science can only take us so far. In psychology, as elsewhere, it can identify how different states function causally, and can figure out the mechanisms involved. But in psychology this doesn’t seem to be enough. There is something else to explain.
Even after we have been told all about damage-avoiding states and A-fibres and C-fibres, we still want to say…
“Yes, but why does all that feel like it does? Why does it hurt?”
There seems to be a gap here between what science can tell us and what we most want to explain.
Sometimes we speak of creatures being conscious, rather than of their having phenomenally conscious states. For example, we say that humans are conscious and bacteria are not. And we might wonder whether fish are conscious, say, or snails.
But talk of “creature consciousness” isn’t significantly different from our earlier talk of phenomenally conscious states. “Creature consciousness” can easily be defined in terms of “state consciousness”. A creature is conscious if it sometimes has conscious states.
Whether fish are conscious simply comes down to the question of whether they sometimes have conscious pains, conscious visual experiences, and so on.