Description

The historical context of the philosophical debate over whether Artificial Intelligence is anything more than a hypothetical metaphor awaiting more exact characterisation dates back to the 1940s and 50s. The central figure who initiated this discussion was, of course, Alan Turing, the mathematical genius who worked with English military intelligence on the Enigma project. The invention of the ACE (the Automatic Computing Engine) and its role in solving a problem that had defeated the best minds in England was the initiating event of the claim that machines of this kind were in some sense "intelligent". Intelligence, of course, is a psychological term with a contested psychological definition, as was evidenced by the discussions that followed Piaget's theories and the attempts to construct tools to measure this elusive capacity. This work argues that it is to Philosophy we must turn if we are to clarify a problem that was challenging theoreticians of the scientific and psychological community.

The e-book can be read in Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Page count: 198

Publication year: 2024




Michael R D James
Philosophy and AI

Artificial Intelligence and its Discontents

Table of Contents:

Introduction: A Machine for All Seasons

Chapter One: Zombies, the New Men and their Machines

Chapter Two: The World is a Computer

Chapter Three: The Power of Being Human

Chapter Four: A Machine for All Seasons

Chapter Five: The Human is a Machine

Chapter Six: Psychology, Neuroscience and Consciousness

Introduction: A Machine for All Seasons

ChatGPT was asked to write a 1,000-word essay on Philosophy and Artificial Intelligence, and the answers provided help us to understand at least how the programmers think and reason about the phenomenon they have created. The conclusion ChatGPT arrived at was:

“The intersection of philosophy and artificial intelligence encompasses a vast array of profound questions that challenge our understanding of the mind, consciousness, ethics, knowledge, and human existence. As AI continues to advance, it becomes increasingly important to engage in philosophical reflections that guide the responsible development and deployment of this powerful technology. By exploring these philosophical dimensions, we can forge a more profound appreciation of human intelligence and its relationship with the rapidly evolving world of AI. Together, philosophy and artificial intelligence offer a unique perspective that can illuminate the path to a future where both human minds and machines coexist in harmony and understanding”. (20th July 2023)

The key words in the opening sentence are “challenge our understanding of the mind, consciousness, ethics, knowledge and human existence.” It is clear that Chat is taking an explorative, cautious approach to this question, and other questions we asked later indicate that Chat does not quite engage with the arguments Philosophers have provided against using some of the language used above, e.g. understanding, intelligence, etc. It almost seems as if it is the question of the peaceful coexistence of man and computer that primarily occupied the programmers, and that they are at pains not to take a definite, defensible position on many of the issues that are raised about AI.

Chat was also asked to write a 1,000-word essay on the topic of “Know Thyself”, and two features of its answer stood out. Firstly, no connection was made between this topic and the importance of knowing what it is we do not know. Socrates is mentioned, but not the fact that his entire philosophical adventure may have been sponsored by the statement of the Oracle that he was the wisest man in Athens because he knew what it was that he did not know. Secondly, it is remarkable that Chat speaks about “our” personal “introspective” journey as if it regarded itself as part of the community of minds that form our human communities. It is clear here that the programmers are not programming ChatGPT in accordance with a clear conception of the “identity” of the machine (what it is in itself), but are rather importing their own identities into the equation. This may cause confusion in the future, and gives rise to the Philosophical demand that the programmers form a clear picture of the machine’s powers and potentialities and programme the machine accordingly.

Joseph Weizenbaum, in his work “Computer Power and Human Reason”, tells us about his experience of what he calls the “artificial intelligentsia” in unflattering terms, calling them compulsive mad scientists. If these characters are our programmers, we can certainly wonder whether they know what they don’t know. We shall offer a review of Weizenbaum’s work, subtitled “From Judgment to Calculation”, in a later chapter.

Introspection was a topic covered in volume one of my work “The World Explored, the World Suffered…”. In that volume there is a chapter on Plotinus, an ancient thinker who belonged to the Platonic school of Philosophy. Plotinus subscribed to a theory of the soul (psuché) that would reject confusing arte-facts with “forms of life”. When he discusses the senses and sensation there is no confusion of, for example, biologically related visual images with the automated digital visual images (ADVIs) that are so commonly encountered in the world of artificial intelligence. There is, that is, a clear recognition of the distinction between techné and epistemé. This is part of the knowledge the Oracle and the everyday Greek took for granted, seeing in the former the need for a calculative form of reasoning that does not follow the principles of theoretical reasoning involved in epistemological knowledge-claims.

Plotinus claims that we humans use sensation to discriminate between experiences, and this is certainly not the case with computers, which cannot in any sense “feel” anything, since they do not possess the appropriate biological nervous system. The soul, for Plotinus, belongs to a realm of Thought and Being, and is likened unto a musician playing a physical harp that belongs to another realm of Being relating to external objects. He points, in the spirit of Aristotle, to the melody emanating from a harp as the “principle” (arché) of the activity. The type of knowledge operative in this situation is obviously non-observational and is, therefore, more practical, related to the various practical and aesthetic concerns that we human beings possess.

Kant’s third Critique discusses aesthetic judgement and teleological judgement, as well as themes relating to psuché, in a way that reminds us of the hylomorphic approach to such themes. The most elementary power of psuché is the power of sensation, which, for Kant, carries with it more than the power of discriminating one thing from another in experience. “Knowledge” in the form of an a priori intuitive awareness of space and time flows from the human body, composed of a complex set of organs orchestrating a configuration of limbs which, according to O’Shaughnessy, generate a body-image that is “known” non-observationally, and that “inhabits” space and time rather than merely occurring in a space-time continuum in the way a grain of sand in a desert or a machine does.

Given the fact that a computer, or Turing machine, has a fundamental relation to mathematics that relies on a sequence of functions being arrayed in time, either simultaneously or linearly, one after another, it is not particularly surprising to discover that the “alphabet” composing the so-called “information-strings” relating to such machines is composed of 0s and 1s. This is the “language” of the machine, although one must hasten to add that the use of the term “language” to describe what is occurring here is attenuated, to say the very least. The 0s and 1s may not refer to a space in the machine, but rather to whether a particular process is operating or not. The principle operating here is an energy-regulation principle that is not entirely dissimilar to the energy-regulating principle operating in the brain, with the caveat that the machine is constituted of inorganic matter moved by electrical currents, whilst the brain, on the other hand, is an organic system moved by both chemistry and electrical activity. This material difference alone might rule out the possibility of any form of self-awareness occurring in the machine, and this self-awareness may in turn be the crucial element necessary for agency, i.e. for an act of will to occur.
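To make concrete how little this machine “language” amounts to, the following is a minimal, illustrative sketch of a Turing machine over the alphabet of 0 and 1, written in Python. The transition table and the names used (a hypothetical `TRANSITIONS` table and `run` function, not drawn from Turing’s own designs) are chosen purely for brevity: the machine simply inverts a binary string and halts. The point is only that everything the machine “does” reduces to reading a symbol, writing a symbol, moving, and changing state.

```python
# A minimal, illustrative Turing machine over the alphabet {0, 1}.
# The transition table below is hypothetical: it flips every bit it reads
# and halts at the first blank cell. It shows that the machine's "language"
# is nothing more than symbol rewriting and state change.

BLANK = " "

# (current state, symbol read) -> (symbol written, head movement, next state)
TRANSITIONS = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", BLANK): (BLANK, 0, "halt"),
}

def run(tape_string: str) -> str:
    tape = dict(enumerate(tape_string))   # sparse tape: position -> symbol
    head, state = 0, "scan"
    while state != "halt":
        symbol = tape.get(head, BLANK)
        written, move, state = TRANSITIONS[(state, symbol)]
        tape[head] = written
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip()

if __name__ == "__main__":
    print(run("010011"))   # prints "101100"
```

Nothing in this sketch feels, intends, or understands anything; the 0s and 1s simply mark which rewriting step is currently in force, which is the attenuated sense of “language” at issue in the paragraph above.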

This difference may also be what allows the very human act-of-knowing to occur, an act based on sensation and the feeling of the sensation. The difference I am drawing attention to here is similar to the difference that exists between a perceptual image of a castle and a digitally generated image of a castle, whether we are talking about images in motion, such as those generated by film or television cameras, or alternatively “still-life” images that may be painted or drawn. The latter kind of image is, in the true tradition of Plato, a simulated image of reality, like the shadows projected upon the wall of the prisoners’ cave: it is arte-factual. Such images cannot form the basis for generating either an act of knowledge (episteme, justified true belief) or an action directed toward the good in the external world (aretê, virtuous act).

Stanley Cavell, in an interesting book on the ontology of film entitled “The World Viewed”, made the following claim:

“…an immediate fact about the medium of the photograph…is that it is not painting…A photograph does not present us with “likenesses” of things: it presents us, we want to say, with the things themselves” (Harvard University Press, Cambridge, 1971, p. 17)

But Cavell immediately backtracks from this and claims that because the photo of the earthquake is not the earthquake itself we may feel uncomfortable with the above claim, as we might feel uncomfortable with showing a picture of a famous person and saying “that is not X”. He compounds the mystery surrounding the ontological structure of such images by claiming:

“So far as photography satisfied a wish, it satisfied a wish not confined to painters, but the human wish, intensifying in the West since the Reformation, to escape subjectivity and metaphysical isolation---a wish for the power to reach this world…” (p. 21)

Cavell asks how photography managed to escape subjectivism and answers that it succeeds in doing this through automation, a process that removes the human being completely from the artistic equation. This is an interesting discussion in the light of the questions we have been raising about artificial intelligence. Is not the human being here, too, removed from the equation? There are, we know, programmers behind what is happening on our computer screens, as there are directors responsible for the films we view, but the question we need to ask here is the question Weizenbaum raises: “Have the programmers become like their machines, automatons and robotic presences?” Machines, as we have noted, are not organic and therefore not in need of food, but Weizenbaum speaks of these programmers as beings in need of having their food brought to them.

Cavell, in the introduction to his work, invokes the spectre of Plato and asks whether the relation of the image to what it is an image of is not a relation of “participation”. The images in motion we encounter, then, somehow announce the presence of the thing itself:

“…a fundamental fact of film’s photographic basis: that objects participate in the photographic presence of themselves on film: they are essential in the making of their appearances. Objects projected on a screen are inherently reflexive, they occur as self-referential, reflecting upon their physical origins.”

The question I am raising with this discussion is whether we are not dealing with shadows on a cave wall but rather with the many objects in the world participating in the one idea of them, an idea that gives them their reality. Insofar as the images we encounter on our television and cinema screens are moving, and have a basis in photography, they must, in a sense, escape the argument that attempts to characterise them as subjective imaginings that have little contact with reality.

One of the messages of “The World Explored, the World Suffered” (Volume One) is the theoretical destructiveness of the subjective-objective distinction in metaphysical discussions (discussions about first principles). Sometimes this subjective-objective distinction is used to neutralise first-principle arguments, and sometimes we refuse its application in contexts where the issue is one of defending different forms of (logical?) solipsism. Perhaps the solution to this problem is to abandon the distinction altogether in favour of Aristotelian-Kantian frameworks which situate the human being in a setting well expressed by Heidegger’s term, “Being-in-the-world”.

The important fact to remember in the context of this discussion is that phenomena in the world get their explanations from three different forms of science (theoretical, practical and productive). Techné has its roots in the productive sciences, which situate themselves not in the faculties of the understanding but in relation to the associated mental faculty of Judgement. Technological instrumental equipment such as AI robots and computers are not worthy ends-in-themselves for humanity, and are therefore not something we can speak about with a “universal voice”. There is, at best, an appeal to instrumental practical reasoning that sets its sights on the means to ends rather than on the ends themselves.

The form of reasoning we encounter in such contexts is that of an instrumental hypothetical imperative that selects means to ends. Insofar as humans are concerned, it is a measure of human intelligence, according to William James in his “Principles of Psychology”, that if we find our path to an end blocked, we can then choose an alternative means to that end. This kind of freedom of choice, however, is not available to computers and their programs in situ. So there is no human element directly involved in this process, and this is why we have raised the issue of automation in relation to the images in motion in film.

AI is not entitled to the term “intelligent” on James’ reasoning, because however real the cause-effect relations between the lines of the program and the operation of the computer, the effects are automated effects and not products of free human choices. Moreover, James claims that:

“The pursuance of future ends, and the choice of means for their attainment are thus the mark and criterion of the presence of mentality in a phenomenon” (Principles of Psychology (Vol. 1), James, W., Dover Publications, New York, 1890, p. 8)

This is in line with both Aristotelian and Kantian thinking, and James goes on to contrast the criteria of mentality with automatic or machine-like deterministic activity, where there is neither the possibility of choice nor any relation to desire. James argues that there are reflex responses in living beings that appear to be in accordance with pure mechanical causation, but this admission must be acknowledged in the light of the above: the reflex system can be both monitored by the mentality of a human organism and qualified by an immediate mental response which might, for example, explain that the reflex was not intended.

James was writing during the “times of troubles” for Psychology, that is, during the divorce proceedings between Philosophy and Scientific Psychology that had begun under the banner of the definition of Psychology as “The Science of Consciousness”. The definition of Psychology William James coined was “The Science of Mental Life, its phenomena and conditions”, and this was an attempt to summarise both the Philosophy of mind of his time and the scientific research from all over the world (James was competent in both German and French). He was writing in a time of transition in Philosophy that he helped to initiate with his eventual creation of the school of “Pragmatism”, a transition that Brian O’Shaughnessy would echo and modify in his two-volume work “The Will: A Dual Aspect Theory” (Cambridge University Press, Cambridge, 1980):

“it is because we think of man’s mind as vital and animal, and tied in its very essence to a sustaining world, that we lay great emphasis at the present moment on this familiar phenomenon. All else in the mind, including consciousness itself, is from such a point of view of merely secondary significance.” (p. XIV)

This excursion into the domain of Philosophical Psychology has consequences for any inquiry into the nature of Artificial Intelligence, which appears to have by-passed the Socratic stage of the investigation that always began with the demand “Ask of everything what it is in its nature”. The inquiry also seems to have overlooked the Aristotelian definition of human psuché, namely, “rational animal capable of discourse”. By no stretch of the imagination is it possible to categorise mechanical devices as “alive” or “animal”. Furthermore, since the elements of the definition are integrated with one another, this also suggests that mechanical devices may not be capable of authentic discourse or rationality.

Later in this work we will draw attention to the failure of ChatGPT to understand the meaning of the statement “Promises ought to be kept”. Connected to this failure is what O’Shaughnessy termed “self-consciousness” (part of the “essential dynamic character of consciousness”). An epistemological contact with reality is part of this process, and O’Shaughnessy contrasts the normal function of self-consciousness with dreaming, which is what happens to the mind when its normal controls are relaxed (inactivity of the motor and sensory systems). Action (initiated by the motor system), however, invades the domain of epistemology, and this is evident in the way in which the practical world is stamped on all visual experience: the visual impression of the castle, on this account, is of a place to visit by climbing the steep hill.

O’Shaughnessy does not miss the Socratic and Aristotelian steps in his investigation, as is evidenced by the claim:

“…what one is determines how and indeed what one knows” (p. XLVII)

Freud is invoked in this discussion:

“One sees the landscape with a cool objective intelligent eye that endows it with colour and shape and depth and content, and at the very same time with an unconscious and deeply interested gaze that sees in it some primal entity concerning which one cares….According to Freud, the ego phenomenon of sense perception depends on and reverberates with the undercurrent of phenomena in the other great instinctual half of the mind….Epistemology is not the isolated psychic function one might at first think. Thus sight is a more total embrace than the model of the camera suggests: depending on sensation, and so body, but also on past experience, on present beliefs, on concepts, memory, indeed upon sanity and reason; and according to Freudian theory, one’s very instincts”. (p. XLVIII)

This also raises the question of whether the category of desire is relevant to the description of the activities of the AI machine. Indeed, the source of the fallacy we refer to later in this work, namely that of anthropomorphising the machine, may lie in the very structure of our perceptual contact with the world. We “see”, for example, the arms and legs of a chair, and this is reflected in language by extending the use of linguistic terms metaphorically. Anthropomorphising a chair in everyday language is, of course, a different matter from the issue of the validity of the claims made by science, and natural science in particular, which has tended toward cleansing its theories of all such tendencies, referring to all attempts at anthropomorphising as “subjective”. But there is a deeper issue here, especially when we are discussing the so-called life sciences.

Kant, in his Third Critique, partly acknowledged this deeper issue in his discussion of the role of analogy in relation to the power of Judgement:

“The concept of a thing as intrinsically a physical end is, therefore, not a constitutive conception either of understanding or of reason, but yet it may be used by reflective judgement as a regulative conception for guiding our investigation of objects of this kind by a remote analogy with our own causality according to ends generally…Organisms are, therefore, the only beings in nature that, considered in their separate existence, and apart from any relation to other things, cannot be thought possible, except as ends of nature. It is they, then, that first afford objective reality to the conception of an end that is an end of nature and not a practical end. Thus they supply natural science with the basis for a teleology, or, in other words, a mode of estimating its Objects on a special principle that it would otherwise be absolutely unjustifiable to introduce into that science---seeing that we are quite unable to perceive a priori the possibility of such a kind of causality.” (Critique of Judgement, Kant, I., trans. Meredith, J. C., Clarendon Press, Oxford, 1952, Part Two, p. 24)

Teleological judgement will, of course, also be relevant to the claims we make about arte-facts such as computers in the name of the Productive sciences (as conceived of by Aristotle), but here the principles of techné will be more relevant to our judgements than the principles of practical or theoretical reason. Kant follows up on this essentially Aristotelian position with a reflection on final ends, art and machines:

“Thus a house is certainly a cause of the money that is received as rent, but yet conversely, the representation of this possible income was the cause of the building of the house. A causal nexus of this kind is termed that of final causes. The former might, perhaps, more appropriately be called the nexus of real, and the latter the nexus of ideal causes, because with the use of terms it would be understood at once that there cannot be more than these two kinds of causality. Now the first requisite of a thing considered as a physical end is that its parts, both as to their existence and form, are only possible by their relation to the whole. For the thing is itself an end, and is, therefore, comprehended under a conception or idea that must determine a priori all that is to be contained in it. But so far as the possibility of a thing is only thought in this way, it is simply a work of art…But if a thing is a product of nature…every part is thought as owing its presence to the agency of all the remaining parts, and also as existing for the sake of the others and of the whole, that is, as an instrument, or organ…the part must be an organ producing the other parts…In a watch one part is the instrument by which the movement of the others is effected, but one wheel is not the efficient cause of the production of the other. One part is certainly present for the sake of another, but it does not owe its presence to the agency of the other…still less does one watch produce other watches…nor does it repair its own casual disorders…For a machine has merely motive power, whereas an organised being possesses inherent formative power…” (pp. 20-22)

Descartes, we know, claimed to overthrow Aristotelian thinking in this area, partly with the absurd claim that animals are merely machines, thus creating category-confusions that have persisted to the present day. Kant’s description of the watch above is the template that ought to be used for the description of AI machines or robots. These machines were all designed for an “artificial” purpose and belong to the category of what Heidegger defined as “instrumentalities” that are “ready-to-hand”. Their form of Being-there (Dasein) is not the form of human-being. This, then, ought to be sufficient justification to insist that the description of these machines and the explanation of their functions do not belong in the sphere of the theoretical or practical (moral) sciences.

This raises the issue of whether an arte-fact which is seemingly autonomously active can be said to want or desire anything. O’Shaughnessy is categorical on this issue:

“the desire-force acts entirely within the psychological domain” (p. LI)

He continues to reason that the desire-force does not apply to phenomena in the mind or to the mind itself, but only to the man, the human being, who possesses the mind in question. Furthermore, it is argued that this desire-force is responsive to intention, and therefore also to the agent’s judgement, reason and values (p. LIV). O’Shaughnessy also sketches for us the ontological divisions of the world, beginning with physical inorganic entities and continuing with living entities, which then possess psychological and mental powers: a sketch entirely consistent with Aristotelian and Kantian assumptions. Intention is located in both the psychological and mental domains, because it introduces both significance and control into action scenarios (p. LXII). Whether anything can have meaning for a machine, or be subject to the autonomous control of the machine (independent of the designers and programmers of any software), is a burning question, which will be raised later in this work in different forms.