

Mind and Body

"The Emergence of Spirit and Matter", attributed to Shivdas, painted ca. 1828 during the reign of Maharajah Man Singh, Marwar (now Jodhpur), Rajasthan, northwest India.  From Folio 2 from the Shiva Purana, now in the  Mehrangarh Museum.  The painting is influenced by the ascetic Nath sect of yogis.  The left panel represents the "formless Absolute" that preceded Creation.  In the center panel, Spirit or Consciousness (the male figure) and Matter (the female figure) are linked through their gaze and posture.  In the right panel, Spirit and Matter are separated, with their arms crossed, no longer looking at each other.  From Garden and Cosmos: The Royal Paintings of Jodhpur, exhibit at the Arthur M. Sackler Gallery, Smithsonian Institution, 2008-2009.



Consciousness has many aspects, but the fundamental feature of consciousness is sentience or subjective awareness.  As the philosopher Thomas Nagel put it in his famous essay, "What Is It Like To Be a Bat?" (1979),



The fact that an organism has conscious experiences at all means, basically, that there is something it is like to be that organism.  There may be further implications about the form of the experience; there may even (though I doubt it) be implications about the behavior of the organism.  But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism -- something it is like for the organism. We may call this the subjective character of experience.  It is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence.  It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing.
The relation between mind and body is one of philosophy's great problems, but consciousness plays a special role in making it problematic.  It's no trick to make a machine that processes information (well, I can say that now!).  All you have to do is take stimulus inputs and process them in some way so that the machine makes appropriate responses.  Your home computer does this.  So does a 19th-century Jacquard loom.  From this perspective, as the artificial intelligence pioneer Marvin Minsky once put it, the brain is just "a machine made of meat".  The trick lies in making a machine that is conscious.

As Nagel put it in his "Bat" essay, 

"Consciousness is what makes the mind-body problem really intractable.  Without consciousness, the mind-body problem would be much less interesting.  With consciousness, it seems hopeless."
How did we get to this point?


A Philosophical Disclaimer

I'm not a philosopher, and I don't have any advanced training in the history and methods of philosophy -- or any elementary training, either, for that matter.  In trying to make sense of philosophical analyses of the mind-body problem, which I think is necessary in order to have some perspective on the scientific work, I have relied on my own common sense, the tutelage of John Searle (who should not be held responsible for any of my own misunderstandings), and several excellent secondary sources. 

For highly readable introductions to Enlightenment philosophy, see:
  • The Story of Philosophy: The Lives and Opinions of the Greater Philosophers by Will Durant (1st Ed., 1926; 2nd Ed., 1962).  This is the classic one-volume history of philosophy -- and you've gotta love the subtitle.
  • The Dream of Enlightenment: The Rise of Modern Philosophy (2016) by Anthony Gottlieb, the second volume in his history of philosophy (the first volume, The Dream of Reason, covers the period from the ancient Greeks to the Renaissance).
  • The Age of Genius: The Seventeenth Century and the Birth of the Modern Mind by A.C. Grayling (2016). 
  • Soul Machine: The Invention of the Modern Mind by George Makari (2016).  
Reviewing Grayling's and Makari's books, the philosopher Colin McGinn notes that

There are three things that might be meant by "the emergence of the modern mind": first, the emergence of modern ways of thinking about the universe; second, the emergence of modern conceptions of the mind; and third, the emergence of the mind itself with its distinctive human characteristics....  Grayling... deals with the first of these questions, arguing that the seventeenth century was the pivotal century during which modern thought took its rise, particularly with respect to physics and astronomy....  Makari... deals with the second question, tracing the way the old conception of an immaterial and immortal soul gave way to a view that replaced "soul" with "mind" and placed the latter firmly within the body.

Neither author discusses the third question, though it is perfectly legitimate, doubtless requiring us to go back to prehistoric times when Homo sapiens first evolved.

These authors don't discuss the evolution of mind and consciousness, but I'll have something to say about this topic later, in the lectures on "The Origins of Consciousness".


In addition to these popular general introductions to philosophy I have particularly relied on the following:

  • Mind and Body, an exhibition catalog assembled by Prof. Robert Wozniak (National Library of Medicine, 1992). Wozniak is a distinguished historian of psychology, and this exhibit was put together to celebrate the centennial of the American Psychological Association.  Link to Mind and Body website.
  • Matter and Consciousness by Paul M. Churchland (MIT Press, Rev. Ed., 1988).  A great treatment of the classic positions.  Despite my disagreement with Churchland's more recent views, I depended on it in writing these lecture supplements.
  • Philosophy of Mind: A Contemporary Introduction by John Heil (3rd Ed., 2013). Provides up-to-date coverage of recent developments.
  • Mind: A Brief Introduction by John R. Searle (based largely on Searle's undergraduate course at UCB, Philosophy 134).

See also:

  • How to Be Animal: A New History of What It Means to Be Human (2021) by Melanie Challenger (reviewed by John Gray in "The Mind's Body Problem", New York Review of Books, 12/02/2021).  Challenger writes: "For those with a deep belief in a creator, we have a soul that is unique to the human body.  For secular humanist thinkers, we have soul-like mental powers that are unique to the human brain.  Each is a reason for why humans aren't truly animals.  At least, not in any crucial way."  Gray writes: "This discrepancy between what we know from scientific inquiry and the image we have formed of ourselves is Challenger's central theme", quoting her conclusion: "The truth is that being human is being animal.  This is a difficult thing to admit if we are raised on a belief in our distinction".
 

 

Dualism

All three major monotheistic religions -- Judaism, Christianity, and Islam -- hold that humans are different from other animals: because we have souls, and free will, we are to some extent separate from the rest of the natural world.  In much the same way, Hinduism and Buddhism accord humans greater spiritual worth than other animals.  Animist cultures, however, may not make such a clear distinction between humans and animals (or, for that matter, trees and rocks).   For more on cultural distinctions between humans and animals and other living things, see How to Be Animal: A New History of What It Means to Be Human (2021) by Melanie Challenger (reviewed by John Gray in "The Mind's Body Problem", New York Review of Books, 12/02/2021), discussed above.

In modern times, at least for many people, the soul has been replaced by the mind.  And the modern debate about mind and body has its origins at the beginnings of modern philosophy, with Rene Descartes (1596-1650), a 17th-century French philosopher who lived and worked in Holland for most of his life, and the other thinkers of the Enlightenment.

 

Descartes and His Impasse

Our story begins with Rene Descartes (1596-1650), a French philosopher whose project was nothing less than the achievement of perfect knowledge of things as they are, free of distortions imposed by our senses, emotion, and perspective.  To this end he adopted a posture of methodical doubt without skepticism.  He set out to doubt everything that had been taught by Aristotle and later Aristotelians, as well as by Aquinas and other Church fathers (although a devout Catholic, in this latter respect he showed the influence of the Protestant Reformation and its attacks on papal authority).  At the same time, he assumed that such certain knowledge was possible to attain.  So he began by doubting everything -- including the distinction between waking and dreaming.  For all he knew, there might be an "evil demon" who presents an illusory image of the external world to Descartes' sense apparatus, such that all of Descartes' sensory experiences are, in fact, illusory -- put bluntly, there's no external world, only mental representations created by this evil demon.  Descartes could wonder whether the world as it appeared to him was really the world as it existed outside the mind (if there was such a world at all); but he couldn't doubt that he was thinking about this problem.  Hence, he concluded that "I think, therefore I am" -- his famous formulation, cogito ergo sum.

Note that, technically, the one thing that Descartes couldn't doubt was that he was thinking -- that thoughts exist.  The body might indeed be an illusion, but thoughts definitely exist, precisely because he's engaged in the act of thinking.  It follows, then, that thoughts exist independently of bodies (i.e., brains).

Descartes also argued for a strict separation between human beings and animals.

In this way, Descartes (in the Discourse on Method of 1637 and the Meditations of 1641) articulated a position which has since been known as dualism, and in particular a view known as substance dualism.  In this view, mind and body are composed of two different substances, both created by God.


Body and Soul, Body and Mind, Body and Brain

Unlike most Western philosophers who came before him,  Descartes was a layman, but he was also a dutiful son of the Catholic Church.  Like Galileo, he advocated a heliocentric view of the universe.  But when he heard of Galileo's persecution, he suppressed his own view.  As it happens, Church doctrine also influenced Descartes' thinking about body and mind.   Descartes made his argument for dualism in part on the basis of his religious beliefs: Church doctrine held that Man was separated from animals by a soul, and for Descartes, soul was equivalent to mind, and mind was equivalent to consciousness.  Mind is the basis for free will, which provides what we might call the cognitive basis for sin.

However, Descartes' dualism was also predicated on rational grounds.  It just seemed to him that body and mind were just -- well, somehow different.

And so it seems to the rest of us, too.  Paul Bloom, in Descartes' Baby (2004), notes that even young children -- once they have a concept of the mind, a topic we shall discuss later in the lectures on "The Origins of Consciousness" -- appear to believe that their minds are somehow distinct from their bodies.  And although they know about brains, they appear to construe their brains as "cognitive prostheses" that help them think, instead of organs that actually do all our thinking.

It has been suggested that dualism is intuitively appealing because the brain does not perceive itself, the way it perceives sounds and lights, the rumblings of your stomach and dryness of your mouth, your heart beating rapidly and your face flushing.  There's "no afference in the brain" -- just (just!) interneurons processing environmental events, manipulating thoughts, and organizing responses.  We can't feel ourselves thinking. 

Of all of Descartes' ideas, dualism is perhaps the most influential.  Not for nothing, I suppose, when he died his skull was buried in a different place than the rest of his body (see Descartes' Bones: A Skeletal History of the Conflict Between Faith and Reason by Russell Shorto).  Wolfram Eilenberger, in The Time of the Magicians (2020), his survey of 20th-century philosophy, describes modern philosophy as "the abstraction-fixated, anti-corporeal, consciousness-obsessed modern age of Rene Descartes and his methodical successors".




Descartes' dualism is also an interactive dualism.  Mind and body are composed of separate substances, but they interact with each other.

But how can a material substance affect an immaterial one, and vice-versa?  Descartes reasoned that this interaction took place within the pineal gland (also known as the pineal body), a structure located near the center of the brain.  He chose the pineal gland partly because of its central location, but also because he believed it to be the only brain structure not duplicated in both hemispheres (there were other reasons as well).



It is now known that the pineal gland is sensitive to light, and when stimulated releases melatonin, a hormone that is important for regulating certain diurnal rhythms.  The pineal gland may not be where mind and body meet, but it does seem to be an important component in the biological clock.

And, for that matter, he was wrong about the pineal gland being unduplicated, too: close microscopic examination reveals that it is divided into two hemisphere-like structures.  But Descartes had no way of knowing this, so we shouldn't count it against him.

But aside from locating it in the pineal gland, Descartes had no clear idea how this mind-body interaction took place.  This is what the philosopher G.N.A. Vesey referred to as the Cartesian impasse (in The Embodied Mind, 1965).



Mind and Body in Opera

There was interest in the relations between mind and body before Descartes.  What is often considered to be the very first opera (or, perhaps, the first example of the oratorio form) was Rappresentatione di Anima e di Corpo ("Representation of the Soul and the Body") by Emilio de Cavalieri, an early composer of the Italian Baroque period, produced on stage in 1600.  At least, that was Cavalieri's own claim (the honor of writing the first opera usually goes to Claudio Monteverdi, for L'Orfeo, 1607).  In the opera, Soul and Body battle it out in a kind of morality play, with cameo roles for characters like Intellect, Prudence, and Pleasure.

Link to a performance of Rappresentatione by an ensemble led by Rene Jacobs (YouTube).


Varieties of Dualism

Many later philosophers tried to find a way out of, or around, the Cartesian impasse.  In so doing, they adopted a dualistic ontology, but not necessarily of Descartes' form, involving different interacting substances.

 


Nicholas Malebranche (1638-1715) proposed a view called occasionalism.  In his view, neither mind nor body has causal efficacy.  Rather, all causality resides with an omniscient, omnipotent God.  The presence or movement of an object in a person's visual field is the occasion for God to cause the visual perception of the object in the observer's mind.  And the desire to move one's limb is the occasion for God to cause the limb to move. 

Baruch Spinoza (1632-1677) proposed dual-aspect theory (also known as property dualism).  In his view, both mind and body were causally efficacious within their own spheres.  Physical events cause other physical events to occur, and mental events cause other mental events to occur.  But both kinds of events were aspects of the same substance, God.  Because God cannot contradict himself, he thus ordains an isomorphism between body and mind.

Gottfried Wilhelm Leibniz (1646-1716) proposed psychophysical parallelism.  In his view, mind and body were different, yet perfectly correlated.  Leibniz's parallelism abandons the notion of interaction.  He saw no way out of Descartes' impasse, because he could not conceive of any way for two different substances, one material and one immaterial, to interact.  He also rejected occasionalism as mere hand-waving.  The only remaining alternative, in his view, was parallelism: when a mental event occurs, so does a corresponding physical event, and vice-versa.  This situation was established by God when he created Man, and doesn't make use of the pineal gland or anything else.

These neo-Cartesian views may seem silly to us now, but it is important to understand why people were so concerned with working out the mind-body problem from a dualistic perspective.

 

Monism

We know, intuitively, that mind somehow arises out of bodily processes, but we also know that mind somehow controls our bodily processes.  This "somehow" is the great sticking-point of dualism.  Apparently, the only way out is to abandon dualism for monism, the ontological position that there is only one kind of substance.  But if there is only one substance, which one to choose?

 

Idealism  

Faced with the choice, some monists actually opted for mind over body.  Chief among these was George Berkeley (1685-1753, pronounced "Barklee", like the basketball player), the Anglican bishop whose line of verse, "Westward the course of empire takes its way", inspired the naming of Berkeley, California (pronounced "Berklee", like the music school in Boston).  By insisting that there is no reality aside from the mind that perceives it, Berkeley articulated a position variously known as immaterialism, idealism, psychic monism, or mentalistic monism.  For Berkeley, only two types of things exist: perceptions of the world, and the mind that does the perceiving.

A Zen koan asks: If a tree falls in the forest, and nobody hears it, does it make a sound?  For Berkeley, the answer is a firm No.  Berkeley's philosophical doctrine, esse est percipi -- "to be is to be perceived" -- holds that nothing exists independently of the mind of the person who perceives it.  Obviously, there's no sound unless the mechanical vibrations in the air generated by the falling tree fall upon the eardrum of a listener.  But for Berkeley, the tree itself doesn't exist unless there's an observer present to perceive it.  Strange as it may seem, Berkeley was what we would now call a philosophical "empiricist".  He believed, like Locke and other empiricists, that all our knowledge of the world comes from our sensory experience; but he concluded that if all we know of the world is our sensory impressions of it, then mind itself, not matter, must be the fundamental thing.

Where do those sense impressions come from?  From God (Berkeley wasn't an Anglican bishop for nothing).  Berkeley was an immaterialist monist, remember: for him, matter doesn't exist.  Nobody's ever seen matter, or touched it, or heard it.  All we have are sense impressions of brightness, smoothness, loudness, etc.  In this sense, Berkeley is a Lockean: all our ideas are, indeed, derived from sense experience.  It's just that our sense experiences don't arise from our encounters with matter.  They are put into our minds by God, who arranges them in such a way that they follow the laws of Newtonian mechanics.  We just assume matter exists, in order to account for our sense experiences.  But, in reality, matter is just an occult substratum, and applying Occam's Razor, Berkeley simply does away with it as an unnecessary idea.  Then, employing the Lockean processes of reflection and imagination, we derive other ideas from those sense impressions.

James Boswell (1740-1795) tells the story of what happened when he told Samuel Johnson (1709-1784) about his (Boswell's) inability to refute Berkeley's doctrine (Boswell, Life of Johnson, Vol. 1, p. 471, entry for 08/06/1763):

After we came out of church we stood talking for some time together of Bishop Berkeley's ingenious sophistry to prove the non-existence of matter, and that every thing in the universe is merely ideal.  I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it.  I shall never forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it, "I refute it thus."

To which Berkeley would have replied: "Actually, what happened is that God put the idea of the stone in your head, and when you kicked it he put the idea of the painful stubbed toe in your head too". 

And here's how he would have solved that Zen koan: of course it makes a sound, because God is everywhere and perceives everything.  It makes a sound because God hears it.

So, you've got to have an omni-present, all-perceiving God to make Berkeley's system work.  This didn't bother Berkeley at all, though it bothers some people nowadays.  Still, immaterialism has attracted some adherents in the years since.

Chief among these was the American psychologist Morton Prince, who held that consciousness was a compound of mental elements which he called -- no kidding -- "mind-stuff" (in The Nature of Mind and Human Automatism, 1885).  In his view, all material elements possessed at least a small amount of mind-stuff; when these elements are combined, their corresponding mind-stuff is also combined, so that consciousness emerges at a certain level of the organization of matter.  At first glance, this looks like property dualism, but Prince was really an immaterialist monist.  In his words (and punctuation):



[I]nstead of there being one substance with two properties or 'aspects,' -- mind and motion [extension], -- there is one substance, mind; and the other apparent property, motion, is only the way in which this real substance, mind, is apprehended by a second organism: only the sensations of, or effect upon, the second organism, when acted upon (ideally) by the real substance, mind.


Materialism

However, most monists opted for body over mind, a position variously known as materialist monism, or simply materialism.




The Automaton Theory

Among the first materialist monists was Julien Offray de la Mettrie (1709-1751), who claimed that humans, like animals, were automata, with their behaviors governed by reflex.  The only difference between humans and animals was that humans were conscious automata, aware of objects and events in the environment, and of their own behavior.  For de la Mettrie, mental events exist, as Descartes had said, but they are caused by bodily events and lack causal efficacy themselves.  De la Mettrie's book (L'Homme Machine, or "Man a Machine", 1747) was promptly burned by the Church, and de la Mettrie himself was sent into exile.

But his essential position was subsequently adopted by Pierre Jean Georges Cabanis (1757-1808).  In a formulation that will be familiar to those who have read much more recent work on the mind-body problem, Cabanis claimed that 

"Just as a stomach is designed to produce digestion, so a brain is an organ designed to produce thought".


Epiphenomenalism

The views of de la Mettrie and Cabanis set the stage for epiphenomenalism, a term coined by Shadworth Holloway Hodgson (1832-1912).  In Hodgson's view, thoughts and feelings had no causal efficacy.  They were produced by bodily processes, but they had nothing to do with what the body does.

Whereas de la Mettrie thought that (inefficacious) consciousness set humans apart from animals, T.H. Huxley, arguing from the principle of evolutionary continuity, ascribed consciousness to at least some nonhuman animals as well.  But Huxley also argued, with Hodgson, that conscious beings were still conscious automata.

Epiphenomenalism solved half the mind-body problem, by concluding that mind and body do not interact.  But the other half persisted: how does body give rise to mind in the first place?  Of course, if you don't think that mind is important, this question loses much of its hold.  But if you think that mind is important, precisely because it has causal powers, then you can't be an epiphenomenalist, or even a proponent of conscious automata.  But if you don't want to be an idealist, or a dualist, then what?

Epiphenomenalism is perfectly captured in one of its earliest expressions, the steam-whistle analogy of T.H. Huxley (1874), known as "Darwin's Bulldog" for his vigorous defense of the theory of evolution by natural selection:




The consciousness of brutes would appear to be related to the mechanism of their body simply as a collateral product of its working, and to be completely without any power of modifying that working as the steam whistle which accompanies the work of a locomotive engine is without influence upon its machinery.

Note, however, that Huxley refers to "the consciousness of brutes" (emphasis added).  For all that he and Darwin believed about evolutionary continuity, he still seemed to believe that there was something special about the consciousness of humans -- to wit, that it might indeed play a causal role in behavior.


The Explanatory Gap

However, even epiphenomenalism acknowledges consciousness -- it just thinks it's epiphenomenal.  So even epiphenomenalism can't escape what the philosopher Joseph Levine calls the explanatory gap.  Levine argued that purely materialist theories of mind, which rely solely on anatomy and physiology, cannot, in principle, explain how the physical properties of the nervous system give rise to phenomenal experience.  To illustrate his reasoning, consider the following two physicalist explanations:

  • Heat is identical to molecular motion -- the mean kinetic energy of the molecules of a substance.
  • The color red is identical to electromagnetic radiation of a particular wavelength (roughly 700 nm).
Levine argues that the causal links from molecular motion to heat are clear (at least, to anyone who has passed a course in Newtonian physics).  But the causal link from electromagnetic radiation to red has a gap in it.  It's one thing to say that electromagnetic radiation excites "long wavelength" cones in the retina, which gives rise to neural impulses which pass along the optic nerve to opponent processes in the lateral geniculate nucleus (or whatever), and then to Area V1 (the primary visual cortex) and Area V4 (the "color area"), along with other cortical endpoints in the visual system.  But none of this chain of physical causation explains why 700 nm radiation gives rise to the experience of red as opposed to yellow, green, or blue -- or, indeed, why it gives rise to any experience at all.
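To see why the first explanation seems gapless, recall the elementary identity from kinetic theory (my illustration, not Levine's): for an ideal gas, temperature just is a measure of the mean translational kinetic energy of the molecules,

\frac{3}{2} k_B T \;=\; \left\langle \tfrac{1}{2} m v^2 \right\rangle

Once the identity is grasped, there is nothing left over to explain.  There is no analogous equation that takes you from 700 nm radiation to the experience of red -- and that is the gap.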


Mysterianism

In the Principles of Psychology (1890), William James embraced the viewpoint that the purpose of consciousness was to aid the organism's adaptation to its environment.  James rejected both the automaton theory (and epiphenomenalism) and the mind-stuff theory, devoting a chapter of the Principles to each view (James was close friends with Prince, but friendship has its limits).  But James himself never got beyond Descartes' impasse.  Convinced as a scientist that materialism was right, but also convinced as a human being that mind had causal efficacy, James ended up adopting a variant of psychophysical parallelism -- that there was a total correspondence of mind states with brain states, but that empirical science can't say more than that.  James considered this stance to reflect an intellectual defeat, and he simply resigned himself to it and to the Cartesian impasse (did I mention that James was chronically depressed?).

James's resignation is echoed in some quarters of contemporary philosophy in a doctrine that has come to be known as Mysterianism, so named by the philosopher Owen Flanagan after the 1960s one-hit rock group "Question Mark and the Mysterians" (their one hit was "96 Tears", which was originally supposed to be titled "69 Tears" until someone thought better of it).  Mysterianism holds that consciousness is an unsolvable mystery, and that the mind-body problem will remain unsolvable so long as consciousness remains in the picture.

An early version of modern Mysterianism was asserted by the philosopher Thomas Nagel, in his famous 1979 "Bat" essay, which was really an argument that consciousness is not susceptible to analytic reduction:




"[T]his... subjective character of experience...  is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence.  It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people through they experienced nothing."

Now, on the surface, this could simply be taken as a statement of dualism.  But recall that Nagel's essay began on a note of despair about whether the mind-body problem is soluble at all (emphasis added):

"Consciousness is what makes the mind-body problem really intractable. Without consciousness, the mind-body problem would be much less interesting.  With consciousness, it seems hopeless."

A more recent version of Mysterianism was outlined by the philosopher Colin McGinn (1989, 2000), who assumes that materialism is true, but asserts that we are simply not capable of understanding the connection between mind and body -- in his terms, of gaining cognitive closure on this issue:

 

 

We have been trying for a long time to solve the mind-body problem. It has stubbornly resisted our best efforts. The mystery persists. I think the time has come to admit candidly that we cannot resolve the mystery....  [W]e know that brains are the de facto causal basis for consciousness, [but] we are cut off by our very cognitive constitution from achieving a conception of... the psychophysical link.

Whereas James thought that the mind-body problem was a more-or-less temporary failure of empirical science, McGinn argues that the mind-body problem is insoluble in principle, because we simply lack the cognitive capacity to solve it -- to achieve what he calls cognitive closure on the problem.

McGinn draws an analogy between the limits of psychological knowledge and Piaget's theory of cognitive development.  In Piaget's view, a child at the sensorimotor or preoperational stage of cognitive development simply cannot solve certain problems that can be solved by children at the stages of concrete or formal operations.  McGinn implies that if there were a fifth stage of cognitive development, or maybe a sixth or seventh, maybe we could get closure on the mind-body problem.  But since there are inherent limitations on our cognitive powers, we can't.


A Post-Operational Stage of Thought?

Actually, there just might be a post-operational stage, or stages.  Piaget's theory implies that the various stages of cognitive development can only be known from the standpoint of a higher stage.  So if Piaget was able to identify a stage of formal operations, that implies that somebody -- Piaget himself, at least! -- must be at yet a higher stage of cognitive development. Call this Stage 5.  But even Piaget never claimed to have solved the mind-body problem, so if McGinn is right then cognitive closure will have to wait until somebody gets to a sixth or higher stage.

In fact, the promise of such higher stages of cognitive development is made explicit by the Transcendental Meditation (TM) movement founded by the Maharishi Mahesh Yogi. One of the Maharishi's followers, the late psychologist Charles N. ("Skip") Alexander, drew explicit parallels between Maharishi's stage theory of consciousness and Piaget's stage theory of cognitive development.  And, in fact, TM holds out the promise of achieving through meditative discipline a union of mind and body -- from which standpoint, presumably, the solution to the mind-body problem will be clear.  See, for example, Higher Stages of Human Development: Perspectives on Adult Growth, ed. by Alexander and E.J. Langer (Oxford University Press. 1990).

McGinn hastens to point out that psychology is not alone, among the sciences, in lacking cognitive closure on some problem.  Physics and mathematics are the most advanced (and arguably the most fundamental) of the sciences, but physicists labor under the implications of Heisenberg's uncertainty principle, and mathematicians suffer from Gödel's incompleteness theorem.  Still, physicists and mathematicians manage to get a lot of work done, solving other problems, and that's what psychologists and philosophers should do too.

In other words: if the mind-body problem is insoluble, we should just get over it.

That's just the position taken by Michael Shermer, publisher of Skeptic magazine and author of the "Skeptic" column in Scientific American.  In "The Final Mysterians" (Scientific American, 07/2018), Shermer distinguishes among three forms of Mysterianism:

Although set aside by James in the latter part of the 19th century, and by the Mysterians in the latter part of the 20th, the mind-body problem has continued to incite effort in the 20th and now the 21st centuries, including the widespread embrace of materialist monism and even a revival of dualism.  But first it had to go through a stage in which the mind-body problem itself was simply rejected.


A personal note: Frankly, in all the time I have taught this course, I have not lost a single night's sleep over the "Hard Problem" of consciousness -- or, for that matter, the "Easy Problem", either.  I'm content to take consciousness as a given, obviously and somehow a product of brain activity, and just leave it at that.  There are other questions about consciousness that I'd prefer to spend my time and effort on answering -- like the questions posed in the remainder of this course.

I suppose, then, that my stance is a little like the advice of Virgil to Dante, when the latter expresses concern with the distinction between "flesh and spirit" (Dante gets pretty much the same advice from Thomas Aquinas when he asks him how God could create an imperfect world in Paradiso xiii):

State contenti, umana gente, al quia.

Rest content, race of men, with the that
[i.e., be satisfied with what experience tells you, and don't think too hard about reasons]
(Inferno, iii, 37, translated by J.D. Sinclair)


Behaviorism

That was what behaviorism was all about.  The behaviorists -- philosophers and psychologists alike -- effectively solved the mind-body problem by rejecting it, and they did this by rejecting the mind as a proper subject-matter for science.


In Psychology

Within psychology, the seminal documents of the behaviorist revolution are by John B. Watson:

  • "Psychology as the Behaviorist Views It" (Psychological Review, 1913), the so-called "behaviorist manifesto"; and
  • Behaviorism (1924), his book-length statement of the position.

Watson's behaviorism was a reaction to both structuralism and functionalism in the new scientific psychology.  To Watson (and he was not alone), introspection seemed to be getting the field nowhere, and had become bogged down in disputes over such issues as whether there could be imageless thought; Watson himself also hated serving as an introspective observer.  He had been trained as a functionalist at Chicago, and had conducted a successful line of research on animal learning, but he balked at the functionalists' attribution of consciousness to nonhuman animals.

Watson's central assertions, drawn from the 1913 manifesto, were as follows:

  • Psychology is a purely objective, experimental branch of natural science.
  • Its theoretical goal is the prediction and control of behavior.
  • Introspection forms no essential part of its methods.
  • The behaviorist recognizes no dividing line between man and brute.
Consciousness is simply irrelevant to this program, because consciousness has nothing to do with behavior.  New behaviors are acquired as a result of learning (construed as the conditioning of reflexes), and individual differences in behavior are the result of individual differences in learning experiences.  Watson's emphasis on learning is exemplified by his famous dictum:

Give me a dozen healthy infants, well formed, and my own specified world to bring them up in and I'll guarantee to take any one at random and train him to become any type of specialist I might select -- doctor, lawyer, artist, merchant-chief, and yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.

Watson left psychology in 1920, when he was fired from his faculty position at Johns Hopkins in the midst of a messy divorce scandal involving his relationship with a graduate student.  Eventually Watson found work as the resident psychologist at J. Walter Thompson, the famous advertising agency, in which capacity he invented the "coffee break" as part of a campaign for Maxwell House coffee (See?  Psychology has practical benefits!).  But behaviorism wasn't harmed by this reversal of his personal fortune, and clearly dominated psychological research from the 1920s to the 1960s -- though not in its pure Watsonian form.  

In the 1930s, behaviorists moved beyond Watson's "muscle-twitch" psychology, in which behavior was construed as a string of innate and conditioned reflexes closely tied to the structure of the peripheral nervous system, toward an emphasis on molar rather than molecular behavior.  In the words of E.B. Holt (1931), what came to be called neobehaviorism was concerned with "what the organism is doing".  For example, at Berkeley E.C. Tolman showed that rats who had been trained to run a maze for food would also swim for food if the maze was flooded.  The maze was the same, but because running and swimming involved different muscle movements, what had been learned was a molar behavior -- e.g., "get from Point A to Point B" rather than a particular sequence of muscular movements in response to the maze stimulus.  

Moreover, many neobehaviorists held more liberal attitudes than Watson on the role of mental states in behavior.

Within neobehaviorism, Watson's radical stance on consciousness was maintained by B.F. Skinner, working first at Indiana and then at Harvard.  In The Behavior of Organisms (1938), Skinner traced the effects of different schedules of reinforcement on behavior.  Much more than Tolman and Hull, Skinner strictly abjured all mentalistic language (he wrote a famous paper on "Why I Am Not a Cognitive Psychologist").  In his view, consciousness is an epiphenomenon -- a linguistic construct for ordinary folk, and a linguistic trap for scientists.  In Skinner's system, feelings are, at best, the products, not the causes, of behavior, and are irrelevant to predicting how organisms will behave.  In predicting behavior, all that matters is the organism's history of reinforcement.  Humans are no different from other animals, except perhaps that their reinforcement histories are somewhat more complex and less well known.  In such provocative books as Beyond Freedom and Dignity, Skinner argued that because humans are controlled by environmental stimuli, people are neither free in their behavior nor responsible for their actions.  As he put it: "We are all controlled, and we all control".


In Philosophy

Behaviorism in psychology had its parallel in philosophical behaviorism, whose roots in turn were in earlier approaches to the philosophy of mind.

Logical positivism began in the "Vienna Circle" of philosophers, such as Herbert Feigl and Rudolf Carnap, and was promoted in England by A.J. Ayer and in America by Ernest Nagel.  Logical positivism is based on the verification principle, which states that sentences gain their meaning through an analysis of the conditions required to determine whether they are true or false.  There are, in turn, two means of verifying the truth-value of a statement:

  • analytically, by virtue of the meanings of the terms involved, as in logic and mathematics; or
  • synthetically, through empirical observation of the world.
Sentences which cannot be empirically verified, either analytically or synthetically, are held to be meaningless.   Because statements about people's mental states are empirically unverifiable, the logical positivists held -- or, at least, they strongly suspected -- that any statement about consciousness -- and indeed the whole notion of consciousness itself -- is simply meaningless.

Linguistic philosophy, which arose through the work of Ludwig Wittgenstein, Gilbert Ryle, and J.L. Austin, argued that most problems in philosophy are based on misunderstandings or misuses of language, and that they simply dissolve when we pay attention to how ordinary language is used.  Linguistic philosophers placed great emphasis on reference -- the objects in the objective, publicly observable real world to which words refer.  Because mental states are essentially private and subjective, linguistic philosophers harbor the suspicion that mental states don't refer to anything at all, and that mental-state terms like "consciousness" are essentially meaningless.

The Ghost in the Machine.  While both logical positivists and linguistic philosophers focused a great deal of attention on the terms representing mental states, the philosophical behaviorists, like their psychological counterparts, tended to dispense with mental-state terms entirely.  For example, in The Concept of Mind (1949), Gilbert Ryle argued that the mind is "the ghost in the machine" (a phrase that inspired the classic rock album by The Police).  For Ryle, the whole mind-body problem reflects what he called a category mistake -- a mistake in which facts are represented as belonging to one domain when in fact they belong to another.  With respect to mind and body, the category mistake comes in two forms:


For Ryle, the mind doesn't reside inside the person, or inside any part of the person.  Rather, the term "mind" refers to how a person behaves or is disposed (i.e., likely) to behave.  Thus, to say that someone is intelligent or vain is not to report some private inner state, but to say that he or she is disposed to behave in certain ways under certain circumstances.

Similarly, mental terms don't refer to private experiences -- otherwise, listeners could not check the accuracy of speakers who speak of their mental states.  They refer to behaviors and the circumstances under which they occur.  Moreover, one cannot teach another person to use a mental term to refer to private experience, because the teacher has no means of checking whether the student is correct.  Rather, mind is identified with behavior or behavioral disposition; mental states with behavioral states and the circumstances under which they occur.  We teach people to use mental terms under certain circumstances -- to say "I want lunch" at noon, for example.

The bottom line in philosophical behaviorism is that we cannot identify mental states independent of behavioral states and the circumstances in which behavior occurs.  This is also essentially the bottom line for psychological behaviorism, to the extent that it talks about mental states at all.  Both psychological and philosophical behaviorisms seek to get around Descartes' impasse by refusing to talk about mind at all.


The Cognitive Revolution

The hegemony of the behaviorist program within psychology began to fade in the 1950s, with the gradual appreciation that behavior could not be understood in terms of the functional relations among stimulus, response, and reinforcement.

Interestingly, the deep sources of the cognitive revolution lie with some neobehaviorists who made their careers in the study of animal learning, but who critiqued salient features of the behaviorist program:

  • E.C. Tolman demonstrated latent learning -- learning that occurs in the absence of reinforcement -- and argued that his rats acquired "cognitive maps" of the maze, rather than chains of conditioned responses.
  • Harry Harlow showed that monkeys acquired learning sets -- they "learned how to learn" -- picking up general strategies that could not be reduced to any particular history of reinforcement.
These early anomalous findings were generally ignored or dismissed by practitioners of behaviorist "normal science", in a manner that would make Thomas Kuhn proud.  But in the 1960s, evidence for cognitive constraints on learning mounted rapidly:

  • Robert Rescorla showed that classical conditioning depends on the predictive relation (contingency) between the conditioned and unconditioned stimuli, not merely on their pairing.
  • Leon Kamin's "blocking" experiments showed that conditioning occurs only when the unconditioned stimulus is surprising.
  • Martin Seligman and Steven Maier showed that animals exposed to inescapable shock developed "learned helplessness" -- as if they had learned that events were uncontrollable.
These and other studies, following on the pioneering research of Tolman and Harlow (and others) made it clear that we cannot understand even the simplest forms of learning and behavior in so-called "lower" animals -- never mind complex learning in humans -- without recourse to "mentalistic" constructions such as predictability, controllability, surprise, and attention.  Although the cognitive revolution was chiefly fomented by theorists who were interested in problems of human perception, attention, memory, and problem-solving, the results of these animal studies increased the field's dissatisfaction with the dominant behaviorist paradigm, and increased support for the cognitive revolution in psychology.

The cognitive revolution was also fostered by the development of the high-speed computer, which could serve as a model, or at least a metaphor, of the human mind: the computer's hardware was analogous to the brain, and its program, or software, was analogous to the mind.
Put concisely, the computer model of the mind showed how a physical system could, in some sense, "think" by acquiring, storing, transforming (i.e., computing) and using information.  In some respects, the computer metaphor solved the mind-body problem in favor of materialism, because it showed how a physical system could operate something like the mind.  But the fact that computers could "think" (or at least process information) raised the question of whether computers were, or could be, conscious.  Put another way, the fact that computers could "think" without being conscious suggested that maybe consciousness was epiphenomenal after all.  In any event, the fact that psychological discourse was re-opened to mental terms revived discussion of the mind-body problem, and promptly stuck us right back in the middle of Descartes' impasse -- exactly where William James had left us in 1890!

 

The Contemporary Materialist Stance

One difference between the beginning of the 21st century and the end of the 19th is that not too many cognitive scientists  seriously entertain dualism as a solution to the mind-body problem.  Almost everybody adopts some form of the materialist stance, for the same reason that James did -- because we understand clearly that mental states are somehow related to states of the brain.  At the same time, many people remain closet dualists, for reasons articulated earlier: religious belief, the persistence of folk psychology, or disciplinary commitment to the psychological level of analysis (that's my excuse!).  Others, however, have gotten beyond what they see as irrationality, sentimentality, or ideology to proliferate a variety of "new" materialisms -- or, at least, more precise statements of materialist doctrine.

 

Identity Theory

Modern versions of materialism are based on one or another version of identity theory, which holds that mental states are identical to brain states.  This position is distinct from epiphenomenalism, which holds that mental states are the effects of brain states; and it is also distinct from psychophysical parallelism, which holds merely that mental states are correlated with brain states.

 

Identity theory comes in two forms:

  • type identity, which holds that every type of mental state is identical to a particular type of brain state; and
  • token identity, which holds only that every particular instance (token) of a mental state is identical to some particular brain state.
Types, Tokens, and Categories

The type and token versions of identity theory are named by analogy to the type-token distinction in categorization: the type is the category (e.g., undergraduate class), while a token is a particular instance of that category (e.g., Psychology 129 as taught in the Spring Semester of 2005).

The difference between the two versions of identity theory has important implications for scientific research.

  • Under type identity theories, every instance of a particular mental state must correspond to the same brain state.  Every time you see red, or think about your mother, you have to be in the same brain state as the last time you saw red or thought about your mother.  This is true both within subjects and between subjects, and also arguably true across species.  When I see red, I'm in the same brain state you are in when you see red. If type identity were true, we could foresee someday abandoning mental language for brain language, because a description of a brain state could substitute for a description of a mental state.  But type identity does not require this move. Because mental states and brain states refer to the same thing, which you choose is a matter of convenience.  Brain-imaging technologies tacitly assume that type identity is true.  When we image someone's brain, to determine its activity while he or she is engaged in some cognitive task (like looking at a patch of red or thinking about his or her mother), we do not take only a single image.  Science requires replication, so we take several images from the subject, and in fact we image the brains of a number of different subjects performing the same task, and then average across trials and subjects to eliminate noise and get a reliable picture of the brain at work.  But if one person's brain is in a different state every time he sees red, and if another person's brain is in a different state than the first person's brain, and in a different state than her own brain every time she sees red, then the various images will cancel themselves out.  Brain-imaging only makes sense if we can assume that both the same person and different people are in the same brain state every time they have a particular experience or engage in a particular activity.
  • Under token identity theories, different instances (tokens) of a mental state may be associated with different brain states, and this is true within an individual, between individuals, and across species.  If token identity is true, we can never abandon mental language for brain language, and mental language is the only coherent way we have of talking about behavior, and the only way we have of sharing our experiences, thoughts, and actions with each other.  Note, however, that if token identity is true, then brain-imaging becomes a very risky, uncertain business.

Many traditional cognitive scientists prefer token identity theories, which have the benefit of allowing mental talk and folk psychology to remain legitimate discourse.  But some cognitive scientists seem to prefer type identity theories, and seek to abandon mental talk as soon as neuroscience permits.

The distinction between type and token identity theories has important implications for the neuroscientific effort to identify the neural correlates of conscious mental states.  Consider how these experiments are done.  

  1. Subjects are asked to perform some cognitive task (e.g., detecting stimuli, or making discriminations, or reading words, or whatever).
  2. While they are engaged in this mental activity, the activity of their brains is recorded (e.g., by surface electrodes connected to an EEG, or by brain-imaging techniques such as PET or fMRI).
  3. The brain activity is then averaged across subjects to reveal the neural signature of the mental state -- what the brain is doing when subjects are engaged in this task.

Note that this methodology assumes type identity -- that every brain that is performing a particular task is in the same state as every other brain that is performing that same task.  Otherwise, it wouldn't make any sense to average across subjects.  If token identity were true, and every individual brain were in a different state when performing the same task, then the activities of the several individual brains would cancel each other out, and the researcher would be left with nothing but noise.  But the signal -- the brain signature -- emerges only when you average across individuals.

If researchers believed in token identity, they'd go about their studies in a very different way.  They'd scan a single individual, on many occasions, and average across trials but not individuals to identify the brain signal, and then conclude that some particular part of the brain is active when this particular individual is performing this particular task.  But who cares about particular individuals?  Nobody -- not at this level of analysis, at least.  The goal of cognitive neuroscience is to obtain generalizable knowledge about the relations between brain activity and cognitive processes.  Which is why cognitive neuroscientists average across subjects, not just across trials within subjects -- which shows that they embrace a type identity view of mind-brain relations after all.
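The point about averaging can be made concrete with a toy simulation (my own sketch, not drawn from any actual imaging study; the subject counts, the "signature" region, and the noise level are all arbitrary assumptions):

```python
import numpy as np

# Toy simulation of averaging in brain-imaging studies (illustration only).
rng = np.random.default_rng(0)
n_subjects, n_trials, n_voxels = 20, 50, 100

# Hypothetical task "signature": extra activity in voxels 40-59.
shared_signature = np.zeros(n_voxels)
shared_signature[40:60] = 1.0

def simulate(identity):
    """Simulate noisy images for every subject and trial.

    "type"  -- every subject shows the SAME signature (type identity).
    "token" -- each subject gets a private, random signature, as if the
               same mental state were realized differently in every brain.
    """
    images = np.empty((n_subjects, n_trials, n_voxels))
    for s in range(n_subjects):
        if identity == "type":
            sig = shared_signature
        else:
            sig = np.zeros(n_voxels)
            sig[rng.choice(n_voxels, 20, replace=False)] = 1.0
        # Each trial = the subject's signature plus random noise.
        images[s] = sig + rng.normal(0.0, 2.0, (n_trials, n_voxels))
    return images

for identity in ("type", "token"):
    # Average over trials AND subjects, as imaging studies do.
    grand_mean = simulate(identity).mean(axis=(0, 1))
    contrast = grand_mean[40:60].mean() - grand_mean[:40].mean()
    print(f"{identity} identity: target-region contrast = {contrast:.2f}")
```

Under the type-identity assumption the shared signature survives averaging across subjects; under the token-identity assumption each subject's idiosyncratic signature washes out into a uniform background, leaving essentially nothing -- which is just the argument made above.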


Eliminative Materialism

The desire to abandon mental talk and folk psychology leads to a position on the mind-body problem known as eliminative materialism (Feyerabend, 1963; Feigl, 1958/1967).  Eliminative materialism seeks to abandon talk of the mental altogether, substituting talk of brain states instead.  It hopes to follow neuroscience as it develops, and let language evolve in response to scientific advances.

Eliminative materialism is all well and good, if you go for that sort of thing, but it is troubled by Leibniz' Law:

If two terms refer to the same object, then any property of the first term must also be a property of the second.
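In the standard notation of predicate logic, Leibniz' Law (the indiscernibility of identicals) can be written:

a = b \;\rightarrow\; \forall F \, \bigl( F(a) \leftrightarrow F(b) \bigr)

That is, if a and b are identical, then every property F of a is a property of b, and vice-versa.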

It turns out that when you talk about mind and body, Leibniz' Law isn't necessarily upheld:

  • Brain states have properties, such as spatial location, mass, and electrochemical activity, that it seems nonsensical to ascribe to thoughts (where, exactly, is your idea of justice, and how much does it weigh?).
  • Mental states seem to have properties, such as subjectivity, that brain states lack.
Actually, it's not clear that Leibniz' Law actually applies to the mind-body question.  To require mind and body to have the same properties may be to make one of Ryle's category mistakes.  But John Searle has applied Leibniz' Law in an interesting way to argue that consciousness isn't going to be reduced to physical processes -- or, at least, not without a struggle.  This is because consciousness has a subjective, first-person ontology that brain states lack, while brain states have an objective, third-person ontology that mental states lack.    

Anyway, the materialist position has many current adherents, especially among those who wish to abandon folk psychology -- for example, Paul and Patricia Churchland, whose views are discussed below.

Of course, eliminative materialism is a program of reductionism -- specifically, eliminative reductionism, entailing the view that the laws of mental life discovered by psychology can (and should) be reduced to the laws of biological life discovered by biology -- and, presumably, thereafter to the laws of physics.  The implication of reductionism is that physics is the fundamental science, and everything else is just nonscientific "folk" talk, to be eliminated as soon as science allows.  As the physicist Ernest Rutherford once put it:

"All science is either physics or stamp collecting."

Never mind that Rutherford got the Nobel Prize in chemistry.  He regarded himself as a physicist, and is generally regarded as the father of nuclear physics.

Of course, such a position is unpalatable to anyone but physicists and those possessed of physics-envy (and perhaps philatelists).  Partly for that reason, the Churchlands, Patricia and Paul, have argued (together and separately) for a program of intertheoretic reductionism, which promises to preserve the legitimacy of the psychological level of analysis. 

The Churchlands claim not to be eliminative reductionists, but they are also very clear that "science" is better than "common sense", and their writing betrays a somewhat sneering attitude toward "folk psychology" -- and therefore an implication that neuroscience is not merely different from folk psychology, but better than it.  They eagerly anticipate the day when mentalistic concepts are abandoned.  Again, they deny that they are eliminative reductionists, and insist that intertheoretic reduction preserves the legitimacy of psychology.  But they don't seem to mean it.  Their actions betray their true preference for eliminativism: whereas folk psychology wants to talk about mental states, real science prefers to talk about physical states.  Therefore, they argue that psychology must be grounded in what they call


"the real-world findings of neuroscience".

The clear implication is that mental states aren't real, and that psychology, as the science of mental life, doesn't have anything to do with the real world.

Note that it never occurs to them that neuroscience should be grounded in the real-world findings of psychology, which would be the case if they were really interested in the symbiotic relations between two theories, or two levels of analysis.  The clear implication of their work is that the only "real world" is the world of neuroscience.  The world of neuroscience can be objectively described, while the world of mind and consciousness is the world of ether, phlogiston, and fairies. 

You get a sense of what eliminative reductionism is all about when, as quoted in a New Yorker article on their views, Pat says to Paul, after a particularly hard day at the office,

"Paul, don't speak to me, my serotonin levels have hit bottom, my brain is awash in glucosteroids, my blood vessels are full of adrenaline, and if it weren't for my endogenous opiates I'd have driven the car into a tree on the way home. My dopamine levels need lifting. Pour me a Chardonnay, and I'll be down in a minute" (as quoted by MacFarquhar, 2007, p. 69),

It's funny, until you stop to reflect on the fact that these people are serious, and that their students have taken to talking like this too. 

But really, when you step back, you realize that this is just an exercise in translation, not much different in principle from rendering English into French -- except it's not as effective. You'd have no idea what Pat was talking about if you didn't already know something about the correlation between serotonin and depression, between adrenaline and arousal, between endogenous opiates and pain relief, and between dopamine and reward. But is it really her serotonin levels that are low, or is it her norepinephrine levels -- and if it's serotonin, how does she know? Does she have a serotonin meter in her head?  

Only by translating her feelings of depression into a language of presumed biochemical causation -- a language that is understood only by those, like Paul, who already have the secret decoder ring. 

And even then, the translation isn't very reliable. We know about adrenaline and arousal, but is Pat preparing for fight-or-flight (Cannon, 1932), or tend-and-befriend (S. E. Taylor, 2006)?  Is she getting pain relief or positive pleasure from those endogenous opiates? And after going through the first five screens of a Google search, I still couldn't figure out whether Pat's glucosteroids were generating muscle activity, reducing bone inflammation, or increasing the pressure in her eyeballs.

And note that even Pat and Paul can't carry it off. What's all this about "talk" and "Chardonnay"? What's missing here is any sense of meaning -- and, specifically, of the meaning of this social interaction. Why doesn't Pat pour her own drink? Why Chardonnay instead of Sauvignon Blanc -- or, for that matter, Two-Buck Chuck? For all her brain cares, she might just as well mainline ethanol in a bag of saline solution.  And, for that matter, why is she talking to Paul at all?  Why doesn't she just give him a bolus of oxytocin?  But no: What she really wants is for her husband to care enough about her to fix her a drink -- not an East Coast martini but a varietal wine that almost defines California living -- and give her some space -- another stereotypically Californian request -- to wind down. That's what the social interaction is all about; and this is entirely missing from the eliminative materialist reduction.

The problem is that you can't reduce the mental and the social to the neural without leaving something crucial out -- namely, the mental (and the social). And when you leave out the mental and the social, you've just kissed psychology (and the rest of the social sciences) good-bye. That is because psychology isn't just positioned between the biological sciences and the social sciences. Psychology is both a biological science and a social science. That is part of its beauty and it is part of its tension.

In her recent memoir, Touching a Nerve (2014), Patricia Churchland writes that she turned from "pure" philosophy to neuroscience when she realized that "if mental processes are actually processes of the brain, then you cannot understand the mind without understanding how the brain works".  Reviewing her book in the New York Review of Books, Colin McGinn asked why she stopped there.  If mental processes are actually processes of the brain, and brain processes are electrochemical processes, then she should have skipped over the neuroscience and gone straight to physics.  And so should everyone else, including historians and literary critics.  But if historians and literary critics don't have to be neuroscientists, why is this an obligation for psychologists?


Anomalous Monism

Interestingly, it's possible to be a materialist monist but not a reductionist.  This is the stance that the late Donald Davidson (another UCB philosopher) called anomalous monism (e.g., in "Mental Events", 2001).  Davidson agreed that the world consists only of material entities, thus rejecting Descartes' substance dualism; therefore, all events in the world are physical events.  At the same time, Davidson denied that mental events, such as believing and desiring, could be explained in purely physical terms.  So his theory is ontologically materialistic (because the world consists only of physical entities) but explanatorily dualistic (because the network of causal relations is different for mental events than for physical events).


Psychology and Neuroscience

In fact, the histories of psychology and of neuroscience show exactly the opposite of what the Churchlands claim. In every case, whether the topic is visual perception or memory or anything else, theoretical developments in neuroscience have followed theoretical developments in psychology, not the other way around.

Consider, for example, the amnesic syndrome, as exemplified by patient H.M., who put us on the road toward understanding the role of the hippocampus in memory.  But what exactly is that role?  The fact is, our interpretation of the amnesic syndrome, and thus of hippocampal function, has changed as our conceptual understanding of memory has changed. 

Here, clearly, neuroscientific data hasn't done much constraining: the psychological interpretation of this neurological syndrome, and its implication for cognitive theory, changed almost wantonly, as theoretical fashions changed in psychology, while the neural evidence stayed quite constant.  

Here's another example: what might be called The Great Mental Imagery Debate -- that is, the debate over the representation of mental imagery -- or, more broadly, whether there are two distinct forms of knowledge representation in memory, propositional (verbal) and perceptual (imagistic).  

So, in the final analysis, neuroscientific evidence was neither necessary nor sufficient to resolve the theoretical dispute over the nature of knowledge representation. 

And, of course, interpretation of PET and fMRI brain images requires a correct description of the subject's task at the psychological level of analysis.  Without a correct psychological theory in hand, neuroscience can't find out anything about the neural substrates of mental life, because it doesn't know what the neural substrates are substrates of.

As someone once put it (unfortunately, not me):

But I have said (Kihlstrom, 2010):  


Philosophical Functionalism

Other cognitive scientists prefer token identity theories.  They don't seek type-for-type correspondences between mental events and brain events.  Instead, they classify mental events in terms of their functional roles, and ignore the physical systems in which these functions are implemented.  Philosophical functionalism is closely related to psychological and philosophical behaviorism, in that all that matters are the functional relations between inputs and outputs -- what goes on in between doesn't much matter.

This philosophical functionalism lies at the basis of many programs of artificial intelligence (AI), which attempt to devise computer programs that will carry out the same operations as minds.  The term was originally coined by John McCarthy (then at MIT, later at Stanford), who also invented LISP, a list-processing language that long served as the backbone of AI research.

John Searle has famously distinguished between two views of AI:


Kenneth Mark Colby (1920-2001)

Colby, one of the pioneers of artificial intelligence, was a psychiatrist who graduated from the Yale School of Medicine in 1943, and a computer scientist at Stanford and, later, UCLA.  While at Stanford, Colby developed Parry, a computer simulation of paranoia that is sometimes counted as the only computer program to pass the Turing test.  After leaving UCLA, Colby founded Malibu Artificial Intelligence Works, and produced a conversational therapy program called Overcoming Depression.

Parry was adapted from Eliza, a program developed by Joseph Weizenbaum at MIT that was the first conversational computer program; it simulated the interaction between a psychotherapy patient and a "Rogerian" (as opposed to a Freudian) psychotherapist.  (Eliza was named for Eliza Doolittle, the central character in George Bernard Shaw's play Pygmalion, which was in turn the basis for the Broadway musical My Fair Lady.)  According to Colby's New York Times obituary, Parry and Eliza once had a chat, part of which went like this:

Parry:  In my opinion, you have a one-track mind.

Eliza:  What makes you think I have a one-track mind?

Parry:  I used to go to the track every week.

Eliza:  What does that suggest to you?

Parry:  I avoid racetracks now because the races are fixed.

Eliza:  Is that the real reason?

Parry:  I'd rather not discuss it.

Eliza:  What else would you like to discuss?

Parry:  That's something I don't want to go into.
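
It is striking how little machinery such a conversation requires.  Below is a minimal, modern sketch of an Eliza-style pattern-matcher in Python -- not Weizenbaum's actual code, and with rules and responses invented for the example -- showing that the whole "therapist" can be a handful of string substitutions with pronouns reflected back at the speaker.

    import random
    import re

    # A few invented Rogerian rules: match a pattern, reflect it back as a question.
    # Weizenbaum's actual script had many more rules and a keyword-ranking scheme.
    RULES = [
        (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"i am (.*)",   ["What makes you think you are {0}?"]),
        (r"you (.*)",    ["What makes you think I {0}?"]),
        (r"(.*)",        ["Please go on.", "What does that suggest to you?"]),
    ]

    # Swap first- and second-person pronouns when echoing the user's words back.
    PRONOUNS = {"i": "you", "me": "you", "my": "your", "you": "I", "your": "my"}

    def reflect(fragment):
        return " ".join(PRONOUNS.get(w, w) for w in fragment.lower().split())

    def respond(utterance):
        for pattern, responses in RULES:
            match = re.match(pattern, utterance.lower().strip(".!?"))
            if match:
                return random.choice(responses).format(*map(reflect, match.groups()))

    print(respond("I feel depressed about my serotonin levels."))
    # e.g.: "Why do you feel depressed about your serotonin levels?"

Nothing in these rules knows what a racetrack or a one-track mind is; the program trades purely in word patterns, which is why the Parry-Eliza exchange reads the way it does.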

For an engaging account of the Turing Test, and the Loebner Prize awarded annually to the "Most Human Computer", see The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive by Brian Christian (2011).  It turns out that there's also a prize for the "Most Human Human", and Christian's book charts his quest to win it.

Although I agree with Searle, I believe it is important to distinguish yet another form of artificial intelligence.  What I call Pure AI is not concerned with psychological theories or mental states at all, but only with the task of getting machines to carry out "intelligent" behaviors.  Most robotics research takes this form.  So does most work on chess-playing by machine.  Deep Blue, the IBM-produced program that beat Garry Kasparov, the reigning world champion, in 1997, is a really good chess player, but nobody claims that it plays chess the same way that humans do (in fact, we know it doesn't), and nobody claims that it "knows" it is playing chess.  It is just programmed with the rules of the game and the capacity to perform incredibly fast information-processing operations.  For pure AI, programs are programs.

Kasparov, commenting on his loss, and on chess as a goal for artificial intelligence research, writes:

The AI crowd... was pleased with the result and the attention, but dismayed by the fact that  Deep Blue was hardly what their predecessors had imagined decades earlier when they dreamed of creating a machine to defeat the world chess champion.  Instead of a computer that thought and played chess like a human, with human creativity and intuition, they got one that played like a machine, systematically evaluating 200 million possible moves on the chess board per second and winning with brute number-crunching force.  As Igor Aleksander, a British AI and neural networks pioneer, explained in his 2000 book, How to Build a Mind:

By the mid-1990s the number of people with some experience of using computers was many orders of magnitude greater than in the 1960s.  In the Kasparov defeat they recognized that here was a great triumph for programmers, but not one that may compete with the human intelligence that helps us to lead our lives.

It was an impressive achievement, of course, and a human achievement by the members of the IBM team, but Deep Blue was only intelligent the way your programmable alarm clock is intelligent.  Not that losing to a $10 million alarm clock made me feel any better ("The Chess Master and the Computer" by Garry Kasparov, reviewing Chess Metaphors: Artificial Intelligence and the Human Mind by Diego Rasskin-Gutman, New York Review of Books, 02/11/2010).
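
To make "brute number-crunching force" concrete, here is a minimal sketch of exhaustive game-tree search in Python -- the skeleton of what Deep Blue did, though the real system added alpha-beta pruning, handcrafted evaluation functions, and special-purpose hardware.  Nim (remove 1-3 stones per turn; whoever takes the last stone wins) stands in for chess because it is small enough to solve completely; the game choice and function names are mine, not IBM's.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def minimax(pile):
        """Value of a toy Nim position for the player to move:
        +1 = forced win, -1 = forced loss.  Pure brute-force lookahead."""
        if pile == 0:
            return -1                      # no stones left: the mover has already lost
        return max(-minimax(pile - take)   # zero-sum: the opponent's gain is our loss
                   for take in (1, 2, 3) if take <= pile)

    def best_move(pile):
        """Pick the move that leaves the opponent in the worst position."""
        return max((t for t in (1, 2, 3) if t <= pile),
                   key=lambda t: -minimax(pile - t))

    print(minimax(4))    # -1: four stones is a forced loss for the player to move
    print(best_move(5))  # 1: take one stone, leaving the opponent a losing 4-stone pile

Deep Blue's 200 million positions per second bought depth, not understanding: the search is the same mechanical recursion shown here, scaled up enormously.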

In 2009, computer scientists at IBM announced Watson (named after the firm's founder, Thomas J. Watson, Sr., not John B. Watson, the behaviorist), based on the company's Blue Gene supercomputer, intended to answer questions on "Jeopardy!", the long-running television game show.  If successful, Watson would represent a considerable advance over Deep Blue, because, as noted in a New York Times article, "chess is a game of limits, with pieces that have clearly defined powers.  'Jeopardy!' requires a program with the suppleness to weigh an almost infinite range of relationships and to make subtle comparisons and interpretations.  The software must interact with humans on their own terms, and fast....  The system must be able to deal with analogies, puns, double entendres, and relationships like size and location, all at lightning speed" ("IBM Computer Program to Take On 'Jeopardy!'" by John Markoff, 04/27/2009).

The IBM Jeopardy Challenge was taped in January and aired on February 14-16, 2011 (the first game of the two-game match was spread over two days, to allow for presentations on how Watson worked).  It pitted Watson against Ken Jennings, who made a record 74 consecutive appearances on the show in 2004-5, and Brad Rutter, who won a record $3.3 million in regular and tournament play.  In a practice round reported in January, 2011, Watson beat them both, winning $4,400 compared to Jennings' $3,400 and Rutter's $1,200.  In the actual contest, Watson won again, $77,147 against Jennings' $24,000 and Rutter's $21,600.  The machine made only two mistakes: responding "Toronto" to a question about U.S. cities (you could see the IBM executives' jaws drop open), and repeating one of Rutter's incorrect responses.

In addition to his linguistic skills, fund of knowledge, and rapid access to it, Watson may have had a slight advantage in reaction time.  Milliseconds count in Jeopardy, and if Watson did have such an edge, it would be interesting to see a re-run of the contest with Watson constrained to something like humans' average response latencies.

In the contest, Watson displayed a remarkable ability to parse language and retrieve information.

Watson won the $1 million first prize, which IBM donated to charity.  Jennings and Rutter gave half of their winnings, $200,000 and $100,000 respectively, to charity as well.  Jennings, interviewed after the second game, said "I had a great time and I would do it again in a heartbeat".

In a later match, however, Watson was beaten by Rep. Rush Holt (D.-N.J.), who had been a physicist before being elected to Congress ("Congressman Beats Watson" by Steve Lohr, New York Times, 03/14/2011).  One category that Watson failed was "Presidential rhymes" (as in "Herbert's military operations": "What were Hoover's maneuvers?").  So Watson's not invincible (the machine had won only 71% of its preliminary matches before the televised contest).  The point is not that Watson is always right, or that he always wins.  The point is that Watson is a pretty good question-answering machine.

It is not clear what kind of AI Watson represents.  Probably not Strong AI: nobody claims that Watson will "think", and the newspaper article uses the word "understand" in quotes.  Many of the principles undergirding Watson's software are based on studies of human cognition, which suggests that it is not Pure AI either, and might be a version of Weak AI.  IBM engineers, interviewed after the Challenge was completed, were reluctant to claim that Watson mimicked human thought processes (see "On 'Jeopardy!', Computer Win Is All but Trivial" by John Markoff, and "First Came the Machine That Defeated a Chess Champion" by Dylan Loeb McClain, New York Times, 02/17/2011).  Watson's performance on pre-Challenge test runs is described by Stephen Baker in "Can a Computer Win on 'Jeopardy'?", Wall Street Journal, 02/05-06/2011.

David Gelernter, professor of computer science at Yale (and victim of the Unabomber), wrote ("Coming Next: A Supercomputer Saves Your Life", Wall Street Journal, 02/05-06/2011):

Watson is nowhere near passing the Turing test -- the famous benchmark proposed by Alan Turing in 1950, in which a program demonstrates its intelligence by duping a human being, in the course of conversation over the Web or some other network, into believing that it's human and not software.

But when a program does pass the Turing test, it's likely to resemble a gigantic Watson.  It will know lots about the superficial structure of language and conversation.  It won't bother with such hard topics as meaning or consciousness.  Watson is one giant leap for technology, one small step for the science of mind.  But this giant leap is a major milestone in AI history.

Which strengthens the case for Pure AI.  And even if Watson had been only remotely successful, instead of spectacularly so, the article notes, it would represent a "great leap forward" in "building machines that can understand language and interact with humans".  For example, IBM is using an improved version of Watson's technology to develop a language-based interactive system for medical diagnosis.

Will Watson play again?  Probably not.  Deep Blue never played chess again after defeating Kasparov (part of him is on display in the Smithsonian).  He's made his point, it's unlikely that any human will beat him when Jennings and Rutter couldn't, and Watson may have a slight mechanical advantage in reaction time at the buzzer.  

So let's give David Ferrucci, who led the IBM development team, the last word (quoted in Markoff's 2011 article):

People ask me if this is HAL [the computer that ran amok in 2001: A Space Odyssey].  HAL's not the focus, the focus is on the computer on Star Trek, where you have this intelligence information seek[ing] dialog, where you can ask follow-up questions and the computer can look at all the evidence and tries to ask follow-up questions.  That's very cool.

Well, maybe not the last word.  Technological advances in artificial intelligence have led researchers to seriously consider the possibility of mind uploading -- that is, of preserving an individual's entire consciousness in what Michael Graziano, a neuroscientist at Princeton, calls the digital afterlife (see his 2019 book, Rethinking Consciousness: A Scientific Theory of Subjective Experience, and "Will Your Uploaded Mind Still Be You?", an essay derived from it published in the Wall Street Journal, 09/14/2019).  Graziano argues that all we would need to upload an entire person's mind is to recreate, in a sufficiently powerful computer combining a connectionist architecture with knowledge from the Human Connectome Project, a complete map of the individual's 100 trillion-or-so synaptic connections among the 86 billion-or-so neurons in the human brain.  Graziano points out that the project is daunting: it took neuroscientists an entire decade to map all the connections among the 300-odd neurons in a crummy roundworm.  But in principle that's how it would go.  And when we flip the switch (quoting now from his WSJ essay):

A conscious mind wakes up.  It has my personality, memories, wisdom and emotions.  It thinks it's me.  It can continue to learn and remember, because adaptability is the essence of an artificial neural network.  Its synaptic connections continue to change with experience.... 

Philosophically, what is the relationship between sim-me and bio-me?  One way to understand it is through geometry.  Imagine that my life is like the rising stalk of the letter Y.  I was born at the base, and as I grew up, my mind was shaped and changed along a trajectory.  One day, I have my mind uploaded.  At that moment, the Y branches.  There are now two trajectories, each one convinced that it's the real me.  Let's say the left branch is the sim-me and the right branch is the bio-me.  The two branches proceed along different life paths, with different accumulating experiences.  The right-hand branch will inevitably die.  The left-hand branch can live indefinitely, and in it, the stalk of the Y will also live on as memories and experiences.

 

The Two Functionalisms: A Confusion

John B. Watson's behaviorism has close ties to philosophical functionalism as well as to philosophical behaviorism.  So why did I write earlier that Watson was as much opposed to functionalism as he was to structuralism?  Because Watson was opposed to a different functionalism.  Philosophical functionalism hadn't even been thought of when Watson was writing in the 1910s (neither had philosophical behaviorism, for that matter).

The "school" in psychology known as functionalism was skeptical of the claims made by members of another "school", structuralism, claim that we can understand mind in the abstract.  Based on Charles Darwin's (1809-1882)  theory of evolution, which argued that biological forms are adapted to their use, the functionalists focused instead on what the mind does, and how it works.  While the structuralists emphasized the analysis of complex mental contents into their constituent elements, the functionalists were more interested in mental operations and their behavioral consequences.  Prominent functionalists were:

  • William James (1842-1910), the most important American philosopher of the 19th century, who taught the first course on psychology at Harvard.  James's seminal textbook, Principles of Psychology (1890), is still widely and profitably read by new generations of psychologists.  True to his philosophical position of pragmatism, James placed great emphasis on mind in action, as exemplified by habits and adaptive behavior.
  • John Dewey (1859-1952), now best remembered for his theories of "progressive" education, who founded the famous Laboratory School at the University of Chicago.  
  • James Rowland Angell (1869-1949), who was both Dewey's student (at Michigan) and James's student (at Harvard), and who rejoined Dewey after the latter moved to the University of Chicago.  Angell was later president of Yale University, where he established the Institute of Human Relations, a pioneering center for the interdisciplinary study of human behavior.  In contrast to Titchener, who wanted to keep psychology a "pure" science, Angell argued that basic and applied research should go forward together.

The functionalist point of view can be summarized in four points:

  • The adaptive value of mind.  Functionalists assume that the mind evolved to serve a biological purpose -- specifically, to aid the organism's adaptation to its environment.  Thus, functionalists are interested in what James called (in the Principles) "the relationship of mind to other things" -- how the mind represents the objects and events in the environment.   Functionalism also laid the basis for the application of psychological knowledge to the promotion of human welfare.
  • Mind in context.  From a functionalist point of view, the mind essentially mediates between the environment and the organism.  Therefore, the functionalists were concerned with the relations between internal mental states and processes and the states and processes in the internal physical environment (i.e., the organism) on the one hand, and the external social environment (i.e., the real world) on the other. 
  • Mind in body.  Because the mind is what the brain does, functionalists assumed that understanding the nervous system, and related bodily systems, would be helpful in understanding the workings of the mind.
  • Mind in action.  Because mind serves an adaptive purpose, functionalists are concerned with the role that mental states and processes play in the organism's actual behavior.  This is what the doctrine of mentalism is all about.  Within cognitive psychology, analyses of action are pretty impoverished.  But social psychology can be fairly construed as centrally focused on mind in action, because it is concerned with the relationship between the individual's beliefs, attitudes, etc., and his or her social behavior. 

Psychological functionalism is often called "Chicago functionalism", because its intellectual base was at the University of Chicago, where both Dewey and Angell were on the faculty (functionalism also prevailed at Columbia University).  It is to be distinguished from the functionalist theories of mind associated with some modern approaches to artificial intelligence (e.g., the work of Daniel Dennett, a philosopher at Tufts University), which describe mental processes in terms of the logical and computational functions that relate sensory inputs to behavioral outputs.


The Contemporary Scene

At the turn of the 21st century, theories of mind and body are fairly represented by the various books reviewed by John Searle in The Mystery of Consciousness (1997, reprinting several of Searle's review articles in the New York Review of Books), by the replies of the books' authors, and by Searle's rejoinders.  At first glance, most theorists appear to adopt some version of materialism, holding to the view that consciousness is something that the brain does.

 


Searle: Biological Naturalism

This is true even for Searle who, while a vigorous critic of the books under review, nonetheless asserts that consciousness is a causal property of the brain (see The Rediscovery of the Mind, 1992).

 

 

For Searle, consciousness emerges at certain levels of  anatomical organization.  Certainly, the human brain, with its billions and billions (apologies to Carl Sagan) of neurons, and umpteen bazillions of interconnections among them, has the complexity to generate consciousness.  This is probably true of the brains of nonhuman primates, which also have lots of neurons and neural connections.  It may also be true for dogs and cats (certainly Searle holds that his dog Ludwig is conscious).  It may not be true of snails, because they may not have enough neurons and interconnections to support (much) consciousness.  It's not true of paramecia, because they don't have any neurons at all.   

And it's certainly not true of thermostats.  Searle allows that some computers might be conscious, if they had enough information-processing units and interconnections; but by and large he is deeply skeptical of any claim that consciousness can be a causal property of anything other than a brain.  Rather than calling himself a materialist, Searle prefers to label his view biological naturalism.  Brains produce consciousness, even if silicon chips (and beer cans tied together by string) don't.

But it's not just possessing neurons that's important for consciousness.  Anatomy alone isn't decisive.  General anesthesia, concussion, and coma render people unconscious, but they don't alter the structure of the nervous system.  However, they do alter physiology: how the various structures in the nervous system operate and relate to each other.

Although his biological naturalism is closely related to materialism, Searle strenuously opposes eliminative materialism and other forms of reductionism.  For him, consciousness entails, first and foremost, first-person subjectivity, and it cannot be eliminated from scientific discourse because objective, third-person descriptions of brain processes necessarily leave that subjectivity out.  Every attempt to reduce consciousness to something else must fail, because every reduction leaves out a defining property of the thing being reduced -- in this case, the first-person subjectivity of consciousness.  (In making this argument, Searle is basically applying Leibniz's Law.)

Searle believes that first-person subjectivity is an irreducible quality of consciousness, and that this irreducible quality is produced as a natural consequence of human (and probably some nonhuman) brain processes, but he has no idea how the brain actually does it.  But, he says, that's not a philosophical problem.  The philosopher's job is to determine what consciousness is; once that's done, the problem can be "kicked upstairs" to the neurobiologists, whose job it is to figure out how the brain does it (Searle makes the same argument about free will). 


Others on Searle

Searle is very critical of some of his colleagues, but he gets as good as he gives.  In some ways it is unfortunate that Searle did not include reviews of his own work in The Mystery of Consciousness, and further exchanges with the reviewers.  Two reviews stand out in particular:

Somewhat unfairly, I think, Searle has sometimes been linked to the Mysterians, because, as Dmitri Tymoczko puts it, he holds that "consciousness is rooted in yet-to-be-understood capacities of matter" (Dmitri Tymoczko, in a joint review of Searle's Mind, Language, and Society: Philosophy in the Real World and Lakoff and Johnson's Philosophy in the Flesh: The Embodied Mind and Its Challenge to Western Thought, published in Civilization, 02/03/99, pp. 89-90).  But Searle does not agree with the Mysterians that the physical basis of consciousness cannot be understood in principle.  Tymoczko himself objects to Searle's appeals to common sense (rather than some sort of "deep and heavy" analysis, I suppose) in his effort to debunk his targets' claims.  Contrasting Searle's approach to that of George Lakoff, a UCB professor of Linguistics who is also involved in Cognitive Science, Tymoczko writes:

"In the end, Lakoff and Johnson fail in much the same way that Searle does.  For just as the debunker needs a little bit of the visionary in him -- enough to leave us feeling that philosophy is worth debunking -- so too does the visionary need a keen instinct for what the debunkers will have to say in response....  The battle between visionaries and debunkers is most fascinating when it takes place within the mind of one individual" (p. 90).


Link to a TED talk by John Searle.

 

Dennett: The Zombie Bugaboo

The only way out of this problem, it would seem, is to deny the reality of consciousness -- to say that phenomenal awareness is something that does not really exist, that it is literally an illusion.  Or, at least, to say that to the extent that consciousness does exist, it is irrelevant to behavior: That is, consciousness plays no role in the operation of a virtual machine -- a software program that processes stimulus inputs and generates appropriate response outputs, just like a real machine would (which is what computational functionalists think the mind is).

This variant on functionalism appears to be Dennett's position in Consciousness Explained (1991).  According to his "Multiple Drafts" theory of consciousness, qualia are things that just happen to happen while stimulus information from the environment is being processed and responses are being generated.  And intentionality doesn't exist either -- it's just a rhetorical "stance" we take when we talk about mind and behavior (Dennett also wrote a book entitled The Intentional Stance).

If reduction fails, as Searle says it does, then the only way out for a materialist is to deny the reality of consciousness.  For Dennett, consciousness is literally an illusion, only one of many "multiple drafts" of phenomenology, and plays no role in the operation of a virtual machine.  When we ask people what they are thinking, one of these "drafts" pops into mind.  But that draft has no privileged status.  It's just one of many things running along in parallel, connecting inputs and outputs.  For Dennett, only behavior matters, and only behavior actually exists.  Conscious mental states don't have any special significance, and they don't play any causal role in what is going on.  It's behavior that matters, not phenomenology.

Those who recognize in Dennett the earlier arguments of John B. Watson are onto something.  For Dennett, the evidence for consciousness is to be found in behavior.  First-person subjectivity doesn't count because it's an illusion -- just one of many "multiple drafts" of phenomenology, none of which have any causal impact on behavior or the world.

Dennett's work is especially provocative for his reconstrual of what might be called the zombie bugaboo.  Philosophical discussions of consciousness often invoke the notion of zombies, defined as molecule-for-molecule replicas of humans, who can do all the things that humans do except that they do them unconsciously.  Thus, consciousness ostensibly differentiates humans from zombies.  But for Dennett, we're zombies too -- not because we're unconscious, but because zombies are conscious too.  From his perspective of radical functionalism, because their behavioral functions are indistinguishable from those of humans, zombies must have the same internal states as humans do, including internal states of consciousness.  If we're conscious, so are they.  We're all zombies now.

Dennett recently reaffirmed his views in From Bacteria to Bach and Back: The Evolution of Minds (2017; given a surprisingly even-handed review by Thomas Nagel in "Is Consciousness an Illusion?", New York Review of Books, 03/09/2017).  Dennett, of course, says "yes"; Nagel, of course, would insist "no", which is why the tone of the review is so surprising.  (Maybe Nagel got tired of fighting.)  Adopting terms introduced by Wilfrid Sellars, an American philosopher, Dennett argues that there is a "manifest image" of the world as it appears to us, and there is a "scientific image" of the world consisting solely of particles in fields of force.  The scientific image is real; the manifest image is literally an illusion.  And it's the manifest image that we're consciously aware of.  The manifest image is a "user-illusion designed by evolution to fit the needs of its users".  And so is consciousness.  Dennett claims that our competence, or our ability to operate adaptively in the world, does not depend on our ability to comprehend what we are doing -- just as a computer (here Dennett invokes the Turing machine) can competently perform calculations without having any idea what it is doing.  But wait: if consciousness is a user-illusion designed by evolution, what are the user's "needs" that it serves?  For Dennett, consciousness allows us to monitor ourselves, and to deal with other people.  The widely shared illusion of consciousness allows us to predict each other's behavior, as well as our own.  But consciousness doesn't play any causal role in this behavior.  It's an afterthought.  It's an illusion.

For Dennett, first-person experience has no particular authority, and our attributions concerning our own behavior have no privileged status over other people's attributions.  They're just opinions:

Curiously, then, our first person point of view of our own minds is not so different from our second-person point of view of others' minds: we don't see, or hear, or feel, the complicated neural machinery churning away in our brains but have to settle for an interpreted, digested version, a user-illusion that is so familiar to us that we take it not just for reality but also for the most indubitable and intimately known reality of all.

About which Nagel concludes:

About the true nature of the human mind, Dennett is on one side of an old argument that goes back to Descartes.  He pays tribute to Descartes, citing the power of what he calls "Cartesian gravity", the pull of the first-person point of view; and he calls the allegedly illusory realm of consciousness the "Cartesian Theater".  The argument will no doubt go on for a long time, and the only way to advance understanding is for the participants to develop and defend their rival conceptions as fully as possible -- as Dennett has done.  Even those who find the overall view unbelievable will find much to interest them in this book.

A profile of Dennett by Joshua Rothman focuses on Dennett's personal life, as well as his ongoing debate with David Chalmers ("A Science of the Soul", New Yorker, 03/27/2017; UCB's Terrence Deacon also makes a cameo appearance).  As an undergraduate at Harvard, Dennett studied with the philosopher W.V.O. Quine (recall, from the lectures on "Introspection", Dennett's essay on "Quining Qualia").  His dissertation advisor at Oxford was Gilbert Ryle, who famously derided Cartesian dualism as the dogma of "the ghost in the machine".  A wonderful detail: Dennett, a talented wood-carver (and many other things besides), fashioned a Russian "matryoshka" nested doll of Descartes; open up Descartes, and you find a ghost inside; inside the ghost, you find a robot.  That pretty much captures Dennett's view.

Link to a TED talk by Dan Dennett.

 

Eccles: Dualism Redux

In the contemporary scene, materialism dominates discussions of consciousness, as arguments rage about whether materialism offers the correct approach to the mind-body problem, and if so which version of materialism is the correct one.  But materialism doesn't exhaust the possibilities, and dualism has been revived as well. 

For example, Karl Popper, the distinguished philosopher of science, and Sir John Eccles, the Nobel laureate in medicine (who, I think, is largely responsible for this work), have proposed a tri-aspect interactionism that distinguishes among three "worlds":

According to P&E, World 3 is governed by normative principles such as the rules of logic, literary and musical forms (such as the sonnet and the sonata).  These principles are objectively valid, regardless of whether anyone follows them.  But, they argue, no physical system in World 1 can appreciate the rules of World 3.  Therefore, World 2 is needed as an intermediary -- something that understands the content of World 3 and can apply that understanding in World 1.  Because World 2 must be substantially different from World 1, P&E essentially argue for dualism -- the world of the mind is different from the world of the body.

In arguing against materialism, and for a substantial (no pun intended) difference between mind and body, Popper and Eccles have argued for a dualism that matches our introspections: mental states are not the same as physical states, yet mental states can affect the physical world.  Yet it's not clear that Popper & Eccles have cleared Descartes' impasse.  In fact, they may have made the situation worse, by giving us three worlds to worry about instead of just two!

And, in fact, there may well be a third world to worry about.  Many prominent mathematicians consider the laws of mathematics to be genuine discoveries, not mere products, of the mind -- in other words, even though lacking physical reality, they exist independently of mental activity, somewhat along the lines of Plato's forms.  Among these theorists is UCB's own Prof. Edward Frenkel, legendary instructor of a lower-division course in multivariable calculus, as represented in his book, Love and Math: The Heart of Hidden Reality (2013).  As summarized by Jim Holt ("A Mathematical Romance", New York Review of Books, 12/05/2013):

"How can it be," Einstein asked in wonderment, "that mathematics, being after all a product of human thought independent of experience, is so admirably appropriate to the objects of reality?"  Frenkel's take on this is very different from Einstein's.  For Frenkel, mathematical structures are among the "objects of reality"; they are very bit as real as anything in the physical or mental world.  Moreover, they are not the product of human thought; rather, they exist timelessly, in a Platonic realm of their own, waiting to be discovered by mathematicians.  The conviction that mathematics has a reality that transcends the human mind is not uncommon among its practitioners, especially great ones like Frenkel and [Robert] Langlands, Sir Roger Penrose and Kurt Godel.  It derives from the way that strange patterns and correspondences unexpectedly emerge, hinting at something hidden and mysterious.  Who put those patterns there?  They certainly don't seem to be of our making.
More recently, another philosopher, T.M. Scanlon, has also suggested that certain objects exist "outside of space and time" -- not part of the material world covered by the natural sciences (Being Realistic About Reasons, 2014).

Scanlon argues that normative reasons are not simply derived from something else, like a person's attitudes or desires that might justify his behavior.  Rather, a person's behavior is justified by appeal to something that exists independent of his or anyone else's attitudes.  This "something" is a normative reason.  (For an exposition of Scanlon's theory, see "Listening to Reason" by Thomas Nagel, New York Review of Books, 10/09/2014.)

Normative reasons play a big role in analyses of morality, which would take us far beyond the scope of this course.  My only point in mentioning Scanlon's work is that Popper and Eccles are not alone in thinking that there are things which have an observer-independent existence but which are not part of the spatially and temporally bounded reality explored by the natural sciences.

 

Chalmers: Closing the Explanatory Gap?

A rather different take on the problem of consciousness is offered by David Chalmers, in a provocative (and peculiar) blend of materialism and dualism which neither denies that consciousness exists nor reduces it to neural functions.  Chalmers' work has been extremely influential: alone among the philosophers writing on consciousness, he has inspired a whole book of critical responses, and some people think he's actually solved the mind-body problem.  Certainly Chalmers himself thinks he has.



Chalmers argues that the easy problems are easy because they concern abilities and functions, such that explanation need only specify the proper mechanism.  The hard problem is hard because it remains after all the mechanisms are accounted for, and goes beyond function to experience -- precisely what Levine (1983) called the explanatory gap.



"How?" and "Why?"

In some respects, the distinction between easy and hard problems is essentially the distinction between "how" and "why" questions.  Easy questions about structure and function are questions about how the system is built and how the system functions.  Hard questions about experience boil down to questions about why the system generates consciousness, and why these functions don't simply transpire in the dark.

For the record, before Chalmers wrote his book, Colin McGinn described consciousness as "the hard nut of the mind-body problem" (1989), and Galen Strawson called it "the hard part of the mind-body problem" (1989).  But Chalmers gets credit for formulating the distinction between the easy and hard problems of consciousness, and for articulating just what made "the hard problem" so hard.

If this is true, the next question is whether "Why?" is an appropriate question for science.  It seems analogous to questions like "Why is there air?" or "Why are there stars?" -- both of which seem way too metaphysical for science.

In fact, science answers "why" questions by changing them to "how" questions.  The oxygen in the air arises from photosynthesis, and stars condense out of hydrogen formed in the Big Bang, shining by nuclear fusion.

So, perhaps Chalmers has made an easy question into a hard question by changing the way the question is framed, from how to why.  And maybe he answers his own hard question by turning it back into an easy question by reframing it from why to how.

Chalmers asserts that some extra ingredient is needed to close the explanatory gap.  Because everything in physical theory is compatible with the absence of consciousness, we need a new fundamental assumption to get conscious experience out of a physical system.  Chalmers closes the explanatory gap by proposing that experience itself is a fundamental feature of the physical world, just like mass, space, and time.  Specifically, Chalmers proposes that experience is an aspect of information.

In information theory, as developed in the 1940s and 1950s by Claude Shannon at Bell Laboratories, information is defined as the amount of change that occurs between two states of a system. 
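
Shannon's measure can be made concrete in a few lines.  The sketch below computes the standard Shannon entropy of a message's symbol distribution -- the average number of bits per symbol -- which is the textbook quantity, not anything specific to Chalmers's argument.

    from collections import Counter
    from math import log2

    def entropy_bits(message):
        """Shannon entropy of a message's symbol distribution:
        H = -sum(p * log2(p)), the average number of bits per symbol
        needed to encode the message's characters."""
        counts = Counter(message)
        total = len(message)
        return -sum((n / total) * log2(n / total) for n in counts.values())

    print(entropy_bits("aaaa"))   # 0.0 bits: no uncertainty, no information
    print(entropy_bits("abab"))   # 1.0 bit per symbol: two equally likely states
    print(entropy_bits("abcd"))   # 2.0 bits per symbol: four equally likely states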


Claude E. Shannon (1916-2001)

Shannon's work on information theory, which identified the bit (binary digit) as the fundamental element in all communication, laid the foundation for modern computer science and communications.  In his master's thesis, which has been called the most important one produced in the 20th century, he showed that Boolean logic (which solves problems by manipulating just the symbols 0 and 1) could be implemented by a set of electrical switching circuits.  More important, Shannon realized, as he put it, that "a computer is a lot more than an adding machine".  Because he realized that 0s and 1s could represent anything, not just numbers, Shannon laid the foundation for artificial intelligence.  
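
A minimal illustration of that thesis insight: the half-adder below adds two binary digits using nothing but AND and XOR gates, each of which Shannon showed could be realized as a switching circuit.  The Python rendering is mine; only the gate-level logic is Shannon's.

    def AND(a, b):            # circuit conducts only if both switches are closed
        return a & b

    def XOR(a, b):            # circuit conducts if exactly one switch is closed
        return a ^ b

    def half_adder(a, b):
        """Add two bits with two 'switching circuits':
        the sum bit is XOR(a, b), the carry bit is AND(a, b)."""
        return XOR(a, b), AND(a, b)

    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} = carry {c}, sum {s}")
    # 1 + 1 = carry 1, sum 0 -- i.e., binary 10, or decimal 2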

After receiving his PhD (from MIT, in 1940), he worked on cryptography for the wartime government, and also at AT&T's Bell Laboratories on the problem of noise-free transmission of messages.  In this context, he proposed that

"the information content of a message... has nothing to do with its content but simply with the number of 1's and 0's [sic] that it takes to transmit it" 

(as quoted in "Claude Shannon, Mathematician, Dies at 84" by George Johnson, New York Times, 02/27/01).  

In his famous two-part paper, "A Mathematical Theory of Communication" (1948), Shannon showed how adding extra bits to a message could keep it from being garbled by noise -- the genesis of contemporary error-correction codes in computer science.  Shannon, who moved to MIT in 1958 (he retired in 1978), was one of the world's great eccentric geniuses.  He used to ride a unicycle through the halls at Bell Labs, built a computer that could compute in Roman numerals, and developed a mathematical theory of juggling.  He suffered from Alzheimer's disease in his last years.
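
The principle that extra bits protect a message is visible even in the crudest error-correcting code of all, the triple-repetition code sketched below -- far weaker than the codes Shannon's paper inspired, but enough to show how redundancy lets a receiver vote away noise.

    def encode(bits):
        """Triple each bit: '10' -> '111000'.  The redundancy is the 'extra bits'."""
        return "".join(b * 3 for b in bits)

    def decode(received):
        """Majority vote within each triple corrects any single flipped bit."""
        triples = [received[i:i + 3] for i in range(0, len(received), 3)]
        return "".join("1" if t.count("1") >= 2 else "0" for t in triples)

    sent = encode("1011")        # '111000111111'
    noisy = "110000111011"       # two triples each have one bit flipped by 'noise'
    print(decode(noisy))         # '1011' -- the message survives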

Shannon's highly mathematical theory was rendered into readable English prose by Warren Weaver, and Shannon and Weaver together published a book-length treatment of the theory in 1949. 

More information on Claude Shannon and information theory can be found on the website of Scientific American magazine (search under Shannon for an article posted 10/14/02; see also "Profile: Claude E. Shannon" by John Horgan, 01/90), and on the Bell Labs website.

A very engaging history of the concept of information, focusing on Shannon's work, is provided by James Gleick in The Information: A History, a Theory, a Flood (2011).  See also Gleick's essay, "The Information Palace", on the blog of the New York Review of Books, 12/08/2010.  The book was reviewed by UCB's Prof. Geoffrey Nunberg in "Data Deluge", New York Times Book Review (03/20/2011).

There is now a full-length biography of Shannon: A Mind at Play: How Claude Shannon Invented the Information Age by Jimmy Soni and Rob Goodman (2017).  This biography joins those of the other three pioneers of the Information Age: Alan Turing (Alan Turing: The Enigma, by Andrew Hodges, 1983); John von Neumann (John von Neumann and the Origins of Modern Computing, by William Aspray, 1990); and Norbert Wiener (Dark Hero of the Information Age: In Search of Norbert Wiener, the Father of Cybernetics, by Flo Conway and Jim Siegelman, 2005).

In his double-aspect theory of information, Chalmers asserts that information has a physical aspect (which embodies a difference between two or more physical states of the world) and an experiential aspect (which gives rise to consciousness).  Thus, information physically represented in the brain (e.g., by a pattern of neural firings) naturally gives rise to conscious experience, because conscious experience is one aspect of the state of being informative.  But human brains aren't the only things that embody information:

Although this move might strike some observers (including myself) as a deus ex machina, Chalmers insists that everything that embodies information is conscious, because information always has a dual aspect. Still, he never quite addresses the "hard question" of just how information translates into conscious experience.

In the final analysis, Chalmers's theory is a restatement, in the terms of modern information theory, of Morton Prince's old mind-stuff theory.  Recall that Prince's solution to the mind-body problem was to assume that every physical object contained some amount of "mind-stuff", and that when enough of it accumulated, the physical object became conscious.  It's a position known as panpsychism -- the idea that consciousness pervades the physical Universe, and inheres not just in humans, not just in humans' close evolutionary relatives, not just in mammals, not just in animals, but also in plants and rocks.  It's a neat solution to the mind-body problem.  But it's a solution that William James tore apart in his Principles (despite the fact that Prince was a friend and colleague).  And it's a solution that should strike us as deeply unsatisfying, on two grounds:

  1. It really is a deus ex machina.  It comes out of the blue, clearly concocted to solve the mind-body problem.
  2. It doesn't actually solve the mind-body problem.  To say that any physical system that represents information is conscious doesn't tell us how that state generates consciousness -- leaving us with Chalmers's "hard problem" unsolved after all.


A Deus Ex Machina?

In ancient Greek and Roman drama, when the plot got stuck, or the play had to come to an end, the writer occasionally arranged for an actor, playing a god, to be swung onto stage by a crane-like machine, to decide the outcome of the play and explain what had gone on.  Frankly, it's hard to avoid the feeling that Chalmers has done something similar -- the dual-aspect theory of information seems to come out of nowhere, and explains everything just a little too neatly.

 


At the very least, it seems that Chalmers has revived Morton Prince's old "mind-stuff" theory, against which James railed in the Principles, in the modern guise of information theory.  If that's what counts as a solution to the mind-body problem, then OK....



Panpsychism may seem like a desperate measure, but it has its proponents.  One of these is Philip Goff, a British philosopher, in Galileo's Error: Foundations for a New Science of Consciousness (2019; reviewed by Julian Baggini in "Of Mind and Matter", Wall Street Journal, 01/04/2020).  In my teaching, I have argued that Immanuel Kant (1724-1804) wrongly characterized psychology as an impossible science because the mind, being composed (as he thought) of an immaterial substance, can't be measured.  Goff takes us back another 150 years, before Kant, and even before Descartes (1596-1650), whose Meditations (1641) gave us the mind-body problem to begin with, to Galileo (1564-1642), who made a similar argument in The Assayer (1623), his seminal essay on scientific method.  For Galileo, measurement is the essence of the scientific method (I'm paraphrasing here: "Nature is a book written in the language of mathematics").  If it can't be measured, it can't be subject to scientific investigation.  And Galileo believed that sensory qualities couldn't be measured -- so that settled that, even before psychology got its name (that was with Christian Wolff, in his Psychologia empirica of 1732 and his Psychologia rationalis of 1734)!  Anyway, because subsequent science embraced Galileo's position on measurement, Goff argues, it rendered itself unable, in principle, to account for consciousness.  Goff rejects dualism, because it can't explain how minds can affect bodies (we're talking about free will here); he also rejects materialism, because the implication that consciousness is an illusion strikes him as absurd.  But he embraces panpsychism, by asserting that consciousness is a fundamental property of matter.  Galileo's error was to assume that consciousness could not be part of the material world.

But really, panpsychism doesn't solve the hard problem of consciousness, which is what Chalmers and Goff, and other panpsychists, intend.  It just kicks the hard problem down one level of explanation.  So now, instead of asking neuroscientists to figure out how the brain generates consciousness, instead we ask physicists to figure out how thermostats and solar systems do it.  Doesn't seem like much of a solution to me.


Self-Organizing Complex Systems

A related idea comes from UCB's own Terrence Deacon, a cognitive anthropologist who has long been concerned with the problem of how life, not to mention sentient life, could have evolved from inanimate matter.  Deacon offers a solution in Incomplete Nature: How Mind Emerged from Matter (2012).  The essential ingredient for life, in his view, is the emergence of a complex system that is teleodynamic -- autonomous, self-maintaining, temporally stable, and resistant to entropy.  One such system is the biological cell -- the basic building-block of life.  And the brain, of course, is a complex system of cells.  Whereas Chalmers believes that consciousness is a property of any physical system that embodies information, Deacon suggests a somewhat more restrictive view: that the physical system must consist of biological cells, which have the properties just listed.  Deacon does not quite verge on panpsychism, as Chalmers does, ascribing consciousness to things like computers and thermostats.  But he comes close, asserting that there is sentience -- consciousness -- wherever we find self-preserving dynamic systems.  That includes brains, but also cells (like individual neurons) and molecules.

But then he edges closer to panpsychism when he asserts that it's not just organic material that can be sentient.  Any entity that is self-organizing will have consciousness, precisely because -- you have to forgive the pun here -- it organizes its self.  That includes biological matter, of course, but presumably it can also include inorganic matter, like perceptrons.  Anyway, Deacon believes that while neurons and the things made of them are conscious, you don't need neurons to be conscious.  All you need is a self-organized, autonomous, temporally stable entity that resists entropy. The key to the argument seems to be that, in order to be conscious, you have to have some sort of self, and self-organizing, teleodynamic systems have a self by definition.

Still, when you ask how teleodynamic systems attain consciousness, simply by virtue of being teleodynamic, all that can be said is: "it's not clear".

For a critique of Deacon's ideas, see "Can Anything Emerge from Nothing?" by Colin McGinn, New York Review of Books, 06/07/2012. An exchange of views between Deacon and McGinn was published in the NYRB on 10/11/2012.

 

Damasio: Self and Mind

Much the same sort of argument seems to be made by Antonio Damasio, a neurologist who has written frequently on consciousness and related topics.  Damasio focuses on the essentially subjective nature of consciousness, and notes that, in order to have subjectivity, or first-person experience, you have to have a sense of self -- a first person in the first place!  So he tries to figure out which parts of the brain are critical for the sense of self.  The general argument of Damasio's book, Self Comes to Mind (2011), goes as follows:

For a critique of Damasio's ideas, see "The Mystery of Consciousness Continues" by J.R. Searle, New York Review of Books, 06/09/2011. An exchange of views with Barclay Martin was published in the NYRB 09/29/2011.


Link to a TED talk by Antonio Damasio.

 

Tononi and Koch: Differentiated and Integrated Information

Both Chalmers and Deacon exemplify what might be considered a search for the "secret sauce" -- an attempt to identify the one thing that makes the difference between consciousness and unconsciousness.

For Giulio Tononi and Christof Koch, the "secret sauce", once again, is information, but not just information. We'll meet up with Tononi and Koch again, in the lectures on the Neural Correlates of Consciousness, but before we get there, let me trace some connections.

Earlier, Koch joined Francis Crick in proposing that the key to consciousness was the synchronized firing of neurons at 40 cycles per second (40 hertz).  Koch subsequently revised his view, stating only that 40-hertz firings were necessary for the formation of integrated percepts.

At roughly the same time, Tononi joined Gerald Edelman in proposing the dynamic core theory of consciousness, which proposed that consciousness is a product of both highly integrated and highly differentiated information (Tononi & Edelman, 1998; Edelman & Tononi, 2000).




More recently, and working both together and separately, Tononi (in "Consciousness as Integrated Information: A Provisional Manifesto", 2008; see also Balduzzi & Tononi, 2008) and Koch (in The Quest for Consciousness, 2004; Consciousness: Confessions of a Romantic Reductionist, 2012; and The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed, 2019) have refined the dynamic core theory of consciousness into a new information integration theory.



For a review of Koch's 2004 book, see "Consciousness: What We Still Don't Know" by J.R. Searle, New York Review of Books, 01/13/2005; also an exchange between Searle and Stevan Harnad, NYRB 06/23/2005.

For a review of Koch's 2012 book, see "Can Information Theory Explain Consciousness?" by J.R. Searle, New York Review of Books, 01/10/2013; also an exchange between Searle and Koch and Tononi (writing together), NYRB 02/.

 

Baars: Global Workspace Theory

Bernard Baars, a cognitive neuroscientist affiliated with Gerald Edelman's Neurosciences Institute, has proposed a theory of consciousness known as Global Workspace Theory (GWT; 1988).

GWT holds that the function of consciousness is to combine a number of different cognitive processes into "a single coherent experience".  It does this by making the contents of various subordinate, modular information-processing functions available to each other -- so that, for example, we can think about what we're perceiving, remember similar experiences from the past, control our emotional reactions to them, and plan what to do next.  These different flows of information all come together in a global workspace -- hence the name of the theory.  This global workspace is, essentially, working memory. 
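
The architecture is easy to sketch in code.  Here is a minimal "blackboard" toy of my own devising -- not Baars's formal model -- in which specialized modules process input unconsciously and in parallel, and whichever output wins the competition gets broadcast back to all of them (the module names and the salience rule are invented for illustration):

    # A toy "blackboard" sketch of the global-workspace idea (my illustration,
    # not Baars's formal model).  Modules process input unconsciously and in
    # parallel; the most salient output wins the competition and is
    # "broadcast" back to every module -- the conscious content of the moment.

    class Module:
        def __init__(self, name):
            self.name = name
            self.inbox = []        # broadcasts received from the workspace

        def process(self, stimulus):
            """Return a (salience, content) bid; the salience rule is a stand-in."""
            content = f"{self.name}'s analysis of {stimulus!r}"
            salience = (hash((self.name, stimulus)) % 100) / 100
            return salience, content

    def workspace_cycle(modules, stimulus):
        bids = [m.process(stimulus) for m in modules]   # unconscious, parallel
        _, winner = max(bids)                           # competition for access
        for m in modules:
            m.inbox.append(winner)                      # global broadcast
        return winner                                   # the "conscious" content

    modules = [Module("perception"), Module("memory"), Module("emotion")]
    print(workspace_cycle(modules, "red apple"))

The point of the toy is the broadcast step: the winning content becomes available to every module at once, which is just what GWT means by a "single coherent experience".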

In a nod to neuroscience, Baars's global workspace is also known as the global neuronal workspace, and the neural version of GWT is known as Global Neuronal Workspace Theory  (GNWT; Dehaene, Consciousness and the Brain, 2014).  Based on recordings of event-related potentials elicited by subliminal and supraliminal stimuli, Dehaene has suggested that the neural substrate of GW is in the frontal lobes, and specifically in prefrontal cortex.  On the other hand, in a 2018 essay, Christof Koch identified a large swath of posterior cerebral cortex -- the so-called posterior hot zone -- as the neural basis of the GW, and thus (assuming that Baars is right) of consciousness ("What Is Consciousness?", Scientific American, 06/2018).  For more details, see the lectures on The Mysterious Leap from the Body to the Mind.

 

Morsella: Supramodular Interaction Theory

Similarly, Morsella (2005) has proposed a Supramodular Interaction Theory.  Following Fodor's (1983) doctrine of modularity, Morsella argues that the outputs of different low-level modules feed into "supramodules" that perform higher-level tasks, leading to overt behavior executed by various musculo-skeletal systems.  For example, one supramodule (processing information from a low-level blood-sugar module) might give rise to feelings of hunger, initiating a search for food.  Another supramodule might take information from the skin senses, generate feelings of pain, and initiate escape or avoidance behavior.  The activities of these supramodules may conflict, as when the act of eating causes pain (because the eater has a toothache, perhaps).  The function of consciousness, in this view, is to facilitate the resolution of conflict between different musculo-skeletal systems.  Morsella argues that only the skeletal musculature is controlled by conscious processes -- everything else done by various effector systems goes on unconsciously.  Thus, it is the phenomenal awareness of conflict between action tendencies that permits the conflict to be resolved.
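
Here is an equally minimal sketch of Morsella's idea, again my own toy rather than his formalization; the supramodules, thresholds, and commands are all invented for illustration.  The point is that processing runs off unconsciously until two supramodules issue incompatible commands to the same skeletal musculature:

    # Toy of Supramodular Interaction Theory (my illustration, not Morsella's
    # formalization).  Supramodules issue commands to the skeletal musculature;
    # only when their commands conflict is conscious arbitration required.

    def hunger_supramodule(state):
        # Driven by a low-level blood-sugar module (threshold is invented).
        return "approach food" if state["blood_sugar"] < 0.3 else None

    def pain_supramodule(state):
        # Driven by the skin senses (here, a toothache signal).
        return "withdraw from food" if state["toothache"] else None

    def act(state):
        commands = [c for c in (hunger_supramodule(state),
                                pain_supramodule(state)) if c is not None]
        if len(commands) <= 1:               # no conflict: runs off unconsciously
            return commands[0] if commands else "do nothing"
        return f"conscious arbitration needed: {commands}"

    print(act({"blood_sugar": 0.2, "toothache": True}))
    # -> conscious arbitration needed: ['approach food', 'withdraw from food']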


The Turn Toward Panpsychism

One of the most interesting contemporary approaches to the mind-body problem involves a revival of panpsychism -- the idea that consciousness pervades the physical universe.  Panpsychism was implied in Morton Prince's notion of mind-stuff, and is more explicit in David Chalmers's idea that consciousness is a property of any information system.  Panpsychism was a prominent topic at the 2016 conference on "The Science of Consciousness", and also ran through a series of talks on "The Rise of Human Consciousness" presented at the New York Academy of Sciences on May 23, 2016.

Most prominently, Giulio Tononi (Biological Bulletin, 2008; Oizumi, Albantakis, & Tononi, PLoS Computational Biology, 2014), a neuroscientist at the University of Wisconsin (and former student of Gerald Edelman), has taken a page from Chalmers's book and proposed an Integrated Information Theory (IIT), which holds that consciousness is a property of any physical system that has the ability to store and integrate a large amount of information.  Tononi has also proposed a mathematical measure, phi, to quantify how much information integration -- that is, how much consciousness -- any entity has.  Information integration gives a physical system intrinsic causal power -- the ability of elements of the system to exert causal effects on each other through feed-back and feed-forward loops.  Any system with such a reentrant architecture will be conscious; and only such a system can be conscious.


The details of how phi is calculated are beyond my competence to explicate, so I'm not going to even try. 
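
Just to convey the flavor, though, here is a deliberately oversimplified toy in Python.  It is emphatically not Tononi's Phi 3.0 -- the three-unit network, its update rules, and the mutual-information shortcut are all invented for illustration.  The one idea it does capture is that phi measures the information the whole system carries about its own future over and above what its parts carry separately, evaluated at the bipartition where that surplus is smallest:

    from itertools import product
    from math import log2

    # Toy "phi" for a tiny three-unit binary network.  NOT Tononi's Phi 3.0 --
    # just the core intuition: how much does the whole system's past constrain
    # its future, over and above what its parts do on their own?

    RULES = {
        0: lambda s: s[1] ^ s[2],   # unit 0 computes XOR of units 1 and 2
        1: lambda s: s[0] & s[2],   # unit 1 computes AND of units 0 and 2
        2: lambda s: s[0] | s[1],   # unit 2 computes OR of units 0 and 1
    }

    def step(state):
        """One deterministic update of the whole network."""
        return tuple(RULES[i](state) for i in range(3))

    def mutual_info(pairs):
        """Mutual information (bits) between the two coordinates of (x, y)
        pairs, treating each pair as equally likely."""
        n = len(pairs)
        px, py, pxy = {}, {}, {}
        for x, y in pairs:
            px[x] = px.get(x, 0) + 1 / n
            py[y] = py.get(y, 0) + 1 / n
            pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
        return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

    def project(state, part):
        return tuple(state[i] for i in part)

    states = list(product([0, 1], repeat=3))
    transitions = [(s, step(s)) for s in states]
    whole = mutual_info(transitions)      # integration of the intact system

    # Toy phi: the whole's surplus over the sum of its parts, at the
    # bipartition where that surplus is smallest (the "minimum information
    # partition").
    bipartitions = [((0,), (1, 2)), ((1,), (0, 2)), ((2,), (0, 1))]
    phi = min(
        whole - sum(
            mutual_info([(project(s, part), project(t, part))
                         for s, t in transitions])
            for part in (a, b))
        for a, b in bipartitions
    )
    print(f"toy phi = {phi:.3f} bits")

On this toy measure, a system whose parts are causally independent of each other gets a phi of zero -- which is the intuition Tononi is after.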

It is important to note one implication of IIT, at least as noted by Christof Koch in a recent essay ("Proust Among the Machines", Scientific American, 12/2019).  In IIT, consciousness is not just a property of the brain.  It is a property of any physical system that has achieved a sufficient degree of phi -- including computers (and robots).  In this respect, Koch believes that IIT differs from GNWT, its chief competitor (at least as of 2020).  In Koch's view, GNWT insists that consciousness emerges from neural systems with certain properties: the brains of mammals have these properties, but other animals, like jellyfish, do not; like plants and computers, they are not and cannot be conscious because they lack the appropriate neural architecture.  But he might not be right.  In a recent exposition of GNWT, Dehaene et al. (Science, 2017) are clear that, as they define it, any physical system with certain computational properties would also have consciousness.

Unfortunately, phi has some problems.  Scott Aaronson, a computer scientist, has calculated in a 2014 blogpost that, following Tononi's formula, a DVD player (technically, a component of a DVD player that corrects for errors induced by flawed media) has more consciousness than a human brain.  Aaronson argues that such a result contradicts our intuition that, however much consciousness a DVD player has, it can't have more consciousness than a typical human.

In reply (also posted to Aaronson's blog), Tononi noted that Aaronson's calculations depended on an earlier version of the Phi formula -- but then conceded that the latest version, known as Phi 3.0, confirms Aaronson's result, and actually increases the superiority of the DVD player.  More important, Tononi isn't bothered by the notion that a DVD player might have consciousness.  He points out, correctly, that empirical science has a long and distinguished history of correcting our intuitions about the nature of things.  And, in fact, he goes on to assert that an even simpler structure, a sufficiently large 2-dimensional grid of logic gates, when physically implemented, would have a large value of phi -- and, thus, be conscious.

Aaronson, in his rejoinder, calls this the "Copernicus of Consciousness Argument", after the astronomer who argued that, despite our intuitions, the sun does not revolve around the earth.  He insists that any formula which indicates that a "simple expander graph" has consciousness doesn't comport with common sense.  Or, as he puts it, "When the clock strikes 13, it's time to fix the clock...".

Link to Aaronson's blogpost, which contains a very math-heavy explication of phi.

Link to Tononi's reply.

Link to Aaronson's rejoinder.

Still, the important point in this context is that Tononi has seriously proposed that nonliving matter, including any arbitrary physical system, can be conscious, so long as it can store and integrate a large amount of information. 

Despite this criticism, Max Tegmark (2015), a cosmologist at MIT (and UC Berkeley PhD), has embraced and expanded IIT.  Tegmark actually proposes that consciousness is a state of matter, like solid, liquid, gas, or plasma.  He calls it "perceptronium", in homage to perceptrons -- the early self-organizing "neural networks", introduced by Frank Rosenblatt and famously analyzed by Minsky and Papert, that were capable of learning.  Tegmark traces several steps that lie between the usual states of matter and states of consciousness.



  1. The substance ought to be able to store information -- that is, hold a particular state -- for a relatively long period of time.  For this reason, solids are more likely than liquids or gases to have consciousness.  If such a substance is also easy to write to and read from, it has the qualities of memory -- the "first warmup step toward consciousness" (2015, p. 3).
  2. A substance with consciousness also has to have what he calls computronium -- that is, the computer-like ability to process information. 
  3. Finally, a conscious substance will have perceptronium, which allows for subjective self-awareness.

Putting all of this together, Tegmark argues that any physical system will have consciousness if it satisfies six -- but really only four -- principles.




  1. Information: substantial storage capacity.
  2. Dynamics: information-processing capacity.
  3. Independence from the rest of the physical world.
  4. Integration of its "nearly independent parts".
  5. Autonomy is the combination of dynamics and independence (thus it doesn't really count as a fifth principle).
  6. Utility is not necessary (and thus doesn't really count as a sixth principle), but such a principle would explain why conscious systems evolved.

In some ways, Tegmark's version of panpsychism is preferable to Chalmers's and Tononi's, because it puts limits on what can be conscious.  As he writes, "clocks and diesel generators tend to exhibit high autonomy, but lack substantial information storage capacity" (2015, p. 4).  Therefore, they are unlikely to be conscious.  Still, it seems to me that any materialist theory that has to solve the mind-body problem by invoking a new substance known as "perceptronium" might just as well cite Morton Prince and talk about "mind-stuff".  It gets us to pretty much the same place.  Which is nowhere.

Nevertheless, IIT, like GWT/GNWT, remains very popular.  In 2023, an international consortium of investigators (known as the Cogitate Consortium), including Stanislas Dehaene (a proponent of GNWT) and Christof Koch and Giulio Tononi (proponents of IIT), reported on an "adversarial collaboration" in which proponents of the two theories agreed on terms for a comparison.  For example:

  1. IIT postulates that conscious contents will be instantiated in posterior brain regions (the "hot zone"), while GNWT predicts that activity in the prefrontal cortex is necessary for consciousness (remember, GWT essentially identifies consciousness with working memory, and GNWT identifies working memory with the executive functions localized in prefrontal cortex).
  2. IIT predicts sustained activity in the "posterior hot zone" so long as the stimulus is consciously represented, while GNWT predicts an "ignition" of activity in prefrontal cortex at the onset and offset of the stimulus.
  3. IIT predicts "short-range connectivity" within posterior cortex between low-level sensory areas (e.g., primary visual cortex) and high-level category-specific areas (e.g., the fusiform face area), while GNWT predicts "long-range connectivity" between these posterior areas and pre-frontal cortex.

The study used fMRI, magnetoencephalography (MEG), and electrocorticography (ECoG) to record brain activity while subjects viewed suprathreshold (thus conscious) visual stimuli presented for various durations.  The paper was posted (06/23/2023) in advance of peer review, but in the authors' view (there are lots and lots of them), the results came out about evenly split between the theories.  There was brain activity following stimulation (which I suppose counts as a plus!).

    1. In accordance with IIT, consciousness was accompanied by posterior activation, independent of activation in prefrontal cortex;
    2. but contrary to IIT, there was little evidence for synchronized activity between low- and high-level posterior areas.
    3. In accordance with GNWT, there was "ignition" in prefrontal cortex at the time of stimulus onset;
    4. but contrary to GNWT, there was no further "ignition" in prefrontal cortex at the time of stimulus offset.
So, as they say, more research is needed. 
Here's a link to an article in Science describing the adversarial collaboration: "Consciousness hunt yields results but not clarity" by Elizabeth Finkel (06/30/2023). 

And, for good measure, here's a link to a report of the study, posted prior to any peer review (usually a nursery-school no-no).

Still, the panpsychist implications of IIT bother some people (not without justification, we should note).  Just about the time that the Cogitate Consortium paper was posted online, another group (including Bernard Baars, Patricia Churchland, and Daniel Dennett) posted a declaration that IIT was "pseudoscience".  It's not entirely clear why IIT should be labeled "pseudoscience", though the authors of the declaration are clearly perturbed by its panpsychist implications.  Admittedly, any theory that predicts that a DVD player is conscious is likely to be wrong.  Interestingly, they are also concerned about the medicolegal implications of IIT: it predicts that fetuses are conscious even at very early stages of development (with implications for the legalization of abortion at early stages of pregnancy), and might be used to impute consciousness inappropriately to coma patients (with implications for the rights of family members to discontinue life support).  But that doesn't make it pseudoscience.  There's a difference between being wrong, as I think IIT is, and being pseudoscientific. 

See "Consciousness theory slammed as ‘pseudoscience’ — sparking uproar" by Mariana Lenharo, Nature, 2023).

And here's a link to the letter castigating IIT as "pseudoscience".


Zap and Zip

Issues of panpsychism aside, Tononi's integrated information theory of consciousness has inspired the development of a strictly biological measure of consciousness, one that does not rely on either self-report or behavior, and that can be applied to any species that has a brain: the Perturbational Complexity Index (PCI; Massimini et al., 2005).  The general idea behind the PCI is that conscious processing involves the complex integration of various areas and systems of the brain, especially in the thalamocortical system.  To obtain a value for PCI, the investigators first deliver a pulse of transcranial magnetic stimulation (TMS) to the brain: this is the "Zap" of what has been called Zap and Zip.  The electrical potentials evoked by this stimulus are then recorded by EEG, by means of a dense array of electrodes.  These recordings, taken from various regions of the cerebral cortex, are then aggregated by an algorithm based on data-compression techniques borrowed from computer science (think zip files or MP3s): this is the "Zip" of Zap and Zip.  To take an analogy: imagine that an organ sounds a chord in a great cathedral, and we measure how long it reverberates, and how the echo reaches, and is passed around, the various alcoves and chapels that surround the main nave.  PCI is kind of like that.  But there isn't just one "zap".  As with event-related potentials, there are dozens, even hundreds, of zaps.  Each individual subject then receives an aggregate score, ranging from 0 to 1; the investigator may also record the maximum individual PCI score, known as PCImax.
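
To make the "Zip" step a little more concrete, here is a bare-bones sketch in Python.  It is not the actual Casali et al. pipeline, which involves source modeling and statistical thresholding of the evoked responses; the fixed threshold, the simplified Lempel-Ziv phrase count, and the normalization below are stand-ins for illustration:

    import numpy as np

    def lempel_ziv_complexity(binary_string):
        """Count distinct phrases in a simple Lempel-Ziv parsing -- the 'zip'
        step: the more compressible the response, the lower the count."""
        seen, phrase, count = set(), "", 0
        for bit in binary_string:
            phrase += bit
            if phrase not in seen:
                seen.add(phrase)
                count += 1
                phrase = ""
        return count + (1 if phrase else 0)

    def toy_pci(evoked, threshold):
        """evoked: channels x time array of TMS-evoked activity (the 'zap').
        Binarize (significant vs. not), flatten, and normalize the LZ count
        by the value expected for a maximally random string of the same
        length and bit-rate -- roughly the spirit of Casali et al. (2013)."""
        binary = (np.abs(evoked) > threshold).astype(int)
        s = "".join(map(str, binary.flatten()))
        n, p = len(s), binary.mean()
        if p in (0.0, 1.0):                      # nothing to compress
            return 0.0
        source_entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
        # Asymptotic LZ complexity of a random source: n * H / log2(n).
        return lempel_ziv_complexity(s) / (n * source_entropy / np.log2(n))

    # Example: 64 channels x 300 time samples of simulated evoked potentials.
    rng = np.random.default_rng(0)
    evoked = rng.normal(size=(64, 300))
    print(f"toy PCI = {toy_pci(evoked, threshold=1.5):.2f}")

The design intuition matches the cathedral analogy: a response that echoes through many regions in a complex, non-repeating way resists compression and scores high; a response that dies out locally, or repeats stereotypically, compresses well and scores low.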

In an initial study, Massimini, Tononi, and their colleagues showed that PCI could distinguish between waking and sleeping in normal subjects (Massimini et al., Science, 2005).  A later study showed that PCI could distinguish between subjects who were awake, asleep (NREM sleep), or anesthetized (by various agents), as well as brain-injured patients in various stages of coma (Casali et al., Science Translational Medicine, 2013).  A more recent study found that a PCI cutoff of .31 had 100% specificity and sensitivity in discriminating between subjects who were conscious and those who were unconscious (i.e., in NREM sleep or anesthetized).  That is, 100% of the subjects the index identified as conscious were indeed conscious (including those who were dreaming), and 100% of those it identified as having lost consciousness were indeed unconscious (including those in dreamless sleep).  Applying this criterion to patients in various stages of coma, PCI successfully identified all of those who were in the "minimally conscious state", and suggested that a minority of those diagnosed as in the "vegetative state" possessed at least minimal levels of consciousness.  The upshot of all this work is that Tononi, Massimini, and their colleagues believe that they have created a biological index of consciousness that does not rely on either self-reports or behavioral responsiveness; and that this index is based on Tononi's Integrated Information Theory of consciousness.
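
To see concretely what a cutoff of .31 with perfect sensitivity and specificity amounts to, here is the arithmetic; the PCImax scores below are invented purely for illustration, not data from the study:

    def classify_conscious(pci_max, cutoff=0.31):
        """Decision rule described in the text: a PCImax above the cutoff
        counts as conscious."""
        return pci_max > cutoff

    # Hypothetical PCImax scores, invented for illustration only:
    conscious   = [0.44, 0.52, 0.38, 0.61]   # e.g., awake or dreaming subjects
    unconscious = [0.18, 0.25, 0.12, 0.29]   # e.g., NREM sleep or anesthesia

    sensitivity = sum(classify_conscious(v) for v in conscious) / len(conscious)
    specificity = sum(not classify_conscious(v) for v in unconscious) / len(unconscious)
    print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
    # -> sensitivity = 100%, specificity = 100%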


There is more on consciousness in sleep, anesthesia, and various stages of coma elsewhere in these lectures, where you'll also find further discussion of the Casali et al. (2016) paper.

  • Link to lectures on consciousness in coma, including the vegetative state (VS) and the minimally conscious state (MCS).  The clinical differentiation between the vegetative and minimally conscious states is inexact, so it is not particularly surprising that some patients in VS, traditionally presumed to be unconscious, actually retain some minimal level of consciousness; but nothing in this study suggests that all VS patients, or even the majority, are actually consciously aware of what is going on around them.
  • Link to lectures on consciousness in general anesthesia.  There is a persisting question of whether, and to what extent, anesthetized patients are aware of surgical events, even if they're amnesic for these events afterwards; this study suggests that they're not.
  • Link to lectures on consciousness in sleep and dreams.  Casali et al. appear to construe NREM sleep as dreamless, and thus a loss of consciousness, but there is evidence of dreaming, or at least some form of "thought-like" mental activity in NREM.
For an account of the discovery and implications of PCI, see "How to Make a Consciousness Meter" by Christof Koch, Scientific American, 11/2017.  Koch thinks that the PCI will permit us to objectively measure consciousness without relying on self-report.  I think he's mistaken, for reasons that will become clearer in the lectures on sleep and dreams.



Back At Descartes' Impasse

At the end of this tour, we appear, like James, to be left stranded at Descartes' impasse.  As scientists, we must embrace materialism: the mind is what the brain does, and science must be based on public observation.  But our experience as conscious beings inclines us toward dualism: the intuition that mental states are different from physical ones, the belief that mental states have causal efficacy, and the view that there is something about consciousness that is essentially private and subjective.



As a result, philosophers and other cognitive scientists will continue to argue about the mind-body problem.

This situation, where after more than 350 years of hard thought we are pretty much back where we started, suggests that Searle is probably right that the mind-body question has been poorly constructed, and that to make progress we have to break out of received categories of mind and body, dualism and monism, idealism and materialism. We know that the mind is a function of the body: the mind is what the brain does, and our mental capacities are the product of biological evolution.  At the same time, dualism has its pleasures.  It comports with our experience of ourselves as sentient beings with free will.  Psychologists, who are primarily concerned with the nature of mental life, are by their very nature dualists of a sort.  Moreover, dualism keeps certain interesting problems alive, such as the problem of free will and the possibility of psychosomatic interactions.

Therefore, one project for neuroscience is to determine how the brain does it -- that is, to identify the neural substrates of consciousness.

In 2019, the Templeton World Charity Foundation, a unit of the Templeton Foundation, which supports scholarly research at the intersection of science and religion, announced a $20 million grant program called Accelerating Research on Consciousness (ARC), intended to identify the neural substrates of consciousness (Templeton has long been interested in the problem of free will, which boils down to the question of whether deliberate conscious mental activity plays a causal role in behavior).  The plan, as described in Science ("Rival Theories Face Off Over Brain's Source of Consciousness" by Sara Reardon, 10/18/2019), is to take about a half-dozen prominent theories and pit them against each other in what is known as adversarial collaboration, a process proposed by Daniel Kahneman (American Psychologist, 2003) in which the proponents of competing theories work together to test their theories against each other.  That is, the two parties agree on experiments that would decisively test between their respective theories.  The first adversarial collaboration, in which $5 million will be spread out across five international laboratories, will pit a version of Baars's Global Workspace Theory (GWT) against Tononi's Integrated Information Theory (IIT).  Stanislas Dehaene (in Consciousness and the Brain, 2014) has proposed a version of GWT which emphasizes the role of the prefrontal cortex -- which is reasonable enough, given that the "Global Workspace" is tantamount to working memory.  By the same token, Tononi proposes that the interconnectedness of the brain, allowing information integration to occur, is greatest in the parietal lobes.  So the research will involve using several methodologies -- fMRI, EEG, and electrocorticography (ECoG, to be used with neurosurgical patients) -- to measure the activation of various brain regions while subjects engage in activities involving conscious processing.  Like what, you say?  We'll see.  In any event, the theory whose brain regions activate (or, I suppose, activate the most) during task performance will win the contest.

Actually, a specific research proposal was released in 2021 by a group of scientists including Koch (Melloni et al., "Making the Hard Problem of Consciousness Easier", Science, 05/28/2021).  They interpret GNWT as predicting that consciousness will occur with an increase in activation of frontal-parietal areas within about 300 msec of stimulus onset, and again within about 300 msec of stimulus offset.  IIT, by contrast, predicts that activation of the posterior hot zone (PHZ) will arise within 300 msec of stimulus onset, and persist until stimulus offset.  For example, brain activity might be recorded by EEG, MEG, or fMRI, or perhaps even by an array of electrodes resting directly on (or in) the cortex, while subliminal and supraliminal stimuli are presented to the subjects.  According to IIT, activation should be confined to the PHZ, while GNWT predicts that activation will also be found in frontal and fronto-temporal areas.  GNWT predicts a period of inactivity between stimulus onset and stimulus offset, while IIT predicts constant activation during this period.  You get the idea. 

In addition, these researchers also proposed to test "first-order" and "higher-order" theories of consciousness against each other.  First-order theories, such as the one proposed by Ned Block, predict that consciousness of stimuli (visual, auditory, etc.) will be associated with activation in the corresponding sensory areas.  Higher-order theories, such as the one proposed by David Rosenthal, predict that consciousness will also entail the activation of a "higher-order" brain network that, essentially, "points to" (as it were) the neural representation of the stimulus.  We can see how these theories could also be tested with the imaging methods used to test GNWT against IIT.

Stay tuned for further developments.  But while some cognitive scientists engage in endless debates over consciousness, functionalism, and reductionism, and search (I think fruitlessly) for the consciousness module, or system, or something, in the brain, there are still a number of interesting scientific problems to be addressed.  

 

Four Problems of Body and Mind

We also know, intuitively, that consciousness has causal efficacy.  Our thoughts, feelings, and desires cause us to do what we do, and our mental states can affect bodily states.  So another project for science is to find out how the brain does this, too: how do mental states affect bodily states and functions, as in placebo effects and psychosomatic effects?  Both projects are materialist in nature, but both take mind, and consciousness, seriously -- they don't try to write them out of the picture.

It turns out that there is not just one mind-body problem, but at least four.  Each aspect of the mind-body problem is taken up in separate lecture supplements:

The Mysterious Leap from the Body to the Mind: The Neural Correlates of Consciousness

The Puzzling Leap from the Mind to the Body: Psychosomatics

Mind Without Body?  Spiritualism and Parapsychology

Body Without Mind: Reflex, Taxis, Instinct -- and Zombies


This page last revised 10/09/2023.