

Social-Cognitive Neuropsychology (and Neuroscience)


For background, see also the General Psychology lectures on

"The Biological Basis of Mind and Behavior"

 

Mind and behavior can be analyzed at any of three broad levels:




Traditionally, social psychology has served to link psychology with the other social sciences, such as sociology, anthropology, economics, and political science.  And biological psychology has served to link psychology with the other biological sciences.  With the advent of brain-imaging technologies such as PET and fMRI, psychology is now in a position to ask questions about how mental processes are implemented in the brain.  This project is most advanced within cognitive psychology, with a plethora of studies of the neural bases of perception, memory, and the like.  But social psychology, and especially social cognition, has been part of this scene as well.  Hence the emergence of social neuroscience, also known as social-cognitive neuroscience, and, most recently, cultural neuroscience (Kitayama & Uskul, 2011).


Mind in Body: A Functionalist Perspective

The task of psychology is to understand the nature of human mental life, including the biological bases of mental life.  

From a functionalist perspective, psychologists try to understand the role that mental life plays in adaptive behavior -- and, more broadly, the relation of mind to other things (in James' phrase), including:

 

Structuralism and Functionalism

In the 19th century, William James (1842-1910) defined psychology as the science of mental life, attempting to understand the cognitive, emotional, and motivational processes that underlie human experience, thought, and action.  But even in the earliest days of scientific psychology, there was a lively debate about how to go about studying the mind.  Accordingly, there arose the first of several "schools" of psychology, structuralism and functionalism.

One point of view, known as structuralism, tried to study mind in the abstract.  Wundt and others in this school argued that complex mental states were assembled from elementary sensations, images, and feelings.  Structuralists used a technique known as introspection (literally, "looking into" one's own mind) to determine, for example, the elements of the experience of a particular color or taste.  The major proponents of structuralism were Wilhelm Wundt and his student E.B. Titchener.

Structuralism was beset by a number of problems, especially the unreliability and other limitations of introspection, but the structuralists did contribute to our understanding of the basic qualities of sensory experience (see Boring's The Physical Dimensions of Consciousness, 1933).

Another point of view, known as functionalism, was skeptical of the structuralist claim that we can understand mind in the abstract.  Based on Charles Darwin's (1809-1882)  theory of evolution, which argued that biological forms are adapted to their use, the functionalists focused instead on what the mind does, and how it works.  While the structuralists emphasized the analysis of complex mental contents into their constituent elements, the functionalists were more interested in mental operations and their behavioral consequences.  Prominent functionalists were:

The functionalist point of view can be summarized in three points:

 

The Two Functionalisms

Psychological functionalism is often called "Chicago functionalism", because its intellectual base was at the University of Chicago, where both Dewey and Angell were on the faculty (functionalism also prevailed at Columbia University).  It is to be distinguished from the functionalist theories of mind associated with some modern approaches to artificial intelligence (e.g., the work of Daniel Dennett, a philosopher at Tufts University), which describe mental processes in terms of the logical and computational functions that relate sensory inputs to behavioral outputs.  This philosophical functionalism, closely related to Skinnerian functional behaviorism, is quite different from the psychological "Chicago functionalism" of James, Dewey, and Angell.

Social psychology, and in particular social cognition, serves some of these purposes well. 


Mind in Body

Until recently, however, social cognition, and social psychology more broadly, has pretty much ignored mind in body, the third leg in the functionalist triad.  Most social-psychological experiments and theories stick to the psychological level of analysis, making little or no reference to underlying biological processes.  In this way, as in so many other ways, social cognition has followed the lead of cognitive psychology in general.

But in the last few decades, and especially since the 1990s, cognitive psychology (and, of course, cognitive neuroscience) has made great advances in understanding the biological basis of cognition and behavior.  Similarly, social psychology has become increasingly interested in biology, as indicated by several trends:

 

On Terminology

Contemporary social neuroscience has its origins in physiological psychology, a term coined by Wundt in the 1870s to refer to experimental (as opposed to speculative) psychology in all its forms.  Later, the term physiological psychology came to cover expressly physiological studies of various aspects of sensory function and behavior -- especially animal studies.

The term neuropsychology emerged in the 1950s to cover experimental studies of mind and behavior in human neurological patients suffering from various forms of brain insult, injury, or disease.  It appears that Karl Lashley was the first psychologist to refer to himself as a neuropsychologist (1955), and the journal Neuropsychologia was founded in 1963 for the express purpose of publishing human neuropsychological research.

The term neuroscience was introduced in 1963, referring to an expressly interdisciplinary effort to study the nervous system in all its aspects, from the micro to the macro levels of analysis.

 

Initially, neuroscience consisted of three subfields:

Very quickly, however, behavioral neuroscience began to spin off a wide variety of subfields:

Link to a paper which traces the evolution of social neuroscience and related fields.

As a matter of personal preference, I prefer neuropsychology to neuroscience, because the former term places emphasis on the neural basis of mind and behavior, while the latter term places emphasis on -- well, neurons.  Neuropsychology is a field of psychology, a discipline concerned with mind and behavior.  Neuroscience is a field of biology, especially concerned with the anatomy and physiology of the nervous system.  If you're interested in synapses and neurotransmitters, it makes sense to call yourself a neuroscientist.  But if you're interested in the mind and behavior of the individual person, it makes more sense to identify yourself as a psychologist.  Still, the term neuroscience is preferred by the majority of researchers in the field, incorporating neuropsychology as a particular subfield (focusing on behavioral studies of brain damage), and I'll bow to the inevitable.

Whatever you call it, neuropsychology or neuroscience, the field has its roots in physiological psychology.  In the 19th century, physiological psychology worked in parallel with psychophysics to put the study of mental life on an objective basis -- physiological psychology, by tying subjective mental states (like sensations and percepts) to objectively observable nervous system structures and processes; psychophysics by tying those same subjective mental states to objectively observable properties of physical stimuli.  But in the 20th century, physiological psychology took on the task of identifying the relations between behavior and physiological processes.


This is, essentially, the same task that behavioral neuroscience (including cognitive, social, affective, and conative neuroscience) undertakes today.  Earlier, David Marr had proposed that cognition could be analyzed at three quite different levels (things always seem to go in threes!):

  • the computational level, which specifies what problem the system is solving, and why;
  • the algorithmic level, which specifies the representations and processes by which the problem is solved;
  • the implementational level, which specifies how those representations and processes are physically realized.


So, to take an example from Palmer (1999), consider a thermostat, which can be described at each of these three levels (see the sketch below).
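A toy sketch may help make the distinction concrete.  At the computational level, the thermostat's job is to keep room temperature near a set point; the Python snippet below is one possible algorithmic-level description (a simple on/off rule with hysteresis; names like set_point and hysteresis are purely illustrative); the implementational level would be the bimetallic strip, relay, or microchip that physically realizes the rule.

```python
# A toy, algorithmic-level description of a thermostat (illustrative names only).
# Computational level: keep the room temperature near a set point.
# Implementational level (not shown): a bimetallic strip, relay, or microchip.

def thermostat_step(temperature, set_point=20.0, hysteresis=0.5, furnace_on=False):
    """Return whether the furnace should be on, given the current temperature."""
    if temperature < set_point - hysteresis:
        return True           # too cold: switch the furnace on
    if temperature > set_point + hysteresis:
        return False          # too warm: switch it off
    return furnace_on         # within the dead band: leave the furnace as it is

# The same rule could be realized in metal, silicon, or neurons -- Marr's point
# is that the levels of description are largely independent of one another.
print(thermostat_step(18.2))   # True
print(thermostat_step(21.3))   # False
```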

Now, to take an example closer to social cognition, consider Anderson's "cognitive algebra" approach to impression-formation, as described in the lectures on Social Perception.
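As a rough sketch of what a "cognitive algebra" looks like at the algorithmic level -- assuming the simple weighted-averaging form of Anderson's model, with made-up trait values and weights -- an impression might be computed like this:

```python
# A rough sketch of a weighted-averaging rule for impression formation.
# The trait scale values (s) and weights (w) below are invented for illustration;
# s0 and w0 stand for the perceiver's initial impression, prior to any information.

def weighted_average_impression(traits, s0=0.0, w0=1.0):
    """traits: list of (scale_value, weight) pairs for the adjectives presented."""
    numerator = w0 * s0 + sum(w * s for s, w in traits)
    denominator = w0 + sum(w for _, w in traits)
    return numerator / denominator

# "intelligent" (+3, weight 2) and "cold" (-2, weight 4): the negative trait,
# being more heavily weighted, pulls the overall impression below neutral.
print(round(weighted_average_impression([(3, 2), (-2, 4)]), 2))   # -0.29
```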

Gazzaniga, in the very first statement of the agenda for cognitive neuroscience, proposed to connect the algorithmic level directly to its implementation in brain tissue -- in other words, to relate the mind's cognitive functions to brain activity.




The Doctrine of Modularity

Perhaps the simplest view of mind-brain relations is to think of the brain as a general-purpose information-processor, much like the central-processing unit of a high-speed computer.  This is the view of the brain implicit in traditional learning theories, which assume that learning is a matter of forming associations between stimuli and responses.  It is also the view implicit in traditional "empiricist" theories of the mind, which assume that knowledge and thought proceed by associations as well.  Bits of knowledge are represented by clusters of neurons, and the associations between them are represented by synaptic connections -- or something like that.  

This was the view of the brain which prevailed well into the 1960s, which acknowledged relatively small areas of the brain devoted to motor, somatosensory, visual, and auditory processing, but construed the vast bulk of the brain as association cortex -- devoted, as its name implies, to forming associations between stimuli and responses.

 

 

But if the brain were simply a general-purpose information-processing machine, there would be no reason to be talking about a distinctly cognitive or social neuropsychology or neuroscience, because there would be no reason to think that social cognition and behavior is performed any differently by the brain than is any other aspect of cognition and behavior.  

A distinctly cognitive or social neuroscience is justified only by an alternative view, variously known as functional specialization, or localization of function, which holds that different psychological functions are performed by different parts of the brain.  Thus, the brain is not a single, massive, undifferentiated organ, whose job it is to "think", but a collection of systems, each of which is dedicated to a particular cognitive, emotional, or motivational function.  This viewpoint prevails in contemporary neuropsychology and neuroscience in general, as well as social neuropsychology and neuroscience in particular.  Cognitive psychologists started taking the brain seriously in the late 1960s and early 1970s, but the whole idea of looking at the brain occurred to social psychologists only very recently (Klein & Kihlstrom, 1986).  

It is just this view which underlies the Doctrine of Modularity, articulated by the philosopher Jerry Fodor in 1983.  According to Fodor, the various input and output systems of the mind are structured as mental modules, which share a number of properties, of which three are most important for our purposes.




It is the task of cognitive, affective, and social neuroscience to identify the brain structures corresponding to these mental modules.  It should be understood that all of cognitive neuroscience is based, at least implicitly, on some version of the doctrine of modularity.  If the brain is just a general-purpose information processor, then it would make no sense to look for specific areas of the brain that perform particular mental functions.  We wouldn't expect to see discrete lesions having specific effects, and we wouldn't expect to see specific areas of the brain lighting up in an fMRI.

For a charming introduction to the application of the Doctrine of Modularity to neuroscience research, see "The Neuroanatomy Lesson (Director's Cut)" by Prof. Nancy Kanwisher at MIT.


Phrenology

The neuroscientific doctrine of modularity traces its origins to Franz Joseph Gall (1758-1828), Spurzheim, and others who promoted a pseudoscience known as phrenology.  Phrenology was popularized in America by Nelson Sizer and O.S. Fowler, and you can sometimes purchase a ceramic copy of Fowler's "phrenological head" in novelty stores.  Well into the 19th century, the brain was thought to be a single, massive, undifferentiated organ.  But according to Gall and his followers, the brain is composed of discrete organs, each associated with some psychological faculty, such as morality or love.  In classic phrenological doctrine, there were about 35-40 such faculties, and the entire set of faculties comprises a common-sense description of human abilities and traits.

 

The Phrenological Faculties

  1. Amativeness or Physical Love: the reproductive instinct, sexual attraction, and sexual desire (wouldn't you know that the phrenologists would name this one first!).
  2. Philoprogenitiveness or Parental Love: A particular feeling which watches over and provides for helpless offspring, or parental love.
  3. Adhesiveness or Friendship: A feeling or attraction to become friendly with other persons, or to increase social contacts.
  4. Combativeness: The disposition to quarrel and fight.
  5. Destructiveness: The propensity to destroy.
  6. Secretiveness: The propensity to conceal, predisposes the individual to Cunning and Slyness.
  7. Acquisitiveness:  The propensity to acquire.
  8. Self-Esteem: This sentiment gives us a great opinion of ourselves, constituting self-love.
  9. Approbativeness: This faculty seeks the approbation of others. It makes us attentive to the opinion entertained by others of ourselves.
  10. Cautiousness: This organ incites us to take precautions.
  11. Individuality: This faculty contributes to the recognition of the existence of individual beings, and facilitates the embodiment of several elements into one.
  12. Locality: This faculty conceives of places occupied by the objects that surround us.
  13. Form: This allows us to understand the shapes of objects.
  14. Verbal Memory: The memory for words.
  15. Language: Philology in general.
  16. Coloring: This organ cognizes, recollects, and judges the relations of colors.
  17. Tune: The organ of musical perception.
  18. Calculativeness or Number: The organ responsible for the ability to calculate and to handle numbers and figures.
  19. Constructiveness: The faculty leading to the will of constructing something.
  20. Comparison: This faculty compares the sensations and notions excited by all other faculties, points out their similitudes, analogies, differences or identity, and comprehends their relations, harmony or discord.
  Causality: This faculty allows us to understand the reason behind events.
  21. Vitativeness or Wit: This faculty predisposes men to view every thing in a joyful way.
  22. Ideality: This faculty vivifies the other faculties and impresses a peculiar character called ideal.
  23. Benevolence: This power produces mildness and goodness, compassion, and kindness, humanity.
  24. Imitativeness: This organ produces a fondness for acting and for dramatic representation.
  25. Generation: This faculty allows us to come up with new ideas.
  26. Firmness: This faculty gives constancy and perseverance to the other powers, contributing to maintain their activity.
  Time: The faculty of time conceives of the duration of phenomena.
  27. Eventuality: This faculty recognizes the activity of every other, and acts in turn upon all of them.
  28. Inhabitiveness: The instinct that prompts one to select a particular dwelling, often called attachment to home.
  29. Reverence or Veneration: By this organ's agency man adores God, venerates saints, and respects persons and things.
  30. Conscientiousness: This organ produces a feeling of justice and conscientiousness, or the love of truth and duty.
  31. Hope: Hope induces a belief in the possibility of whatever the other faculties desire, it inspires optimism about future events.
  32. Marvelousness: This sentiment inspires belief in the true and the false prophet, and aids superstition, but is also essential to the belief in the doctrines of religion.
  Size: This organ provides notions of the dimensions or size of external objects.
  33. Weight and Resistance: This faculty procures the knowledge of the specific gravity of objects, and is of use whenever weight or resistance are worked upon with the hands, or by means of tools.
  34. Order: This faculty gives method and order to objects only as they are physically related.

From: "Phrenology, the History of Brain Localization" 
by Renato M.E. Sabbatini, PhD (Brain & Mind, March 1997)

Gall and other phrenologists inferred the associations between these abilities and traits and brain locations by looking at exemplary cases: if, for example, an obese person had a bulge in some part of his skull, or a skinny person had a depression in the same area, they reasoned that the area of the brain underneath must control eating.  In this way, the phrenologists inferred the localization of each of the mental faculties in their system.



Gall's phrenological doctrines were embraced by Auguste Comte, the 19th-century French philosopher who is generally regarded as the father of sociology. In his Positive Philosophy (Comte was also the father of positivism), he expressed general agreement with Gall's system, differing only in the number of faculties (which Comte called the "internal functions of the brain", or instincts). Comte accepted only 18 such functions, which he divided into three categories: Affective functions (located in the back or lower part of the brain), Intellectual functions (located in the front part of the brain), and Practical functions (located in the center of the brain). The following list is taken from an article by James Webb, reviewing the correspondence of Comte with John Stuart Mill (Phrenological Journal and Science, May 1900).

  1. Nutritive Instinct (essentially, Gall's Alimentativeness)
  2. Sexual Instinct (Amativeness)
  3. Maternal Instinct (Philoprogenitiveness)
  4. Military Instinct (Destructiveness)
  5. Industrial Instinct (Constructiveness)
  6. Temporal Ambition or Pride and Desire of Power (Self-Esteem)
  7. Spiritual Ambition or Vanity or Desire of Approbation (Love of Approbation)
  8. Attachment (Friendship)
  9. Veneration
  10. Benevolence or Universal Love (sympathy, Humanity)
  11. Passive Conception (concrete)
  12. Passive Conception (abstract)
  13. Active Conception (inductive comparison)
  14. Active Conception (deductive)
  15. Expression, mimic, oral, written (Imitation, Language)
  16. Courage (Combativeness)
  17. Prudence (Caution)
  18. Firmness

Comte was quite clear that each of these internal functions was mediated by a separate part of the brain. In fact, Comte founded a Church of Humanity as an enlightened, secular alternative to Christianity, with himself as the inaugural Grand Pontiff (he must have been quite ambitious, if not downright dotty), one of whose rituals was that "disciples should tap themselves each day three times on the back of the head, where the impulses of 'Good Will', 'Order', and 'Progress' were stored" (Thomas Meaney, "The Religion of Science and Its High Priest" [review of Auguste Comte: An Intellectual Biography by Mary Pickering], New York Review of Books, 10/25/2012). According to Meaney, the last operational Temple of Humanity is located in Paris, at 5 Rue Payenne.

 

Gall's basic idea was pretty good, but his science was very bad (which is why we call phrenology a pseudoscience).  Nobody takes phrenology seriously these days.  However, evidence from neuropsychology and neuroscience favors Gall's idea of functional specialization, though not in Gall's extreme form.  The modern, scientifically based list of mental "faculties", and corresponding brain regions, is much different from his.



Comparing the classical and modern phrenological heads, one thing stands out: while modern neuropsychology and neuroscience have focused on the brain structures and systems associated with nonsocial cognition, such as color vision or the perception of motion, the classical phrenologists were much more concerned with personality and social functioning.  Indeed, roughly half of the classic phrenological faculties referred to personality traits or social behaviors, as opposed to nonsocial aspects of sensation, perception, memory, language, and reasoning.  Although social cognition and behavior were slighted in the early years of neuropsychology and neuroscience, personality and social psychology have begun to catch up.

 

Phineas Gage

Phrenology was popular in the 19th century, but it was also very controversial.  However, the doctrine of functional specialization gathered support from a number of case studies that appeared in the 19th century medical literature, which seemed to indicate that particular mental functions were performed by particular brain structures.  

Historically, perhaps the most important of these studies concerned various forms of aphasia -- a neurological syndrome involving the loss or impairment of various speech and language functions.  In neither Broca's aphasia (an impairment of speech production) nor Wernicke's aphasia (an impairment of speech comprehension) is there any damage to the corresponding skeletal musculature: the source of the psychological deficit resides in specific lesions in the central nervous system.



Broca's and Wernicke's cases suggested that different functions associated with speech and language were performed by quite different parts of the brain.  

But before Broca and Wernicke, a similar point was made, or at least debated, in the case of Phineas Gage reported by John Martyn Harlow (1848, 1850, 1868), a physician who practiced in rural Vermont in the mid-19th century.  Gage was a railway-construction foreman for the Rutland and Burlington Railroad whose job it was to tamp blasting powder into rock.  At 4:30 PM on September 13, 1848 (a Wednesday), near Duttonville (now Cavendish), Vermont, Gage accidentally set off an explosion which drove his tamping iron through his left eye socket and out the top of his head.  Gage survived the accident, was treated by Harlow, and after 12 weeks of recuperation returned to his home in Lebanon, New Hampshire.

Certainly the most remarkable fact about the accident is that Gage lived to tell about it.  However, Harlow (1868) also reported that Gage underwent a marked personality change following his accident.



In short, Gage was "No longer Gage". 

The Definitive Skinny on Gage

Over the years, a great deal of mythology and misinformation has been perpetrated concerning the Gage case.  The definitive account of Gage's life, and a masterly survey of the role his case played in debates over phrenology and neuroscience, has been provided by Malcolm B. Macmillan, a distinguished Australian psychologist who is also the author of the definitive critique of Sigmund Freud's psychoanalysis.


The mythology is epitomized by an article by Hanna Damasio et al. (Science, 1994), which revived interest in the Gage case in the context of Antonio Damasio's theories of the role of prefrontal cortex in rationality and emotion -- to which Gage's allegedly "profound personality changes" (p. 1102), in which "Gage was no longer Gage" (Harlow, 1868, p. 327), are relevant.

Damasio's paper made an important contribution to our understanding of Gage by using skull measurements (Gage's skull, though not his brain, is preserved at Harvard Medical School) and modern brain-imaging technology to reconstruct his injury.  Harlow, based on his physical examination, had concluded that Gage's right hemisphere was undamaged.  Damasio et al. concluded that the damage included the prefrontal cortex of both hemispheres.

But at the same time, they built their account of Gage's behavior largely on secondary sources, which themselves -- as Macmillan has shown -- grossly exaggerate the extent to which Gage had "taken leave of his sense of responsibility" and no longer could be "trusted to honor his commitments" (p. 1102).

All we really know about Gage from first-hand medical examination comes from five reports: three from Harlow (two of them written close to the time of the incident, the third written 20 years later), and two from Bigelow, both written years after Gage's death.

There is no question that Gage was a responsible individual before his accident.  He was a foreman, after all, and railroads don't put high explosives in the hands of irresponsible, untrustworthy individuals.  While it is true that he lost his position as foreman, he did attempt to return to work in 1849, but eventually began to suffer from epileptic seizures presumably brought on by his brain damage.  

  • From 1849 to 1851 he traveled around New England, exhibiting himself with his tamping iron.
    • It is possible that he and his tamping iron were on display at Barnum's Museum in New York,  forerunner to the Ringling Brothers Barnum & Bailey circus.  However, Macmillan's search of the Barnum archives turned up no definitive documentation of this. 
  • Gage then found work handling horses in livery stables and stagecoach lines -- first back in New England (1851-1852), and then in Chile (1852-1859).
    • What was he doing in Chile?  He went there as an employee of another New Englander who was establishing a stage-coach business.
  • In 1859 he emigrated to San Francisco, where his mother was living with his sister, Phebe, and her husband, David Dustin Shattuck.  By that time, of course, the Gold Rush was essentially over, and he worked as a farm laborer. 
  • While living in the Bay Area, Gage began to suffer epileptic seizures, and he died on May 21, 1860 (not 1861, as reported in some textbooks). 
  • Gage was originally buried, perhaps with his tamping iron, at Lone Mountain Cemetery, on Laurel Hill, near the site of a branch campus of UCSF. 
    • When the cemetery was closed, his remains were removed to Cypress Abbey cemetery in Colma, and interred in the Laurel Hill Mound marked by the Pioneer Monument. 
  • In 1867, Gage's body was exhumed and in 1868 his skull was taken (by his brother-in-law, David Dustin Shattuck, then a member of the San Francisco Board of Supervisors and married to Gage's sister, Phebe) to Harvard Medical School where it can be seen today.  Unfortunately, Gage's brain was not preserved.
    • Here, too, lies a story.  David Dustin Shattuck was one of many in the extended Shattuck family who emigrated from New England to California during the Gold Rush.  Another Shattuck, Francis K., was one of the founders of Berkeley -- Shattuck Avenue is named after him.  So if anyone wants to take Gage's move to California as evidence of his irresponsibility (I can imagine this argument), they'll have to apply it to the Shattucks, as well.

It should be noted that most of the contemporary accounts of Gage's personality change, including Harlow's claim that "Gage was no longer Gage", were published years after his death.  And, frankly, we don't know very much about Gage's pre-accident personality: he was just another man working on the railroad.  The very first accounts, from Harlow (1848, 1849) and Bigelow (1850), focused mostly on the fact that Gage survived the accident -- although Harlow (1848) did refer to difficulties in controlling Gage.  Later, his case was presented as evidence that specific brain damage did not cause specific loss of function -- evidence supporting attacks on phrenology and other claims about localization of function (Ferrier, 1876).  Macmillan has uncovered evidence that Harlow himself was interested in phrenology, and it may well be that his reports of Gage's personality change were colored by his commitment to phrenological doctrine.  It wasn't until 1878 that Ferrier himself, in his Gulstonian Lectures to the Royal College of Physicians, offered Gage as evidence favoring localization -- based on evidence from Harlow's later, posthumous, reports.  But even in his 1868 report, Harlow noted that Gage was fond of pets, children, horses, and dogs.

Gage's full story is told in Macmillan's book, An Odd Kind of Fame: Stories of Phineas Gage (MIT Press, 2000).   Those who don't have time for the whole book will find a useful precis in an earlier paper: "A Wonderful Journey Through Skulls and Brains: The Travels of Mr. Gage's Tamping Iron" (Brain & Cognition, 5, 67-107, 1986).


See also:

  • "Phineas Gage: A Case for all Reasons" in Classic Cases in Neuropsychology, (1994).
  • "Restoring Phineas Gage: A 150th Retrospective" (Journal of the History of Neuroscience, 2000).
  • "Rehabilitating Phineas Gage (with M. Lena), Neuropsychological Rehabilitation, 2010.
After Macmillan's book was published, Smithsonian Magazine published what appears to be the only known photograph of Gage (see above), posing with his tamping iron -- maybe taken during that short period in which Gage traveled around New England ("Finding Phineas" by Steve Twomey, Smithsonian, 1/2010).

Macmillan also maintains an excellent webpage devoted to Gage, which contains much interesting material not in the book: http://www.deakin.edu.au/hbs/GAGEPAGE/.

According to some interpretations, Gage's case was consistent with phrenology, because his damage seemed to occur in the area of the frontal lobes associated with the faculties of Veneration and Benevolence.  Others disputed the exact site of the damage, and others the magnitude of the personality change.  Subsequently, Gage became a linchpin of arguments about phrenology in particular and functional specialization in general.  There is no question that Gage was injured, and that he subsequently underwent a personality change.  But Macmillan shows that most popular and textbook accounts of his personality change are not corroborated by the actual evidence reported by Harlow and others at the time.  And, Macmillan discovered, Harlow himself was a kind of closeted phrenologist, so his representations and interpretations of Gage's personality changes may well have been biased and colored by his theoretical commitments.

 

Gage and Lobotomy

A common interpretation of Gage is that he suffered the accidental equivalent of a prefrontal lobotomy, and it is sometimes held that Gage inspired the development of psychosurgery in the 1930s and 1940s, which (in those days before psychotropic drugs) often sought to make psychiatric patients more controllable by destroying portions of their frontal lobes.

  • In 1935, Egas Moniz, a Portuguese neurologist, introduced the prefrontal leucotomy for the treatment of intractable mental illness.  In this procedure, a neurosurgeon opened up the skull of a patient and surgically severed the connections between the prefrontal cortex and the rest of the brain.  He reported that 7 of 20 "agitated" patients so treated became calmer, and the procedure began to catch on.  In 1949 Moniz shared the Nobel Prize in Physiology or Medicine with Walter Rudolf Hess, a Swiss physiologist who did pioneering research on the role of the diencephalon (a forebrain system consisting of the thalamus and hypothalamus) in the autonomic nervous system.  (Hess's work has stood the test of time, but Moniz's award ranks high on anyone's list of Nobel Errors.)
  • In the late 1940s, Walter Freeman, a neurologist practicing in Washington, D.C., and his colleague James Watts introduced a new version of the procedure, known as transorbital lobotomy, which could be performed as an outpatient procedure without requiring surgery to open patients' skulls to reveal their brains.  In their procedure, the patient was first anesthetized with a variant of electroconvulsive therapy, which rendered the patient unconscious.  Then the surgeon used a hammer to insert a household ice pick -- no kidding! -- through the upper portions of the patient's eye sockets.  Then the surgeon would simply swish the ice pick around in the patient's brain, destroying what we now think of as the orbitofrontal cortex.  In the early 1950s, as many as 5,000 lobotomies were performed every year for the treatment of anxiety, depression, and substance abuse.  Freeman lost his surgical privileges in the 1960s.

Malcolm Macmillan's historiography of Gage (see above) shows pretty conclusively that this was not the case.  In the first place, the "new" Gage's symptoms were precisely the opposite of what prefrontal lobotomy was supposed to accomplish.

There are many film dramatizations of prefrontal lobotomies, including:

  • A Fine Madness, starring Sean Connery.
  • One Flew Over the Cuckoo's Nest, starring Jack Nicholson, based on the novel by Ken Kesey.
  • A Hole in One" (Richard Ledes, 2004), starring Michele Williams who seeks a lobotomy to rid herself of the anguish caused by her the criminal boyfriend (Meat Loaf), and including a character based on Freeman himself (Bill Redmond).

For definitive histories of lobotomy and other forms of psychosurgery, see two books by Eliot Valenstein (himself a distinguished neuroscientist):

  • Brain Control: A Critical Examination of Brain Stimulation and Psychosurgery (1977)
  • Great and Desperate Cures: The Rise and Decline of Psychosurgery and Other Radical Treatments for Mental Illness (1986)

The fact is, we simply know too little about Gage's personality, and his injury, to draw any firm conclusions about personality, social behavior, and the brain.  In the final analysis, the Gage case is only suggestive of what was to come.  But just as Broca's patient Tan and Scoville and Milner's patient H.M. became the index cases for cognitive neuroscience, Phineas Gage deserves a place as the index case for what has become social neuroscience.

 

Interestingly, and perhaps somewhat prophetically, Gage is an ancestor of Fred Gage, a Salk Institute neuroscientist who studies neurogenesis.  After Smithsonian Magazine published a newly discovered photograph of Gage (01/2010), two of his other descendants independently contributed yet another image of him with his tamping-iron (published in the magazine in 03/2010).  The original image was a daguerreotype, or something like it, which is a mirror image of the real object.  Here, the image has been flipped left-to-right, to correctly show that Gage's injury was to his left eye.



Methods of Neuroscience

So given the hypotheses that brain damage has consequences for social cognition, and that there are brain systems that are specialized for social-cognitive tasks, how would we know?  Neuropsychology offers a number of techniques for studying functional specialization.




Historically, the most important method in neuropsychology has been the study of neurological patients with brain lesions, in which scientists observe the mental and behavioral consequences of damage to some portion of the brain.

Whether lesions occur accidentally or deliberately, they are, for all intents and purposes, permanent.  Temporary, reversible lesions can sometimes be created by techniques such as hypothermia, spreading depression, or transcranial magnetic stimulation (discussed below).
But there are other techniques that do not cause permanent damage.

Here are some combined MRI and PET images of the brain of a single subject engaged in different kinds of thinking, collected by Dr. Hanna Damasio of the University of Iowa College of Medicine, and published in the New York Times Magazine (May 7, 2000).  Red areas indicate increased brain activation, as indicated by blood flow, while areas in purple indicate decreased brain activation.





Brain imaging techniques are gaining in popularity, especially in human research, because they allow us to study brain-behavior relations in human subjects who have intact brains.  But still, some of the clearest evidence for mind-brain relations is based on the lesion studies that comprise classic neuropsychological research.

 

Functional Dissociations

The Case of H.M.  As an illustration of the role that neuropsychological evidence from brain-damaged patients can play in understanding functional specialization, consider the case of a patient, known to science by the initials H.M., who suffered from a severe case of epilepsy involving frequent, uncontrollable seizures that seemed to arise from his temporal lobes.  As an act of desperation, H.M. agreed to surgical excision of the medial (interior) portions of his temporal lobes -- an operation that also removed the hippocampus and adjacent portions of the limbic system.  After surgery, he behaved normally with respect to locomotion, emotional responses, and social interaction -- except that he had no memory.  H.M. displayed a retrograde amnesia for events occurring during the three years prior to the surgery.  But more important, he displayed an anterograde amnesia for all new experiences.  He could not remember current events and he could not learn new facts.  The retrograde amnesia remitted somewhat after the surgery, but the anterograde amnesia never did.  As of 2003, H.M. was still alive, but he remembered nothing of what had happened to him since the day of his surgery in 1953.  He read the same magazines, and worked on the same puzzles, day after day, with no recognition that they were familiar.  He met new people, but when he met them again he did not recognize them as familiar.

Work with H.M. and similar patients reveals a brain system important for memory.  This circuit, known as the medial temporal-lobe memory system, includes the hippocampus and surrounding structures in the medial portion of the temporal lobe.  This circuit is not where new memories are stored.  But it is important for encoding new memories so that they can be retrieved later.  H.M. lacked this circuit, and so he remembered nothing of his past since the surgery.

For an insightful portrayal of H.M.'s life, read Memory's Ghost by Philip J. Hilts.

The Case of S.M.  Another example comes from the patient S.M., who suffered damage to the amygdala, another subcortical structure, but no damage to the hippocampus and other structures associated with the medial temporal lobes.  S.M. suffers no memory deficit, nor any other problems in intellectual functioning (Adolphs, Tranel, Damasio, & Damasio, 1994).  But she does display gross deficits in emotional functioning: a general loss of emotional responses to events; an inability to recognize facial expressions of emotion; and an inability to produce appropriate facial expressions herself.  Research on S.M. and similar patients suggests that the amygdala is part of another brain circuit that is important for regulating our emotional life, particularly fear (LeDoux).

The contrast between patients H.M. and S.M. illustrates the logic of dissociation that is central to cognitive neuropsychology.  By "dissociation" we simply mean that some form of brain damage (or, for that matter, any other independent variable) affects one aspect of task performance but not another.  Thus, "dissociation" is analogous to the interaction term in the statistical analysis of variance.  In the case of H.M., damage to the hippocampus impairs performance on memory tasks but not emotional tasks; in the case of S.M., damage to the amygdala impairs performance on emotional tasks but not memory tasks.  Therefore, we can conclude that the hippocampus is part of a brain system that is specialized for memory, while the amygdala is part of a separate (dissociable) brain system that is specialized for emotion.  Of course, this assumes that the experimenter has eliminated potential experimental confounds.
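The analogy to a statistical interaction can be made concrete with a toy calculation.  The scores below are invented for illustration (they are not data from H.M. or S.M.); the point is only that in a 2 x 2 (lesion x task) design, a double dissociation shows up as a large crossover interaction:

```python
# Toy illustration of a double dissociation as a (lesion x task) interaction.
# Scores are invented: higher = better performance (arbitrary units).
scores = {
    ("hippocampal damage", "memory task"):  30,   # impaired
    ("hippocampal damage", "emotion task"): 90,   # spared
    ("amygdala damage",    "memory task"):  90,   # spared
    ("amygdala damage",    "emotion task"): 30,   # impaired
}

# The 2 x 2 interaction contrast is (a1b1 - a1b2) - (a2b1 - a2b2); a large value
# means the effect of the lesion depends on the task -- i.e., the two brain
# structures are dissociable.
interaction = ((scores[("hippocampal damage", "memory task")]
                - scores[("hippocampal damage", "emotion task")])
               - (scores[("amygdala damage", "memory task")]
                  - scores[("amygdala damage", "emotion task")]))
print(interaction)   # -120: a large crossover interaction -- a double dissociation
```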


Brain-Imaging Techniques

The fact that the nervous system operates according to certain electrochemical principles has opened up a wide range of new techniques for examining the relationship between brain and mind. Some of these techniques are able to detect lesions in the brain, without need for exploratory surgery or autopsy. Others permit us to watch the actual activity of the brain while the subject performs some mental function.

CT (CAT) Scans.  In x-ray computed tomography (otherwise known as CAT scan, or simply CT), x-rays are used to produce images of brain structures. This would seem to be an obvious application of x-ray technique, but there are some subtle problems: (1) radiation can damage brain tissue; (2) brain tissue is soft, and so x-rays pass right through it; and (3) x-rays produce two-dimensional images, and so it is hard to distinguish between overlapping structures (that is, you can see the edges of the structures, but you can't detect the boundary between them). The CT scan uses extremely low doses of x-rays, too weak to do any damage, or to pass through soft tissue. It also takes many two-dimensional images of the brain, each from a different angle. Then a computer program takes these hundreds of individual two-dimensional images and reconstructs a three-dimensional image (this requires a very fast, very powerful computer). CT scans allow us to determine which structures are damaged without doing surgery, or waiting for the patient to die so that an autopsy can be performed.
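The reconstruction step can be sketched in a few lines of code.  The snippet below is a simplified, unfiltered backprojection on a made-up image (real CT uses filtered backprojection or iterative methods, and measures x-ray attenuation rather than summed pixel values); it is meant only to show how many one-dimensional projections, taken at different angles, combine into a two-dimensional image:

```python
# Simplified sketch of tomographic reconstruction by (unfiltered) backprojection.
# Real scanners use filtered backprojection or iterative methods; this toy version
# only illustrates how 1-D projections at many angles combine into a 2-D image.
import numpy as np
from scipy.ndimage import rotate

# A made-up "slice": a 64 x 64 image with a bright square "lesion" in it.
phantom = np.zeros((64, 64))
phantom[20:30, 35:45] = 1.0

angles = np.arange(0, 180, 5)          # projection angles, in degrees

# Forward step (what the scanner measures): rotate the image to each angle
# and sum along one axis to get a 1-D projection.
projections = [rotate(phantom, a, reshape=False).sum(axis=0) for a in angles]

# Backprojection: smear each 1-D projection back across the image plane at its
# angle of acquisition, and add all the smears together.
reconstruction = np.zeros_like(phantom)
for a, p in zip(angles, projections):
    smear = np.tile(p, (phantom.shape[0], 1))      # spread the projection out
    reconstruction += rotate(smear, -a, reshape=False)

print(reconstruction.shape)            # (64, 64): a blurry estimate of the phantom
```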

Magnetic Resonance Imaging (MRI).  The technique of magnetic-resonance imaging (MRI) is based on the fact that some atoms, including hydrogen atoms, act like tiny magnets: when placed in a magnetic field, they will align themselves along lines of magnetic force. Bombarding these atoms with radio waves will set them spinning, inducing a magnetic field that can be detected by sensitive instruments. In a manner similar to CT, readings from these instruments can be used to reconstruct a three-dimensional image of the brain. However, this image has a much higher resolution than CT, and so can detect much smaller lesions.   

MRI is such an important advance in medical technology that Nobel prizes have been awarded on several occasions for work relating to it.  The 2003 Nobel Prize for Medicine or Physiology was awarded to Paul C. Lauterbur, a physical chemist at the University of Illinois, and Peter Mansfield, a physicist at the University of Nottingham, in England, for basic research that led to the development of the MRI.    Lauterbur published a pioneering paper on 2-dimensional spatial imaging with nuclear magnetic resonance spectroscopy (when NMR was picked up by medicine, the word "nuclear" was dropped for public-relations reasons, so that patients would not think that the technique involved radiation -- which it doesn't).  Mansfield later developed a technique for 3-dimensional scanning.  Eager young scientists should note that Lauterbur's paper was originally rejected by Nature, although the journal eventually published a revision.  And if that's not inspiration enough, Mansfield dropped out of the British school system at age 15, returning to college later; now he's been knighted!  

Some controversy ensued because the prize committee chose not to honor the contributions of Raymond Damadian, a physician, inventor, and entrepreneur, who made the initial discovery that cancerous tissue and normal tissue give off different magnetic resonance signals.  Damadian also proposed that the technique be used for scanning tissues inside the body, and made the first working model of an MR scanner (now on display in the Smithsonian National Museum of American History).  Damadian subsequently took out expensive full-page advertisements in the New York Times and other publications to assert his priority.  But even before Damadian, Vsevolod Kudravcev, an engineer working at the National Institutes of Health, produced a working MRI device in the late 1950s by connecting an NMR to a television set: his supervisor told him to get back to his assigned job, and nothing came of his work.  (See "Prize Fight" by Richard Monastersky, Chronicle of Higher Education, 11/07/03.)

The Nobel committee, following its tradition, has been silent on its reasons for excluding Damadian.  Everyone seems to agree that Lauterbur and Mansfield's work was crucial to developing MRI as a clinically useful technique.  Still, no one doubts the importance of Damadian's pioneering work, either, and the Nobel rules make room for up to three recipients of a prize.  The decision may just reflect an admittedly unreliable historical judgment.  On the other hand, science has its political elements, and there could have been other reasons for denying Damadian the prize.  It is possible that Damadian was denied the prize because he is a practicing physician and business entrepreneur rather than an academic scientist.  Perhaps it is because he is relentlessly self-promoting (as in his newspaper ads, which are without precedent in Nobel history), famously litigious (he won a settlement of $129 million from General Electric for patent infringement), and rubs people the wrong way.  Perhaps the Nobel committee did not want to give an award for biology, and thus at least indirect legitimacy, to someone who rejects the theory of evolution, the fundamental doctrine of modern biology, believes in a literal reading of the Bible, and has lent his support to creationism.

Positron Emission Tomography (PET).  CT and MRI are new ways of doing neuroanatomy: they help us to localize brain damage without resort to surgery or autopsy. And that's important. But we'd also like to be able to do neurophysiology in a new way: to watch the brain in action during some mental operation. A technique that permits this is positron-emission tomography (PET). This technique is based on the fact that brain activity metabolizes glucose, or blood sugar. A harmless radioactive isotope is injected into the bloodstream, which "labels" the glucose. This isotope is unstable, and releases subatomic particles called positrons; the positrons collide with other subatomic particles called electrons, emitting gamma rays that pass through the skull and, again, are detected by sensitive instruments. When a particular portion of the brain is active, it metabolizes glucose, and so that part of the brain emits more gamma rays than other parts. Fast, powerful computers keep track of the gamma-ray emissions, and paint a very pretty picture of the brain in action, with different colors reflecting different levels of activity.

Functional MRI (fMRI).  This is a variant of MRI which tracks changes in blood oxygenation, and so has a much finer temporal resolution than standard (structural) MRI.  Like PET, it can be used to record the activity of small regions of the brain over relatively small intervals of time.  However, the temporal resolution of fMRI is finer than that of PET, permitting investigators to observe the working brain over shorter time scales.  Whereas most brain-imaging studies have to "borrow" time on machines intended for clinical use, UC Berkeley has a very powerful fMRI machine dedicated entirely to research purposes.

Event-Related Potentials (Evoked Potentials).  As you can imagine, CT, MRI, and PET are all very expensive, and require lots of equipment (in the case of PET, for example, a small nuclear reactor to produce the radioactive isotope). A very promising technique that does not have these requirements is based on the electroencephalogram (EEG), and is known as event-related potentials (ERPs, sometimes known as evoked potentials). In ERP, as in conventional EEG, electrodes are placed on the scalp to record the electrical activity of the neural structures underneath. Then, a particular stimulus is presented to the subject, and the response in the EEG is recorded. If you present the stimulus just once, you don't see much: there are lots of neurons, and so there's lots of noise. But in ERP, the brain's response to the same (or very similar) stimulus is recorded over and over again: when all the responses are averaged together, a particular waveform appears that represents the brain's particular response to that particular kind of event. The ERP has several components. Those that lie in the first 10 milliseconds or so reflect the activity of the brainstem; those that lie in the next 90 milliseconds, up to 100 milliseconds after the stimulus, reflect the activity of sensory-perceptual mechanisms located in the primary sensory projection area corresponding to the modality of the stimulus (seeing, hearing, touch, smell, etc.); those that lie beyond 100 milliseconds reflect the activity of cortical association areas. Interestingly, the characteristics of these components vary with the nature of the subject's mental activity. For example, the N100 wave, a negative potential occurring about 100 milliseconds after the stimulus, increases if the stimulus was in the focus of the person's attention, and decreases if it was in the periphery. The N200 wave, another negative potential appearing 200 milliseconds after the stimulus, is elicited by events that violate the subject's expectations. The P300 wave, a positive potential about 300 milliseconds out, is increased by some unexpected, task-relevant event (such as a change in the category to which the stimulus belongs); it seems to reflect a sort of "updating" of the subject's mental model of the environment. And the N400 wave, a negative potential about 400 milliseconds after the stimulus, is increased by semantic incongruity: for example, when the subject hears a nonsensical sentence.
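The signal-averaging logic behind the ERP can be demonstrated with a few lines of simulated data.  Everything below is made up (the "true" waveform, the noise level, the number of trials); the point is just that a small evoked response invisible on any single trial emerges when many trials are averaged:

```python
# Toy demonstration of ERP signal averaging: a small evoked response buried in
# noise on any single trial emerges when many trials are averaged together.
import numpy as np

rng = np.random.default_rng(0)
time_ms = np.arange(0, 600)            # a 600-ms epoch at 1-ms resolution

# A made-up "true" evoked response (in microvolts): a negative deflection near
# 100 ms (like the N100) and a positive one near 300 ms (like the P300).
true_erp = (-2.0 * np.exp(-(time_ms - 100) ** 2 / (2 * 15 ** 2))
            + 4.0 * np.exp(-(time_ms - 300) ** 2 / (2 * 40 ** 2)))

n_trials = 200
noise = rng.normal(0, 10, size=(n_trials, time_ms.size))   # ongoing EEG "noise"
epochs = true_erp + noise                                   # 200 noisy single trials

average = epochs.mean(axis=0)          # averaging cancels the noise, leaving the ERP

# Residual noise shrinks roughly with the square root of the number of trials.
print(round(noise[0].std(), 1), round((average - true_erp).std(), 1))
```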

ERPs can be recorded from the scalp as a whole, or they can be collected individually at lots of separate sites. In the latter case, the availability of powerful, high-speed computers permits a kind of brain-imaging: we can see where the ERP changes are the largest (or the smallest); and we can see how the ERP changes move with time. In this way, we can see how the brain shifts from one area to another when processing a stimulus or performing a task.

TMS.  CAT, PET, MRI, and ERP are all non-invasive, passive, recording technologies: that is, they employ sensors on the outside of the skin to record activity that naturally occurs under the skin (inside the skull, to be exact) when subjects are engaged in various tasks.  Transcranial magnetic stimulation (TMS) is different, because it creates the functional equivalent of a temporary, reversible lesion in a discrete portion of brain tissue without requiring surgery to open up the scalp and skull (other techniques for creating temporary, reversible lesions, such as hypothermia and spreading depression, require surgical access to brain tissue).  In TMS, a magnetic coil is applied to the scalp, and a magnetic pulse is delivered. This pulse can approach the magnitude of 2 Tesla, about the strength of the MRIs used clinically, but not as strong as the pulses produced by the machines used for research at UC Berkeley.  The rapidly changing magnetic field induces an electrical field on the surface of the cortex.  This field in turn generates neural activity which is superimposed on, and interferes with, the ongoing electrical activity of nearby portions of the brain.  This temporary disruption of cortical activity, then, interferes with the performance of tasks mediated by parts of the brain near the site of application.  For example, TMS applied over a particular region of the occipital lobe interferes with visual imagery, supporting findings from other brain-imaging techniques that striate cortex is involved in visual imagery, as it is in visual perception.  TMS is useful because it has better temporal resolution than, and spatial resolution comparable to, other available techniques, such as PET and fMRI.  That is, it can target the activity of relatively small areas of the brain over very small units of time.

Whatever the method, brain-imaging data is analyzed by some variant of the method of subtraction.  Brain images are generated while subjects perform a critical task, and then while performing a control task.  The pattern of activation associated with the control task is subtracted from the pattern associated with the critical task, leaving as a residue the pattern of brain activation that is specifically associated with the critical task.  Of course, the success of the method depends on the tightness of the experimental controls.  To the extent that the comparison between critical and control tasks is confounded by uncontrolled variables, the pattern of activation associated with the critical task may well be artifactual.  
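In code, the subtraction itself is trivial; the hard part is the experimental control described above.  The sketch below uses simulated activation maps (a tiny 8 x 8 x 8 "brain", made-up signal and noise levels, and a crude fixed cutoff in place of proper voxel-wise statistics):

```python
# Toy illustration of the method of subtraction, using simulated activation maps.
import numpy as np

rng = np.random.default_rng(1)
shape = (8, 8, 8)                                   # a tiny "brain" of 512 voxels

baseline = rng.normal(100, 5, size=shape)           # activation during the control task
critical = baseline + rng.normal(0, 5, size=shape)  # activation during the critical task
critical[2:4, 2:4, 2:4] += 25                       # a region engaged only by the critical task

difference = critical - baseline                    # the subtraction image
active = difference > 15                            # crude cutoff; real studies use
                                                    # voxel-wise statistics, not a fixed threshold
print(int(active.sum()), "voxels survive the subtraction")
```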

Brain images are wonderful things, but 

brain imaging is no excuse for poor experimental methodology.

 

 

 The Modularity of the Social Mind

With the doctrine of modularity in hand, and neuroimaging techniques like fMRI, a new generation of social psychologists have embarked on their own "wonderful journey through skulls and brains" -- to discover the neural structures associated with various social-cognitive functions.  But what exactly are they looking for?


The Theory of Multiple Intelligences

The application of the idea of modularity to social cognition and behavior was anticipated in the theory of multiple intelligences proposed by Howard Gardner (1983), a cognitive psychologist at Harvard -- and author of The Mind's New Science (Basic Books, 1985), an excellent account of the cognitive revolution in psychology and the founding of interdisciplinary cognitive science.  The theory was proposed in the context of the debate over the structure of human intelligence.  According to Gardner, intelligence is not a single ability, such as the g (or "general intelligence") favored by Spearman and Jensen, but rather a collection of abilities, much like the primary mental abilities of Thurstone or Guilford's structure of intellect.  The system of multiple intelligences proposed by Gardner consists of seven separate abilities: linguistic, logical-mathematical, spatial, musical, and bodily-kinesthetic intelligence, plus two "personal" intelligences -- interpersonal and intrapersonal.

Whereas most prior theories of intelligence relied on data generated by standardized psychological tests, Gardner offers a wider array of evidence for his system.  In particular, he is interested in "savants" (like the character portrayed by Dustin Hoffman in the film Rain Man), who have exceptional skills in particular domains, coupled with average or subnormal abilities in other domains, and in cases of specific intellectual deficits caused by brain damage (such as Broca's and Wernicke's cases of aphasia).

Gardner's theory is a psychological theory of intelligence, not a theory of the neural correlates of intelligence.  However, because he employs neurological evidence to bolster his case for multiple, independent intelligences, his theory implies that certain aspects of personality and social interaction -- including, presumably, certain aspects of social cognition -- are performed by specialized brain systems.


BAS, BIS -- and FF(F)S

Based mostly on animal research, Gray (1972, 1981, 1982, 1987) argued that there are two basic dimensions of personality, anxiety and impulsiveness, which have their biological bases in three separate and independent brain systems: a behavioral approach (or activation) system (BAS), a behavioral inhibition system (BIS), and a fight-flight system (FFS, later elaborated as the fight-flight-freeze system, FFFS).

Gray's "biopsychological" system, in turn, laid the foundation for Carver and Schier's (1986, 1998, 2002) cybernetic control theory approach to self-regulation, which is built on the interactions between BAS and BIS (FFS having been left by the wayside) -- as well as many other theories.

Carver and White (1994) developed questionnaires for the measurement of BAS and BIS activity, while Jackson (2009) developed alternative scales for BIS and BAS, and added scales for the three different aspects of FFFS.


Other Modular Proposals

In a discussion of the neural substrates of social intelligence, Taylor and Cadet (1989) proposed that there are at least three different social brain subsystems:

As part of an influential theory of childhood autism, to be discussed in the lectures on Social-Cognitive Development, Baron-Cohen (Simon, that is, cousin of Sacha, the comedian) proposed in 1995 that there are four elements of mindreading -- an intentionality detector, an eye-direction detector, a shared-attention mechanism, and a theory-of-mind mechanism -- each organized as a mental module, and each associated with a different neural substrate.



And in a popular treatment of the literature on social intelligence, Daniel Goleman (2006) proposed several functions of the social brain, each presumably modular in nature:




The most extensive proposal concerning social-cognitive modules has come from Ray Jackendoff (1992, 1994), a linguist and cognitive scientist at Brandeis University, who was directly influenced by Fodor.  While Fodor was concerned only with mental modules for the input and output functions associated with vision and language, his framework left room for central modules as well, and Jackendoff proposed a number of candidates:



In Jackendoff's view, the social cognition module processes information concerning the identity of other persons, and their relations to oneself.  

Jackendoff's arguments for a mental faculty of social cognition are mostly philosophical in nature:

None of these arguments is completely convincing, but they do make an interesting case that we might have mental modules specialized for social functions as well as for perceptual and linguistic functions.  And of course, mental modules imply brain modules, or brain systems.  Hence, Jackendoff's arguments for a mental faculty of social cognition are also arguments for the functional specialization of brain structures, or systems of brain structures, for social cognition as well.

Fronto-Temporal Dementia

Everybody's familiar with Alzheimer's disease (AD), a chronic neurodegenerative form of dementia mostly (though not exclusively) associated with aging, and primarily affecting the temporal and parietal lobes of the brain.  The core symptoms of AD are "cognitive" in nature, especially affecting short-term as well as long-term memory -- which is why AD is often diagnosed with tests of memory, and AD patients are commonly cared for in facilities for "memory care".  

But there's another, less-well-known form of dementia, known as Pick's disease or fronto-temporal dementia (FTD), where the degeneration primarily (as the name implies) affects the frontal and temporal lobes.  Whereas AD is commonly diagnosed in the elderly, FTD is the most common form of dementia affecting people younger than 60.  And while the primary symptoms of AD are cognitive in nature, the primary symptoms of FTD are social and emotional (Levenson & Miller, 2007).  Call it a "social" dementia.  FTD was first described by Arnold Pick, a Czech neurologist, in 1892, but it went largely unnoticed as medicine, science, and health policy focused on the challenge of AD.

The characteristic symptoms of FTD are frequently mistaken for those of depression -- or even of "midlife crisis":

In terms of neuropathology, FTD appears to be associated with loss of large, spindle-shaped "von Economo neurons", which congregate in the anterior cingulate cortex, the frontal portion of the insula, the frontal pole, the orbitofrontal cortex, and the temporal pole.  At the cellular level, FTD seems to involve a build-up of tau proteins, but not the beta-amyloid that is characteristically seen, along with tau, in AD.



The suggestion is that these structures constitute a network that forms the neural basis of personality and character.  Degeneration of the neurons in this network leads to a breakdown in the individual's normal personality -- with the specific aspects lost depending on the site of the most serious damage.  Of course, these same centers are involved in various cognitive functions, such as judgment and decision-making (JDM), raising questions about whether FTD is specifically related to personality.  Then again, from a cognitive point of view, personality and JDM are closely related.

Many of the leading experts on FTD are associated with the University of California, including Profs. Robert Levenson and Robert Knight at UC Berkeley, and Profs. Bruce Miller and William Seeley at UCSF.


Identifying Modules in the Social Brain

All of these proposals are more or less abstract: their authors have suggested that certain social-cognitive functions are modular in nature, without taking the next step of indicating where the modules are.  However, brain-imaging research has begun to identify brain modules, or systems, associated with particular social-cognitive functions.

Early in the development of social neuroscience, for example, Nancy Kanwisher and her colleagues claimed to have identified three such modules:




By 2007, Lieberman had identified more than 20 discrete brain areas that are activated when subjects perform various social-cognitive tasks.  Based on spatial proximity in the brain, he classified these modules into one of four categories, based on two core processes:




These days we talk about networks, or systems, not just modules, but the argument amounts to the same thing: that there are some brain systems, or networks, that are specifically dedicated to various cognitive tasks, and these include social cognition.  And these networks are identified by the same brain-imaging technology.  In a popular introduction to social neuroscience, Lieberman argues for several networks dedicated to social interaction (Social: Why Our Brains Are Wired to Connect, 2013). 

However, identification of brain areas associated with various social-cognitive functions is only as good as the experimental paradigms used to tap those functions.  This is not a trivial matter, as illustrated by certain areas of research.


Searching for the Self in the Brain

You will remember, from the lectures on The Self, that neuropsychological data, especially from amnesia patients, shed light on the self as a memory structure.  As summarized by Klein & Kihlstrom (1998), for example, it appears that amnesic patients retain accurate knowledge of their personality characteristics (semantic self-knowledge), even though they do not remember events and experiences in which they displayed those characteristics (episodic self-knowledge).

More recently, a variety of investigators have used brain-imaging technologies to search for a module specifically dedicated to processing self-relevant information.

In a pioneering experiment, Craik et al. (1999) employed PET to explore the neural correlates of the self-reference effect (SRE) in memory.  

The SRE was originally uncovered by Rogers and his colleagues, in a variant on the levels of processing (LOP) paradigm for the study of human memory (Craik & Lockhart, 1972).  In a typical LOP experiment, subjects are asked to make various judgments concerning words:

When the subjects are surprised with a test of memory for the stimulus words, the usual finding is that items subjected to semantic judgments are remembered best, and items subjected to orthographic judgments are remembered worst, with items subjected to acoustic judgments somewhere in-between.  The general explanation of the LOP effect is that "deep" semantic processing brings the stimulus items into contact with a rich, elaborate structure of pre-existing memory.  By contrast, the relatively "shallow" orthographic and acoustic processing does not contact rich, elaborate memory structures.

Rogers et al. employed trait adjectives as their stimulus materials.  More important, they added a fourth condition to the standard LOP paradigm:

On the memory test, Rogers et al. replicated the standard LOP effect.  But more important, the self-referent orienting task produced an even greater increment in memory.  Based on the standard interpretation of the LOP, Rogers et al. concluded that the self is a very rich, elaborate memory structure -- perhaps the richest, most elaborate memory structure of all.
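To make the logic of the paradigm concrete, here is a minimal sketch, in Python, of how recall would be tabulated by orienting task in an LOP/self-reference experiment.  The trial records, words, and recall outcomes below are invented purely for illustration -- they are not Rogers et al.'s materials or results.

```python
# Minimal sketch of tabulating recall by orienting task in an LOP / self-reference
# experiment. The trials and recall outcomes are invented for illustration only;
# they are not data from Rogers et al.

from collections import defaultdict

# Each study trial: (word, orienting task); each word is later scored as recalled or not.
study_trials = [
    ("honest",   "self"),          # "Does this word describe you?"
    ("generous", "semantic"),      # "Does this word mean the same as X?"
    ("stubborn", "acoustic"),      # "Does this word rhyme with X?"
    ("polite",   "orthographic"),  # "Is this word printed in capital letters?"
    # ... many more trials in a real experiment
]
recalled = {"honest": True, "generous": True, "stubborn": False, "polite": False}

hits = defaultdict(int)
totals = defaultdict(int)
for word, task in study_trials:
    totals[task] += 1
    hits[task] += int(recalled[word])

for task in ("orthographic", "acoustic", "semantic", "self"):
    if totals[task]:
        print(f"{task:>12}: {hits[task] / totals[task]:.2f} recalled")

# The LOP prediction is orthographic < acoustic < semantic recall; the
# self-reference effect is the further advantage of "self" over "semantic".
```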

 

 

Inspired by the SRE, Craik et al. (1999) presented their subjects, who were Canadian college students, with trait adjectives and scanned their brains while they made one of four decisions about each word:

Because brain activity varies with task difficulty, it was important to show that the tasks were roughly equal in that respect, as measured by response latencies.

 

 

Using the subtraction method, they then sought to determine whether there were any portions of the brain that were differentially active in the self-referent condition, compared to the other conditions, as indicated by regional cerebral blood flow (RCBF) measurements.  In fact, there were some areas where self-reference showed increases, and other areas where self-reference showed decreases, compared to at least some comparison conditions.  But the most important comparison was between the self-referent and other-referent conditions.
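The subtraction logic itself is simple.  Here is a minimal sketch, assuming we already have condition-averaged activation images (arrays of rCBF values per voxel) for the self-referent and other-referent tasks.  The array names, toy image size, and crude threshold are hypothetical stand-ins, not the actual Craik et al. analysis, which used statistical parametric mapping with proper significance tests.

```python
# Minimal sketch of the subtraction method: voxel-wise difference between two
# condition-averaged images. All values here are simulated placeholders.

import numpy as np

rng = np.random.default_rng(0)
shape = (16, 16, 8)                       # a toy image grid, not real PET resolution
rcbf_self  = rng.normal(50, 5, shape)     # mean rCBF per voxel, self-referent task
rcbf_other = rng.normal(50, 5, shape)     # mean rCBF per voxel, other-referent task

difference = rcbf_self - rcbf_other       # the "self minus other" contrast
threshold = 2.0 * difference.std()        # crude cutoff standing in for a statistical test

increases = np.argwhere(difference >  threshold)   # voxels more active for self-reference
decreases = np.argwhere(difference < -threshold)   # voxels less active for self-reference

print(f"{len(increases)} voxels show self > other; {len(decreases)} show self < other")
```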

The general area of activation is indicated on the accompanying picture -- except that this is a view of the left cerebral hemisphere, not the right.  Still, you can get the idea of where the area is.

 


The Craik et al. experiment was important as a pioneering effort to locate the brain systems involved in self-referent processing, but it has some problems.  In particular, everything rests on the comparison of self- and other-reference.  In the Craik et al. experiment, the target of the other-reference task was probably not as familiar, or as well-liked, as the self.  In fact, as early as 1980 Keenan and Baillet had shown that the SRE was matched by an other-reference task, provided that the other person was well-known.  Processing trait adjectives with respect to a less-well-known other, such as Jimmy Carter (for American subjects), produced no advantage compared to standard semantic processing.  The problem is that the "other" in the Craik et al. experiment, Canadian Prime Minister Brian Mulroney, was probably as unfamiliar to their Canadian subjects as President Carter was to American subjects.  Put another way, Craik et al. might have gotten quite different results if they had used a more appropriate comparison condition -- one involving a highly familiar other person.

Ochsner, Beer, and their colleagues (full disclosure: I was one of them!) reported just such a study in 2005, in which self-referent judgments were compared with judgments involving the subjects' best friends (there were also two control conditions involving judgments of positivity and counting syllables).  Again, as in the Craik et al. experiment, it was important to show that the various tasks were comparable in terms of difficulty (as indexed by response latency).   

 

More important, employing the same statistical parametric mapping technique as Craik et al., Ochsner and Beer found no brain area that was significantly more activated by the self-referent processing task than by the other-referent task.  Self-reference produced more activation in the medial prefrontal cortex compared to syllable-counting, but other-reference produced the same effect.  

 

Of course, the logic of these studies depends on the assumption that the SRE is about self-reference.  Unfortunately, research by Klein and his colleagues, summarized in the lecture supplement on The Self, indicates that the SRE is completely confounded with organizational activity.  Put bluntly, it appears that the SRE has nothing to do with self-reference after all.

But I digress.  In fact, between the Craik and the Ochsner/Beer studies a rather large literature had accumulated on the neural substrates of self-reference.  Gillihan and Farah (2005) reviewed a large number of studies involving judgments about the physical self (face recognition, body recognition, agency) and the psychological self (trait judgments, autobiographical memory retrieval, and taking a first-person perspective).  In no instance did self-reference activate a different brain area, compared to other-reference.


Perhaps the self is "just another person" after all!

 

A Left-Hemisphere "Brain Interpreter"?

The specialization of function extends to the two cerebral hemispheres, right and left, which are divided by the longitudinal fissure.   Anatomically, the two hemispheres are not quite identical: on average, the left hemisphere is slightly larger than the right hemisphere.  Most psychological functions are performed by both hemispheres, but there is some degree of hemispheric specialization as well.  In most individuals, Broca's and Wernicke's areas, the parts of the brain most intimately involved in language function, are localized in the left hemisphere -- though in some people the right hemisphere does have some capacity for language as well.

An interesting aspect of hemispheric specialization is contralateral projection, meaning that each hemisphere controls the functions of the opposite side of the body.  Thus, the right hemisphere mediates sensorimotor functions on the left side of the body, auditory function of the left ear, and visual function of the left half-field of each eye.  The left hemisphere does just the opposite.  Ordinarily, the two hemispheres communicate with each other, and integrate their functions, by means of the corpus callosum, a bundle of nerve fibers connecting them.  But in cases of cerebral commissurotomy (a surgical procedure in which the corpus callosum is severed), this communication is precluded.  Remarkably enough, this situation rarely creates a problem: by virtue of eye movements, both hemispheres have access to the contents of both visual fields.  So, the right hemisphere can "see" what's in the left visual field, and the left hemisphere can "see" what's in the right visual field.  

However, if a stimulus is presented to the left (or right) visual field so briefly as to prevent it from being picked up by an eye movement, we can confine its processing to the contralateral hemisphere.  This creates a situation where, quite literally, the left hemisphere does not know what the right hemisphere is doing, and vice-versa.  

Michael Gazzaniga and his colleagues have explored this situation by presenting different pictures to each hemisphere, and then asking split-brain patients to point to an associated picture in a larger array of choices. 

In each case, the left hemisphere, which controls language function, is unaware of why the right hemisphere is doing what it's doing -- and so it makes up a plausible explanation.

On the basis of results such as these, Gazzaniga has proposed that the left hemisphere is the site of a brain interpreter that is specialized for seeking explanations for internal and external events as a basis for making appropriate responses to them.  In Gazzaniga's view, this brain module detects the relations between contiguous events, generates an explanation for the relationship, and then weaves that explanation into a "personal story".  

In other words, Gazzaniga is proposing that certain structures in the left hemisphere are specialized for causal attribution -- one of the fundamental tasks of social cognition.

However, there is a problem with this proposal, which is that the left hemisphere also controls language function.  It's possible that the right hemisphere also has the capacity to generate causal explanations, but simply can't express them through language.  


The "Fusiform Face Area"

Another area of social cognition that has attracted neuropsychological interest is face perception.  Obviously, the face is a critical social stimulus.

Given the importance of the face as a social stimulus, it makes sense to suggest that there is a module in the brain devoted to face perception.


A Model of Face Perception

Bruce, Young, and their colleagues have offered an influential model of face perception involving a number of different cognitive modules or systems:





Prosopagnosia

Of particular interest to psychologists who study object perception is a neuropsychological syndrome known as visual object agnosia, in which patients can describe objects but cannot name them, recognize them as familiar, or demonstrate how they are used.  A particular form of visual object agnosia is prosopagnosia, in which the patient can perceive and describe a face, but cannot recognize the face as familiar or name the person to whom the face belongs.  Difficulties in face perception displayed by neurological patients were first described in the 19th century, by Charcot (1883) and Wilbrand (1892); a specific agnosia for faces was first described, and given the name prosopagnosia, by Bodamer (1947).

In terms of the Bruce & Young model of face perception, prosopagnosics appear to have impaired modules for face recognition and name generation (at least), but the module for structural description is spared. 

In the "pure" form of prosopagnosia, the patient has difficulty recognizing familiar faces, but has no difficulty with other objects.  This suggests that there is a brain system (or set of systems) that is specialized for the perception of faces as opposed to other objects.  In fact, prosopagnosic patients typically show bilateral damage in a portion of the visual association cortex at the boundary of the occipital and temporal lobes (Brodmann's areas 18, 19, and 37).  Kanwisher has named this area of the brain, centered on the fusiform gyrus, the fusiform face area (FFA). 

Actually, the identification of a specific area of the brain specialized for the perception of faces began with studies of inferotemporal (IT) cortex in monkeys.  In the early 1960s, Charles Gross and his colleagues, employing the technique of single-unit recording, discovered that IT played a role in vision -- that is, certain neurons in IT fired when specific visual stimuli were presented to the animals (they also discovered analogous neurons in the superior temporal cortex responsive to auditory stimulation).  As research progressed, they stumbled on a bundle of neurons that failed to respond to any of the visual stimuli presented to them so far.  More or less by accident, they waved their hands in front of the screen -- and the neurons responded.  In 1969, Gross and his colleagues reported that these cells were specifically responsive to hands -- in particular, monkey hands.  In 1984, Gross and his colleagues reported another bundle of IT neurons that responded specifically to faces -- and, in particular, to monkey faces.  As Gross recounts the story, this research went virtually unnoticed by the wider field of neurophysiology for more than a decade ("Processing the Facial Image: A Brief History" by C.G. Gross, American Psychologist, 60, 755-763, 2005).

Based on neuropsychological studies of prosopagnosic patients, Justine Sergent (1992) had identified the fusiform area as critical for face perception.  Her ideas were confirmed by Kanwisher's neuroimaging studies of neurologically intact subjects.



Given the status of humans as social animals, and the critical importance of the face for social behavior, it makes sense that a specific area of the brain might have evolved to process information about the face -- just as other areas of the brain, such as Broca's and Wernicke's areas, have evolved to process information about speech and language.  And precisely because the proposal makes evolutionary sense, it has been widely accepted.  As Kanwisher put it: Face Recognition is "a cognitive function with its own private piece of real estate in the brain" -- namely, the fusiform gyrus.


Further analysis has suggested that the FFA consists of at least six separate areas, or "face patches", each activated in response to a particular feature of the face, such as the separation between the eyes.

However, there are some problems with the available research.

In the first place, prosopagnosia is not always, or even often, specific to faces.  Most prosopagnosics have deficits in recognizing objects other than faces, too. 

Another, more important problem is that the standard tests of prosopagnosia typically contain a confound.





There's a big difference between the two tasks.  The what question requires the person to categorize the object only at the "basic object" level -- to recognize a house as a house, and a car as a car.  But the who question requires the person to categorize the face at a subordinate level -- to recognize a face not simply as a face, but as a specific face attached to a particular person.  

A better comparison would be to ask patients not only to recognize particular faces, but also to recognize particular objects -- their own house as opposed to a neighbor's, or the White House as opposed to Buckingham Palace, for example.  This rigorous comparison, which controls for the level of object categorization, is rarely performed.

 

 

Coupled with this argument about level of categorization is an argument about expertise.  Novices in a domain tend to identify objects in that domain at the basic-object level of categorization -- to say "bird" when we see a bird, "cat" when we see a cat.  But bird and cat experts don't do this.  They identify objects at subordinate levels of classification -- they say "robin" or "sparrow", or "calico" or "Blue-point Siamese".  

The acquisition of expertise in a domain is indicated by the entry-level shift:  At initial stages of training, novices are much slower to classify objects at the subordinate level, compared to the basic level.  But as training proceeds, the latency difference progressively diminishes, so that experts classify objects at the subordinate level as quickly as they classify them at the basic level.  The entry-level shift, therefore, provides an objective index of expertise.
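As a rough illustration, the entry-level shift can be expressed as the shrinking difference between subordinate-level and basic-level response times over training.  The following sketch uses invented response-time data and hypothetical variable names; it simply shows how the index would be computed.

```python
# Minimal sketch of computing the entry-level shift: the subordinate-minus-basic
# response-time difference, tracked across training sessions. Latencies are
# invented for illustration only.

mean_rt_ms = {
    # session: (basic-level RT, subordinate-level RT), e.g. "bird?" vs. "pelican?"
    1: (620, 950),
    3: (610, 820),
    5: (605, 700),
    7: (600, 615),
}

for session, (basic, subordinate) in sorted(mean_rt_ms.items()):
    shift = subordinate - basic
    print(f"session {session}: entry-level shift = {shift} ms")

# As expertise develops, the shift approaches zero: experts classify objects at
# the subordinate level about as fast as at the basic level.
```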

 

In fact, Isabel Gauthier, Michael Tarr, and their colleagues (Gauthier was Tarr's graduate student at Yale; she is now at Vanderbilt, he is now at Carnegie-Mellon) have performed a number of interesting studies, employing both experimental and brain-imaging methods with both normal subjects and brain-damaged patients, which strongly indicate that the fusiform area is not dedicated to face recognition, but rather is specialized for classification at subordinate levels of categorization.  On this view, prosopagnosia reflects a more general deficit in recognizing objects at a particular (subordinate) level of categorization.  

Some of these experiments involved "natural" experts in cars and birds.  Consider, for example, a brain-imaging study of visual object classification, in which subjects were asked to classify objects at various levels of categorization (Gauthier, Anderson, Tarr, & Skudlarski, 1997).  For example, they might be presented with a picture of a bird and then asked one of two questions: Is it a bird? -- a basic-object-level question; or Is it a pelican? -- a subordinate-level question.  The basic finding of the experiment is that classifying birds at the subordinate level of categorization activated the same "fusiform face area" that is involved in face recognition.

In another experiment, bird experts activated the fusiform area when they classified birds but not cars, while car experts activated the fusiform area when they classified cars but not birds.

 

 

In other studies, subjects learned to classify artificial objects, designed by Scott Yu, working in Tarr's laboratory, known as "greebles" (a name coined by Robert Abelson, a distinguished social psychologist who was Tarr's colleague at Yale).  The greebles are designed so that they fall into several "families", defined by body shape, and two "genders", defined by whether their protuberances point up or down.  Greeble novices activated the FFA when they identified faces, but not greebles.  But when they became greeble experts, the same subjects activated the FFA when they classified both kinds of objects.

 

Lest anyone think that greebles are "face-like", and thus that the comparison is unfair, Gauthier and her colleagues have shown that a particular neurological patient, M.K., who suffers from a visual object agnosia that nonetheless spares face recognition, cannot classify greebles.  Apparently, from the point of view of M.K.'s brain, greebles are not face-like stimuli.

In other studies, the subjects learned to classify snowflakes and fingerprints, with the same results.  

 

 


Yes, every snowflake is different, just as fingerprints are, but there are ways to classify snowflakes based on shared properties.
And since you asked: yes, there are ways of classifying fingerprints, too.

 

The general finding of these studies is that subordinate-level classification activated the fusiform area, whether the task involved recognizing faces, greebles, or snowflakes (or birds or automobiles).  Moreover, fusiform activation increased as subjects became "experts" at making the classification judgments required of them.

On the basis of these sorts of findings, Gauthier and Tarr have proposed that the area surrounding the fusiform gyrus be considered a flexible fusiform area (not coincidentally, also abbreviated FFA), which is involved in classification of all sorts of objects at the subordinate level of categorization.  Faces are one example of such stimuli, but there are others.  The point is that while face recognition epitomizes subordinate-level classification, other recognition tasks can involve subordinate-level classification as well.


Gauthier and Tarr's challenge to the traditional interpretation of the Fusiform Face Area has not itself gone unchallenged.  One serious problem is that of spatial blurring.  That is, the relatively low spatial resolution of standard fMRI (SR-fMRI), typically used in the expertise studies, may not be able to distinguish the face-selective portion of the fusiform "real estate" from another, adjacent area -- actually a separate brain module -- that performs expert classification in other domains.  Put another way, the non-face-selective regions may border the true FFA, and very high-resolution brain imaging may be needed to separate them.

Now, this looks like a reach, but in fact the expertise studies have used SR-fMRI, which has a relatively low spatial resolution.  And there are several studies using high-resolution fMRI (HR-fMRI) which document an FFA that doesn't seem to respond to non-faces.  At the same time, these HR-fMRI studies haven't taken account of expertise.  All they've done is to ask ordinary people to classify cars or birds.  So you can see where this is going.  What's needed is an HR-fMRI study that takes account of expertise in the non-face domain.

Such a study was recently performed by Randal McGugin, working in Gauthier's laboratory (McGugin et al., 2012).  They employed a 7-Tesla (7T) MRI scanner, pretty much the most powerful scanner currently available, and ran it at both SR-fMRI and HR-fMRI settings.  Their subjects represented varying degrees of expertise at automobile identification, as assessed behaviorally prior to the scanning runs (remember, everyone's an expert at face-identification).  During the scanning runs themselves, they were presented with five classes of stimuli -- faces, animals, automobiles, airplanes, and scrambled images (these last served as controls).  Two images were presented at once, and they were asked to determine whether they belonged to the same person, same make and model of car, etc. 






The results of the study are a little complicated, but here's the gist of it:

All of which seems to suggest that Tarr and Gauthier had it pretty much right the first time, and that the "FFA" isn't really selective for faces, but rather is specialized for subordinate-level object-identification, of which face recognition is the best and most familiar example.  Still, McGugin's experiment illustrates the complexity of the project of doing cognitive neuroscience right.  If it's this hard to identify a specific brain module for something as plausibly innate, and as ready for modularity, as face recognition, think how hard it's going to be for other social-cognitive tasks!

Gauthier and Tarr's reconstrual of the FFA remains somewhat controversial, and further research may lead them to revisit their conclusions.  Still, Gauthier's work is good enough that in 2008 she received the Troland Award from the National Academy of Sciences, which honors outstanding young scientists in the field of psychology.
Face selectivity may still be possible, however -- except that the selectivity may be a property of individual neurons, not unlike the "Jennifer Aniston" and "Halle Berry" neurons discussed in the lectures on Social Memory.

Some evidence in this regard comes from a study by Quian Quiroga et al. (Neuron, 2014), employing the same single-unit recording methodology used in their study of "grandmother neurons", described in the lectures on "Social Memory" (see also "Brain Cells for Grandmother" by Rodrigo Quian Quiroga, Itzhak Fried, & Christof Koch, Scientific American, 02/2013).  The research involved patients undergoing surgical treatment for epilepsy, who had micro-electrodes embedded in various portions of their temporal cortex in order to identify the point of origin of their seizures.  As in the earlier study, Quian Quiroga et al. presented these patients with pictures of familiar people (e.g., Jennifer Aniston) and locations (e.g., the Sydney Opera House) and identified a number of units that responded specifically to particular stimuli.  For example, they identified in one patient a unit that responded selectively to photographs of President Bill Clinton, but not to photos of George Bush; as a morphed image gradually switched from Clinton to Bush, the unit ceased activity.  They then looked at responses to a composite of the two presidents' faces (call it "Clintush").  When subjects were first shown a picture of Clinton, they perceived "Clintush" as Bush, and vice-versa -- a phenomenon of sensory adaptation.  The "Clinton neuron" responded accordingly: when the subject perceived the composite as Clinton, the unit became active; when he perceived it as Bush, it was inactive.  So while there are regions of fusiform cortex that are selectively responsive to faces, as Kanwisher argues, and other fusiform areas that are responsive to other categories of stimuli, as Gauthier argues, there also appear to be individual neurons (or, more likely, small clusters of neurons) in the temporal cortex that are differentially responsive to particular faces.  (See also "The Face as Entryway to the Self" by Christof Koch, one of the co-authors of the Quian Quiroga studies, Scientific American Mind, 01-02/2014.)

Perhaps definitive evidence comes from research on the perception of (human) faces in macaques by Le Chang and Doris Tsao at Caltech (Cell, 2017; see also the commentary by Quian Quiroga accompanying the article; for a less technical account of this research, see "Face Values" by Doris Tsao, Scientific American, 02/2019).  Employing a variant of Hubel and Wiesel's classic technique, they recorded from single units in the "face patches" of the fusiform area -- areas of adjacent neurons (there are maybe six of them) that appear to be particularly responsive to faces.  Instead of presenting individual faces (familiar or not), as Quian Quiroga did in the "Jennifer Aniston" study, Chang and Tsao systematically varied various features of the faces, such as roundness, interocular distance, hairline width, skin tone, and skin texture.  They found that individual cells responded to particular features -- or, more precisely, to higher-order combinations of particular features.  Chang and Tsao argue that individual faces can be located at points in a multidimensional space (consisting of 50 or more dimensions!), and individual neurons respond to departures from the average value on each of these dimensions.  If a face does not present the particular combination of features to which an individual cell has been "tuned", the cell will not respond to it; but it will respond to different faces that share that particular higher-order feature.  Chang and Tsao estimate that about 200 individual neurons are required to uniquely identify each face -- a different combination of neurons for each face.  Then, employing techniques pioneered by UCB's Jack Gallant, they were able to reconstruct individual faces from recordings of the activity of a population of about 200 neurons -- not just the front view, but the profile view as well.  It's a great paper; even the Scientific American version is a great read.
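The decoding analysis behind this result is essentially linear: each neuron's firing rate is modeled as a weighted function of the face's coordinates in the multidimensional face space, so a face can be recovered from the population response by inverting that mapping.  Here is a minimal sketch of the idea using ordinary least squares on simulated data; the dimensionality, neuron count, and all variable names are stand-ins for illustration, not Chang and Tsao's actual data or code.

```python
# Minimal sketch of linear face-space decoding in the spirit of Chang & Tsao (2017):
# simulate neurons whose firing rates are linear functions of a face's coordinates
# in a 50-dimensional "face space", then reconstruct a new face's coordinates from
# the population response. All numbers are simulated placeholders.

import numpy as np

rng = np.random.default_rng(1)
n_dims, n_neurons, n_faces = 50, 200, 1000

faces = rng.normal(size=(n_faces, n_dims))    # faces as points in face space
axes = rng.normal(size=(n_neurons, n_dims))   # each neuron's preferred axis (tuning)
rates = faces @ axes.T + rng.normal(scale=0.5, size=(n_faces, n_neurons))  # noisy rates

# Fit a linear decoder mapping population responses back to face-space coordinates.
decoder, *_ = np.linalg.lstsq(rates, faces, rcond=None)

# Reconstruct a new face from its (noisy) population response.
new_face = rng.normal(size=n_dims)
new_rates = new_face @ axes.T + rng.normal(scale=0.5, size=n_neurons)
reconstruction = new_rates @ decoder

corr = np.corrcoef(new_face, reconstruction)[0, 1]
print(f"correlation between true and reconstructed face-space coordinates: {corr:.2f}")
```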

So here's the situation as it now stands (I think).  There isn't a "fusiform face area", exactly, though there are anatomically segregated patches of neurons that are particularly sensitive to faces.  Individual neurons within each of these patches respond to variations in facial features along 50 or more dimensions.  The particular pattern with which these neurons fire is the neural signature of an individual face.  In addition, faces that are very familiar are also represented by individual neurons, or perhaps small groups of adjacent neurons, in the hippocampus -- much the way other memories might be. 

 

Mirror Neurons

In a series of studies, a group of Italian neuroscientists studied the behavior of individual neurons while macaque monkeys engaged in various activities, such as grasping a piece of fruit.  The initial goal of this research was to extend the line of research initiated by Barlow, Lettvin, and Hubel and Wiesel (discussed in the lectures on "Social Memory") from the perceptual domain to the domain of action.  Recall that Lettvin and his colleagues discovered single cells in the frog's visual system that appeared to respond to particular features of the stimulus -- edges, points of light and dark, and the like -- "bug detectors", if you will.  Giacomo Rizzolatti and his colleagues (1988) at the University of Parma, in Italy, were doing much the same thing with action.  Recording from single cells in the ventral premotor cortex in macaque monkeys, they tried to identify particular neurons that fired whenever the animals performed specific actions with their hands or mouths -- reaching for an object, for example, or grasping or manipulating it.  In theory, activation of these clusters of neurons enables the organism to perform the specified action automatically.

Subsequently, these investigators discovered that these same neurons also fired when the monkeys, albeit restrained, merely observed such actions displayed by another organism (including, according to a famous "origin myth", a human research assistant licking a cone of gelato).  Di Pellegrino et al. (1992) and Gallese et al. (1996) dubbed these cells mirror neurons, because it seemed that the neurons were reflecting actions performed by other organisms.   

Further search located another set of mirror neurons in the inferior portion of the parietal lobe -- an area that is connected to the premotor cortex.   This makes sense, because while the frontal lobe controls motor activity (such as reaching) by various parts of the body, the parietal lobe processes somatosensory information from those same body parts.


Based largely on a principle of evolutionary continuity, it seems reasonable to expect that humans have mirror neurons, too.  However, practical and ethical considerations make it difficult (though not impossible) to perform single-unit recordings of human brain activity.  Instead, researchers have used various brain-imaging techniques to look for activations in the frontal and parietal lobes when humans observe other humans in action.  Because we only know what is happening in larger brain regions, as opposed to individual neurons, these regions are generally considered to be mirror neuron "systems" (MNSs).

Regions of the frontal and parietal lobes have been designated as a mirror neuron system for action.




 There also appears to be a mirror neuron system for emotion, revealed when subjects observe the emotional reactions of another person, such as pain or disgust.




Here's an elegant study that reveals the operation of the mirror neuron system for action in humans.  Calvo-Merino et al. (2004) used fMRI to record brain activity in skilled dancers while they watched other dancers perform.  Some of the subjects were professional ballet dancers; others were practitioners of capoeira, a Brazilian martial art that has some dance elements to it; there were also control subjects who had no skill in either dance form.  The frontal and parietal MNSs were activated when ballet dancers watched other ballet dancers, but not when they watched capoeira dancers; the MNSs of the capoeira dancers were activated when they watched other capoeira dancers, but not when they watched ballet dancers.  In other words, the MNSs were activated only when the subjects watched other people perform actions which they also knew how to perform.

The discovery of mirror neurons has important implications for theories of the neural substrates of visual perception.  Ordinarily, we would think that the recognition of visible actions would be accomplished by the occipital lobe, which is the primary visual cortex.  But it appears that the same neural system that is involved in generating actions -- the premotor cortex -- is also involved in recognizing them.  At least so far as perceiving actions is concerned, there appears to be more to perception than vision.  Perception of action also recruits parts of the brain that are critical for producing action.  Put more formally, it appears that perception and action share common representational formats in the brain.  The same areas of the brain are involved in perceiving bodily movements as in producing them.  

At the same time, it has been suggested that mirror neurons may be important for monkeys' ability to imitate and understand each other's behavior.  If the same neurons fire when the monkey grasps a cup as when a monkey sees another monkey grasp a cup, the first monkey can understand the second monkey's behavior by saying to itself, as it were, "this is what I'd be doing if I did that".  

Mirror neurons have also been discovered in the perception of facial actions, suggesting that they may play a role in the perception of facial expressions of emotion --  "this is what I'd be feeling if I looked like that". 

Which is a very informal way of illustrating the ideomotor framework for action, which holds that perceiving an action and performing an action are closely linked.  

A human example of the ideomotor framework for action is Alvin Liberman's motor theory of speech perception, which argues that, instead of identifying patterns of speech sounds as such, the listener identifies the articulatory actions in the vocal tract -- movements of the lips, tongue, larynx, and the like -- that would produce those sounds: "this is what I would be saying if I did that".  


Monkeys to Men (and Women)

It did not take long for various theorists to suggest that mirror neurons played a critical role in human social cognition -- in particular, in our ability to understand and to empathize with each other's emotional states.  Unfortunately, documenting mirror neurons in humans is made difficult by the virtual impossibility of doing single-unit recording in living human brains (yes, Quian Quiroga did it, but only in the special circumstance that their subjects were undergoing necessary brain surgery anyway).  Instead, researchers have had to be content with neuroimaging studies of brain activity while subjects are engaged, like the monkeys, in observing other people's actions and facial expressions.  Neuroimaging does not have the spatial resolution that would be required to identify particular neurons associated with particular actions, but it can identify the general locations in the brain where such neurons would lie.  And, in fact, Iacoboni et al. (1999) identified homologous structures in the human brain -- in the inferior frontal gyrus and the inferior parietal lobule -- that are active when observing other people's actions or facial expressions.  These areas of the brain are now called the frontal and parietal mirror neuron systems.  

Mirror neurons are quickly becoming the hottest topic in social neuroscience.  V.S. Ramachandran, one of the world's leading cognitive neuroscientists, has called mirror neurons "the driving force behind the 'great leap forward' in human evolution" -- that is, the neurological difference between humans, on the one hand, and our closest evolutionary relatives and the rest of the animal world, on the other. 



As noted, they've been implicated in the perception of facial expressions of emotion, and also in empathy.  They've been implicated in imitation, which is a basic form of social learning (see the lectures on "Personality and Social Cognition").  It's been suggested that they comprise the neural substrate of the "theory of mind" by which we understand other people's beliefs, desires, and feelings (see the lectures on "Social-Cognitive Development").  It's been suggested (by V.S. Ramachandran) that autistic individuals, who seem to have specific deficits in social cognition, have suffered insult, injury, or disease affecting their mirror neuron systems (for more detail, see the lectures on "Social-Cognitive Development").  And, of course, it's been suggested that girls and women have more, or more developed, mirror neurons than boys and men.  Greg Hickok has called mirror neurons "the rock stars of cognitive neuroscience".

Based on studies of mirror neurons and mirror neuron systems, Gallese and his colleagues (2004) have proposed a unifying neural hypothesis of social cognition.  That is, mirror neurons are the biological substrate of social cognition, creating "a bridge... between others and ourselves". 




In this way, Gallese et al. argue, the activity of the mirror neuron system doesn't simply support thinking about other people (though it does that too); it leads to experiential insight into them.


Qualms About Mirror Neurons...

Maybe.  Frankly, it's just too early to tell.  As Alison Gopnik has noted ("Cells That Read Minds?" in Slate, 04/26/2007), the enthusiasm for mirror neurons is reminiscent of another enthusiasm that marked the very beginnings of cognitive neuroscience -- the distinction between the "right brain" and the "left brain".  Sure, there is considerable hemispheric specialization -- not least, language function appears to be lateralized to the left cerebral hemisphere.  But that doesn't warrant speculation that the right hemisphere is the seat of the unconscious, or that men, with their alleged propensities for "linear thinking", are "left-brained", while women, with their holistic tendencies, are "right-brained".


It's possible that mirror neurons are responsible for everything that makes us human.  But then again...

As noted earlier, the search for mirror neurons was inspired by Lettvin and colleagues' discovery of "bug detectors" in the frog's brain.  But in the years following that very important work, we've learned that even something as simple as bug-detection is much more complicated than some specific neuron firing whenever some specific feature appears in the environment.  If something as simple as telling a frog that there's a bug within its field of vision requires large numbers of neurons working together, think how many neurons are required to tell a human that the person he's looking at right now is sad, despite the fact that she's smiling.  

Besides, it's not even clear that mirror neurons are critical for action perception in macaques (see G. Hickok, "Eight Problems for the Mirror Neuron Theory of Action Understanding in Monkeys and Humans", Journal of Cognitive Neuroscience, 2009).

The central claim about mirror neurons is that they are critical for understanding action.  But Hickok (2011) argues that mirror neurons don't have the semantics that would be required for understanding action.  Those semantics are stored elsewhere in the brain -- along with everything else that we know.  Hickok points out that we can understand actions without being able to execute them, as in the example of Broca's aphasia above.  More prosaically, he points out that we can understand spectator sports without ourselves being able to perform the motor actions we observe the players enacting.  As an alternative, Hickok argues that the MNS plays a role in action selection, supporting sensorimotor integration -- that is, integrating what we observe with what we do in response.  In his view, the MNS is activated not just by any observation, but rather by observation in conjunction with the observer's own goals.  The MNS becomes involved only because the actions of others are relevant for the selection of our own actions.

Do Mirror Neurons Make Us Human?

In The Tell-Tale Brain: A Neuroscientist's Quest for What Makes Us Human (2011), V.S. Ramachandran argues that mirror neurons will play the same role in psychology as the discovery of DNA played in biology.  Reviewing the book, however, the philosopher Anthony Gottlieb offers a demurral ("A Lion in the Undergrowth", New York Times Book Review, 01/30/2011):

Although Ramachandran admits that his account of the significance of mirror neurons is speculative, he doesn't let on just how controversial it is.  In the past four years, a spate of studies has dented every part of the mirror-neuron story.  Doubt has been cast on the idea that imitation and the understanding of actions depend on mirror neurons, and on the theory that autism involves a defect in these systems of cells.  It has even been claimed that the techniques used to detect the activity of mirror neurons have been widely misinterpreted.  Ramachandran may have good reason to discount these skeptical studies, but he surely should have mentioned them.

 

 

... and About Von Economo Neurons

Similar considerations apply to another class of neurons that has been implicated in social cognition, known as von Economo neurons (VENs) after Constantin von Economo, an Austrian neuro-psychiatrist who first described them in 1925.  In the course of his cytoarchitectonic studies of brain tissue (a failed attempt to supplant Brodmann's 1909 mapping of the brain), von Economo identified a large neuron with an unusual spindle shape, and a very sparse dendritic tree, located exclusively in two areas of the frontal lobe: the anterior cingulate cortex (ACC, Brodmann Area 24) and the fronto-insular cortex (Brodmann's area 12).  These are areas of the brain that are damaged in fronto-temporal dementia.  Unlike Alzheimer's disease, in which the primary symptoms involve memory functions, fronto-temporal dementia is characterized by marked deficits in social and emotional functioning.  

It was originally thought that VENs appeared uniquely in hominid brains (e.g., chimpanzees as opposed to macaques), but they have now been found in the brains of other social species, such as elephants.  Still, the presence of VENs in the brains of social animals, especially in regions of the brain associated with social and emotional processes (such as the ACC and the frontal insula), has suggested that they are part of the neural substrate of social intelligence.  

Maybe, but again, at least for now, this is little more than a good story.  

In the first place, we don't really know what the functions of VENs are.  For that matter, we don't know what the function of the ACC is -- does it monitor conflict, correct errors, form part of the reward system, or something else?  The best guess about the ACC is that it's involved in everything that's interesting.

We don't even know if VENs are structurally different from other neurons.  As Terry Deacon has noted, it is entirely possible that they're just on the "high" end in terms of size (and also more spindle-like, less pyramidal), and on the "low" end in terms of the number of dendrites.  It's interesting that VENs are concentrated in the ACC, but that may have more to do with their physiological properties than with their psychological function: by virtue of their size, VENs conduct neural impulses very rapidly; and by virtue of their sparse dendritic foliage, they conduct neural impulses very selectively.  Maybe these are good properties for neurons in a brain system that processes conflict.

Link to video of a debate between V.S. Ramachandran and M.A. Gernsbacher on the nature of mirror neurons.


Beyond Modules

Just as social neuroscience began to settle into the search for dedicated social-cognitive modules in the brain, a new approach to brain imaging has begun to emerge, known as brain mapping (Gallant et al., 2011).  The essential feature of brain mapping is that it records the activity of the entire brain as the subject performs some task -- not just a relatively circumscribed region of interest, such as the fusiform gyrus.  Investigators then attempt to reconstruct the stimulus from the entire pattern of brain activity.  So far, brain-mapping has been attempted with various perceptual tasks that are not particularly social in nature.  But as a matter of historical fact, social neuroscience lags cognitive neuroscience by about 5-10 years, so it is only a matter of time before social neuroscientists turn to brain-mapping as well.
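In practice, this kind of brain mapping usually proceeds by fitting an encoding model that predicts each voxel's response from features of the stimulus, and then identifying (or reconstructing) a new stimulus by asking which candidate best matches the observed whole-brain pattern.  Here is a minimal sketch of that identification logic with a simple ridge-regression encoding model on simulated data; the feature space, voxel count, and variable names are hypothetical illustrations, not Gallant et al.'s actual pipeline.

```python
# Minimal sketch of encoding-model-based "brain mapping": fit a regularized linear
# model from stimulus features to every voxel, then identify held-out stimuli by
# matching predicted to observed brain patterns. All data here are simulated.

import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, n_features, n_voxels = 200, 20, 40, 500

weights = rng.normal(size=(n_features, n_voxels))            # true (unknown) tuning
X_train = rng.normal(size=(n_train, n_features))             # stimulus features
Y_train = X_train @ weights + rng.normal(scale=1.0, size=(n_train, n_voxels))

# Ridge-regression encoding model: stimulus features -> voxel responses.
lam = 10.0
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                        X_train.T @ Y_train)

# Identification: which candidate stimulus best explains each observed test pattern?
X_test = rng.normal(size=(n_test, n_features))
Y_test = X_test @ weights + rng.normal(scale=1.0, size=(n_test, n_voxels))
predicted = X_test @ W_hat                                    # predicted pattern per candidate

correct = 0
for i, observed in enumerate(Y_test):
    sims = [np.corrcoef(observed, pred)[0, 1] for pred in predicted]
    correct += int(np.argmax(sims) == i)

print(f"identified {correct}/{n_test} held-out stimuli from whole-brain patterns")
```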


Levels of Analysis and the Rhetoric of Constraint

The introduction of neuropsychological and neuroscientific methods and theories into the study of social cognition underscores the fact that human experience, thought, and action can be analyzed at three quite different levels:



Each of these levels of analysis is legitimate in its own right, and none has any privileged status over any of the others.

Part of the appeal of cognitive and social neuroscience is that it promises to connect psychology "down" to the biophysical level of analysis, shedding light on the neural substrates of cognition and social interaction.  But some neuroscientists have offered another rationale, which is that neuroscientific evidence can affect theorizing at the psychological and even sociocultural levels of analysis.  The general idea is that neuroscientific findings will constrain psychological theories, forcing us to choose between competing theories based on their compatibility with neuroscientific evidence.

The idea of constraint was explicitly invoked by Gazzaniga in his statement of the agenda for cognitive neuroscience.  Recall that Marr had proposed that the implementational level could be attacked only after work had been completed at the computational and algorithmic levels of analysis.  But Gazzaniga supposes that evidence of physical implementation can shape theory at the computational and algorithmic levels, as well.



The rhetoric of constraint comes up in an even earlier argument for cognitive neuroscience, by Stephen Kosslyn, which drew an analogy to architecture (but see if you can spot the weakness of the analogy).  A little later, writing with Kevin Ochsner, he invoked a version of Marr's three-level organization of cognitive research -- but, like Gazzaniga, he argued that explanations at the computational and algorithmic levels depended on the details at the implementational level.


This viewpoint has sometimes been carried over into social-cognitive neuroscience.

For example, in an early paper, Cacioppo and Berntson (1992) asserted that "knowledge of the body and brain can usefully constrain and inspire concepts and theories of psychological function."  In a more recent advertisement for social neuroscience, Ochsner and Lieberman (2001) argued that social neuroscience would do for social psychology what cognitive neuroscience had done for cognitive psychology, "as data about the brain began to be used to constrain theories about... cognitive processes...."



And Goleman (2006) drew an analogy to theorizing at the sociocultural level, arguing that "the basic assumptions of economics... have been challenged by the emerging 'neuro-economics'... [whose] findings have shaken standard thinking in economics."




To some extent, Fiske and Taylor (2013) appear to have bought into this idea when they write that "Brains matter..." because "social-cognitive processes can be dissociated on the basis of distinct neuroscientific responses".  By this they seem to mean that neuroscientific evidence can identify different cognitive modules underlying social cognition. 



With respect to the Goleman quote, it has to be said that the basic assumptions of economics were shaken, all right, but not by any data about the brain.  They were shaken, first, by Herbert Simon's observational studies of organizational decision-making, which led him to propose satisficing as a way of coping with bounded rationality; and, second, by Kahneman and Tversky's questionnaire studies of various kinds of choices, which led them to their heuristics-and-biases program for studying judgment and decision-making, and to prospect theory as an alternative to traditional rational choice theory. 

It is instructive, as well, to consider how the rhetoric of constraint worked out in cognitive neuroscience -- which, after all, is what inspired social neuroscience in the first place.  Cognitive neuroscientists often cite cases of hippocampal amnesia (such as Patient H.M.) as an example of how neuroscientific findings led to a revolution in theories of memory.  While it is true that studies of H.M. and other amnesic patients, as well as neuroimaging studies, showed that the hippocampus played an important role in memory, the explanation of precisely what that role was has changed greatly over the years.  In each case, the explanation of the function of the hippocampus followed, rather than preceded, an advance in the theory of memory. 

  1. First, the conclusion was simply that the hippocampus was important for "learning".
  2. Then that the hippocampus was important for long-term but not short-term memory.
  3. Then that the hippocampus was important for encoding but not retrieval.
  4. Then that the hippocampus was important for deep but not shallow processing.
  5. Then that the hippocampus was important for declarative but not procedural memory.
  6. Then that the hippocampus was important for episodic but not semantic memory.
  7. Then that the hippocampus was important for explicit but not implicit memory (this remains my personal favorite).
  8. And, most recently, that the hippocampus was important for relational but not non-relational memory.
The point is that the hippocampal damage remained constant through all this.  Our understanding of the function of the hippocampus changed as our understanding of memory changed.  But the discovery that the hippocampus played a role in memory did not alter our understanding of memory processes at all.  

As a general rule, it seems that the proper interpretation of neuroscientific data ultimately depends on the availability of a valid psychological theory of the task being performed by the patient or the subject.  The constraints go down, not up -- meaning that psychological theory constrains the interpretation of neuroscientific data.  Unless the psychology is right, the neuroscience will be wrong.

Here's one possible counterexample: Zaki et al. (2011) employed fMRI in an attempt to resolve a longstanding theoretical dispute over the nature of social influence in Asch's (1956) classic study of conformity.  Asch had found that even in a fairly simple perceptual task, subjects tended to go along with the incorrect judgments of a unanimous majority.  But Asch himself raised the question of whether his subjects' behavior reflected private acceptance, or conformity at the level of perception -- that is, actual changes in perception in response to social influence -- or merely public compliance, or conformity at the level of behavior -- that is, conformity in the absence of any change in the subjects' perceptual experience.  Zaki et al. had their subjects rate the physical attractiveness of photographs of faces; the subjects rated the faces again after receiving normative feedback about how their peers had rated the same faces.  There was a clear conformity effect: subjects shifted their ratings in the positive direction if the peer judgments were higher than their initial judgments, and they shifted their judgments lower if the peer judgments were lower.  Simultaneous fMRI recordings showed activation in the nucleus accumbens (NAcc) and orbitofrontal cortex (OFC) during the second rating session; levels of activation in these areas are thought to reflect the values assigned to stimuli, and these areas are also activated when subjects anticipate, and win, monetary prizes.  Zaki et al. concluded that conformity "is accompanied by alterations in the neural representation of value associated with stimuli" (p. 898) -- i.e., that conformity in their experiment occurred at the level of perception, not merely at the level of behavior.   At the same time, the NAcc and OFC perform many and varied functions.  For example, the NAcc is involved in the processing of reward and aversion, and might well be activated just by looking at faces that are rated as especially attractive or unattractive.  And the OFC is important for decision-making, and might be activated by the process of considering the relationship between the subjects' own judgments about the faces and those of their ostensible peers.  In any event, the functions of both NAcc and OFC were determined by behavioral and self-report evidence in the first place, so it can't be said that the neuroscientific facts adjudicated between the psychological theories all on their own.

And another one: Robin Dunbar's (2014) social-brain hypothesis holds that the size of the cerebral cortex permits, and also constrains, the number of social contacts one can have.  He argues that our brain size, big as it is (relative to the rest of our bodies), evolved to allow us to deal with social groups of about 150 people.  More than that, and we just don't have enough neurons to keep track of everyone.  He and his colleagues have reported a brain-imaging study showing that the volume of the ventromedial prefrontal cortex (VMPFC) predicts both the size of an individual's social network and the number of people whose mental states we can keep in mind at once (Lewis et al., NeuroImage, 2011).  While it's not clear that this result is anything more than a corollary of the well-known limits on short-term or working memory, the social-brain hypothesis at least relies on biological data to set hypothetical upper limits on both the number of acquaintances and levels of intentionality.


 

Pardon the Dualism, but...

Only when we know what the mind is doing can we begin to understand what the brain is doing.

Physiology is a tool for psychology, not an obligation.

(Anonymous, c. 1980)

Psychology isn't just something to do until the biochemist comes 

(Neisser, 1967).

Psychology without neuroscience is still the science of mental life, but neuroscience without psychology is just the science of neurons.

(Kihlstrom, 2010)


Even though it's doubtful that neuroscientific evidence can contribute to psychological theory, neuroscientific evidence is the only way we have of discovering the neural substrates of cognitive or social processes.  And that's enough.

 

This page last revised 05/21/2019.