
Social Neuroscience:

The Footprints of Phineas Gage

 

John F. Kihlstrom

University of California, Berkeley

 

Note:  This is an expanded version of a keynote address presented at a conference on the Neural Systems of Social Behavior, University of Texas, Austin, May 2007.  The paper was subsequently published:

Kihlstrom, J.F.  (2010).  Social neuroscience: The footprints of Phineas Gage.  Social Cognition, 28, 757-782.


 

Lives of great men all remind us

We can make our lives sublime

And, departing, leave behind us 

Footprints on the sands of time.

Henry Wadsworth Longfellow

"A Psalm of Life" (1838)

 

 

This conference marks an important milestone in the evolution of both neuroscience and the social sciences.

The social sciences got their start in the 19th century, as Auguste Comte invented sociology and foresaw the emergence of a "true final science" -- which he refused to call psychology, on the grounds that the psychology of his time was too metaphysical (Allport, 1954). His preferred term was la morale (nothing metaphysical about that!), a science of the individual combining cognition, emotion, and motivation with action. But he really meant psychology. Our metaphysical days are over (mostly), and modern psychology has links to both biology (through physiological psychology) and sociology (through social psychology).

Neuroscience got its start, and its name, only in the early 1960s (Schmitt, Melnechuk, Quarton, & Adelman, 1966). Before that there was just neurology, a term dating from the 17th century, neurophysiology (first appearing in English in 1859), and neuroanatomy (1900). As a biological discipline, neuroscience was initially organized into three branches: molecular and cellular neuroscience, concerned with neurons and other elementary structures of the nervous system -- the legacy of what Elliot Valenstein (2005) has called "the war of the soups and the sparks"; systems neuroscience, concerned with how the various pieces of the nervous system connect and interact with each other; and behavioral neuroscience, concerned with everything else -- but in particular with motor activity, basic biological motives such as hunger and thirst, and the operation of sensory mechanisms -- mostly without reference to mental states as such.

But pretty quickly there began to emerge a fully integrative neuroscience (Gordon, 1990), concerned with making the connections between the micro and the macro, the laboratory and the clinic, and between neurobiology and psychology, followed by specialized neurosciences. First to make its appearance was cognitive neuroscience (M.S. Gazzaniga, 1988), concerned with the neural bases of cognitive functions such as perception, attention, and memory (M.I. Posner & DiGirolamo, 2000; M.I. Posner, Pea, & Volpe, 1982). Some practitioners of cognitive neuroscience defined "cognitive" broadly, so as to include emotional processes as well -- really, any internal state or process that intervened between stimulus and response. But there soon emerged a full-fledged affective neuroscience as well, running parallel to cognitive neuroscience, and intending to do for emotions and feelings what the cognitive neuroscientists had done for perception and attention (Davidson, Jackson, & Kalin, 2000; Panksepp, 1992, 1996, 1998). Now I have not yet seen the term used in print, but I can't help but think that a conative neuroscience, concerned with the neural basis of complex social motivation, not just survival motives like hunger, thirst, and sex, will arise shortly, completing the neuroscientific analysis of Kant's trilogy of mind (Hilgard, 1980).1

In addition, what began as a proposal for a social-cognitive neuropsychology (Klein & Kihlstrom, 1998; Klein, Loftus, & Kihlstrom, 1996), and then morphed into a social-cognitive neuroscience (Blakemore, Winston, & Frith, 2004; Heatherton, 2004; M.D. Lieberman, 2005, 2007; Ochsner & Lieberman, 2001), has now begun to evolve into a full-fledged social neuroscience (Cacioppo & Berntson, 1992; Cacioppo, Berntson, & McClintock, 2000; Cacioppo et al., 2005; Cacioppo et al., 2006; Harmon-Jones & Devine, 2003).

 

The Social Neuroscientific Approach

When Stan Klein and I suggested that social psychology take an interest in neuropsychology (the term I at least still prefer, because it puts equal emphasis on mind and brain), our idea was only that brain-damaged patients might provide an interesting vehicle for advancing social-psychological theory.



For example, amnesic patients do not remember the things they have done and experienced, but they seem to retain their identities, and even appreciate how their personalities may have changed with time and circumstance (Klein, Cosmides, & Costabile, 2003; Klein, Cosmides, Costabile, & Mei, 2002; Klein et al., 1996; Klein, Loftus, & Kihlstrom, 2002). This finding provided multimethod confirmation of what we already believed from priming studies -- that episodic and semantic forms of self-knowledge are represented independently in memory (Klein & Loftus, 1993; Klein, Robertson, Gangi, & Loftus, 2007).


There are other examples. Autistic children appear to lack a theory of mind, which prevents them from engaging in even elementary processes of person perception, such as inferring the attitudes and beliefs of other people (e.g., Baron-Cohen, 1995; Tager-Flusberg, 2007). And, although it seems that they only know their own minds (Baron-Cohen, 2005), they may even lack a theory of their own minds -- an essential component of the sense of self (Toichi et al., 2002; but see Klein, Chan, & Loftus, 1999). Patients with frontal-lobe damage appear to have difficulties in self-regulation (Posner, Rothbart, Sheese, & Tang, 2007; Tucker, Luu, & Pribram, 1995) and self-awareness (Stuss, 1991). Patients suffering from anosognosia make strange causal attributions about their problems in memory, language, perception, and voluntary movement (McGlynn & Schacter, 1989; Prigatano & Schacter, 1991; Schacter, 1990). Commissurotomy patients, too, explain their anomalous behaviors in peculiar ways (Gazzaniga, 2000; Gazzaniga et al., 1996). Prosopagnosics appear to have a specific deficit in recognizing faces (Bodamer, 1947; Damasio, Damasio, & Van Hoesen, 1982), which draws attention to the other information available for this purpose -- the sound of a person's voice, their body, their gait and other gestures -- and forces us to ask deeper questions about nonverbal behavior. There is also a whole set of "delusional misidentification syndromes", such as Capgras syndrome (the belief that a familiar person has been replaced by an impostor) and reduplicative paramnesia (the belief that a familiar person or place has been duplicated), which raise fundamental questions concerning the cognitive basis of close social relationships (Feinberg, 2009).

 

Viewed from a different perspective, though, brain-damaged patients raise interesting questions concerning interpersonal processes. Consider, for example, the role of memory in social relations. In a singles bar, the stereotypical pick-up line is "Come here often?" -- which is essentially a question about memory. We break the ice with new acquaintances by sharing our memories, whether it's our earliest recollections of childhood, where we were on 9/11, or what we did on our summer vacation. When we become friends with a person, to the extent that we become friends, we come to share their memories. And when the friendship breaks up, we suffer a kind of anterograde amnesia for that person's life. Similarly, social groups gain some of their cohesion from the collective memories shared by group members, but not by members of outgroups. Shared memories are part of the glue that binds marriages together. And how many marital disputes are over memory, including forgotten anniversaries, slights and insults that simply cannot be forgotten, and conflicting mental representations of the past? ("No I didn't!" "Yes you did!") So, in the absence of conscious recollection, how do people like H.M. maintain social relations, and how do people maintain social relations with them -- or, put another way, what kinds of social relations can you have without memory?

But I take it that social neuroscience is about more than expanding the subject pool beyond college sophomores and people waiting in airport departure lounges. And, frankly, it's more than identifying the neural bases of social behavior, because that's the historical agenda of physiological psychology -- which had been doing just fine, thank you very much, for many years before neuroscience was a gleam in anyone's eye (Milner, 1970; Morgan, 1943; Teitelbaum, 1967). Rather, social neuroscience seems to represent a new point of view about how to do social science -- just as cognitive neuroscience presented itself as a new way of doing cognitive psychology, by looking at the brain as well as the mind.

 

The Rhetoric of Constraint

When cognitive neuroscience began, it was little more than an umbrella term, collecting all the individual disciplines interested in brain and behavior -- "the neurosciences", like "the physical sciences" or "the social sciences" (Quarton, Melnechuk, & Schmitt, 1967; Schmitt, 1970a, 1970b). If you pick up Mike Gazzaniga's encyclopedic follow-up to the Neuroscience Study Program volumes (M.S. Gazzaniga, 1995), the first thing you notice is its title: The Cognitive Neurosciences, plural; and when you look at the table of contents, you see large sections devoted to neural plasticity and development, sensory and motor systems, attention, memory, language, thought, imagery, consciousness, even emotion. There is no mention of social psychology, but even more important there is also no attempt to characterize cognitive neuroscience, singular, as a field as such. There's just the first sentence of the preface: "At some point in the future, cognitive neuroscience will be able to describe the algorithms that drive structural neural elements into the physiological activity that results in perception, cognition, and perhaps even consciousness" (p. xiii). A somewhat more tractable version of the definition appeared in the 3rd edition.

But this can't be right, because the goal of determining the biological substrates of cognition (and other aspects of mental life and behavior) has historically been the mission of physiological psychology (e.g., Morgan, 1943; Teitelbaum, 1967).

 

 

In their undergraduate textbook on Biological Psychology, Rosenzweig and his colleagues described a broader subfield that considers genetics and evolution as well as anatomy and physiology. If "neuroscience" is to be more than a relabeling of these older fields, there should be a distinctive mission, or a distinctive approach, to justify the new moniker.

 


Gazzaniga and his colleagues did better in their undergraduate textbook, where they pointed to how "the disciplines of cognitive psychology, behavioral neurology, and neuroscience now feed off each other, contributing a new view to the understanding of the mechanisms of the human mind" (M.S. Gazzaniga, Ivry, & Mangun, 1998, p. xiii). They invoked Marr's (1982) hierarchical analysis of information processing: a computational level that operates on input representations to generate output representations; an algorithmic level that specifies the processes performed at the computational level; and an implementational level that embodies the algorithms in a physical system. But they balked at Marr's assumption that the computational and algorithmic levels could be understood without reference to the implementational level. "Any computational theory", they asserted, "must be sensitive to the real biology of the nervous system, constrained by how the brain actually works" (p. 20).
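To make Marr's hierarchy concrete, here is a minimal sketch (in Python; the toy task and function names are mine, not Marr's or Gazzaniga's) of the claim at issue: a single computational-level specification can be satisfied by quite different algorithms, and nothing at either of those levels mentions the physical implementation:

```python
# Marr's three levels, illustrated with a toy "computation": sorting.
# Computational level: the input-output mapping alone -- an unordered
# list goes in, the same items come out in ascending order.

def satisfies_spec(inp, out):
    """Computational-level specification: 'out' is 'inp', reordered."""
    return sorted(inp) == out

# Algorithmic level: two different procedures satisfy the same spec.

def insertion_sort(xs):
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# Implementational level: whatever physical system runs the algorithm --
# CPU, neurons, or beer cans connected by string -- is invisible here.
data = [3, 1, 4, 1, 5]
assert satisfies_spec(data, insertion_sort(data))
assert satisfies_spec(data, merge_sort(data))
```

The question Gazzaniga and his colleagues press is whether the top two levels can really be specified, as this sketch pretends, without any reference to the bottom one.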

Similarly, Kosslyn and Koenig (1992, p. 4) portrayed the "dry mind" approach of traditional cognitive psychology and cognitive science as similar to "the attempt to understand the properties and uses of a building independent of the materials used to construct it". This was in contrast to the "wet mind" approach of "the new cognitive neuroscience": "the kinds of designs that are feasible depend on the nature of the materials". By analogy, "a description of mental events is a description of brain function, and facts about the brain are needed to characterize these events".

Along the same lines, Ochsner and Kosslyn (1999, p. 324) illustrated the cognitive neuroscience approach with "the cognitive neuroscience triangle" (p. 325), with cognitive abilities at the apex, and computation and neuroscience at the bottom corners. As they put it, "Abilities is (sic) at the top because that is what one is trying, ultimately, to explain, and neuroscience and computation are at the bottom because the explanations rest on conceptions of how the brain computes" (p. 324). Explanations of cognitive abilities rest on conceptions of how the brain computes. This is what some cognitive neuroscientists mean when they say that psychological theories are constrained by neuroscientific evidence. The idea is that evidence about brain structure and function will somehow determine which theories of cognitive function are right, and which are wrong.

What I have called the rhetoric of constraint has been echoed by some social neuroscientists as well. For example, Cacioppo and Berntson (1992) wrote that "knowledge of the body and brain can usefully constrain and inspire concepts and theories of psychological function..." (p. 1025). Similarly, Ochsner and Lieberman (2001, p. 726) argued that "cognitive psychology underwent [a] transformation as data about the brain began to be used to constrain theories about the cognitive processes underlying memory, attention, and vision, among other topics" -- with the implication that social psychology would undergo a similar transformation as data about the brain began to be used to constrain theories about the cognitive processes underlying social interaction. And Harmon-Jones and Devine (2003, p. 589) discussed "the power of neuroscientific methods to address processes and mechanisms that would not be possible with the traditional methodological tools of the social psychologist". The implication of all three quotations is that social psychology will be a better psychology for taking neuroscientific evidence into account.

Now Ochsner and Lieberman (2001) are surely right that social psychology "gained greater conceptual and explanatory power" (p. 726) as it stepped down from the behavioral level to the cognitive level -- thus replacing the objective situation of classical experimental social psychology with the subjectively perceived situation of modern cognitive social psychology. But it is not at all clear that either cognitive or social psychology gains additional "conceptual and explanatory power" by taking a further step down from the cognitive to the neural level of analysis.

A softer version of the rhetoric of constraint is found in the writings of Daniel Goleman, who has recently done for social intelligence (Cantor & Kihlstrom, 1987, 1989; Kihlstrom & Cantor, 2000; see also Landy, 2006) what he did earlier for emotional intelligence (Goleman, 1995; Salovey & Mayer, 1989). For Goleman, the study and application of social intelligence -- which he characterizes as having been a "scientific backwater" (p. 330) up to now -- will be reinvigorated by the ongoing discovery of mirror neurons and other aspects of a social brain hard-wired by evolution to "orchestrate our activities as we relate to other people" (p. 324). Exactly how that is supposed to happen isn't clear, but Goleman insists that knowledge of brain function is going to revolutionize our understanding of social intelligence, and of social behavior in general. Still, it has to be admitted that the revolution in economics, which Goleman takes to be an inspiration for social psychology, didn't come from neuroimaging, but from Herb Simon (Simon, 1955), Kahneman and Tversky (Tversky & Kahneman, 1974), and others like them, armed only with field observation, interview protocols, and paper-and-pencil questionnaires -- and there are Nobel Prizes to prove it.

To some extent, the rhetoric of constraint is a reaction to a kind of functionalism that characterized classical cognitive psychology and cognitive science -- the idea that the mind was software that could be implemented by any sufficiently complex physical system -- whether a brain, or a silicon chip, or a bunch of beer cans connected by string. This functionalism lies at the heart of what John Searle has called "strong artificial intelligence", which asserts that a machine that could perform the same functions as a mind would have the same mental states as minds do (J. Searle, 1980). And it also lies at the heart of the computational model of the mind, which asserts that the mental processes that generate mental states can be thought of as algorithms that operate on symbolic representations. And it's just this kind of computational functionalism that drives some cognitive scientists (like Searle) crazy. Against the Marvin Minskys of the world, who assume that the brain is a machine made of meat, they argue that brains are not just machines, or at any rate that they are very special machines, and that mental life has the properties it does precisely because it's produced by brains. Put another way, our mental states and processes are the way they are because our brains are the way they are. 

But it is one thing to assume that mental life is a product of brain activity, and another thing entirely to assert that knowledge of how the brain works constrains the kinds of theories we can have about mental life. In fact, it appears that precisely the reverse is true: psychological theory constrains the interpretation of neuroscientific data. I've discussed this issue elsewhere (Kihlstrom, 2006; see also http://socrates.berkeley.edu/~kihlstrm/SPSPDialogue06.htm), and I won't belabor the point now, except to reprise my favorite example: the amnesic patient H.M., who put us on the road toward understanding the role of the hippocampus in memory. But what exactly is that role? The fact is, our interpretation of H.M.'s amnesia, and thus of hippocampal function, has changed as our conceptual understanding of memory has changed. First, H.M. was thought to have lost his recent memory (Scoville & Milner, 1957); then to have lost long-term but not short-term memory (Atkinson & Shiffrin, 1968); then to have suffered a retrieval rather than an encoding failure (Warrington & Weiskrantz, 1970); then to be capable of shallow but not deep processing (Butters & Cermak, 1975); then to have lost declarative but not procedural memory (N.J. Cohen & Squire, 1980); then episodic but not semantic memory (Schacter & Tulving, 1982); then explicit but not implicit memory (Graf & Schacter, 1985; Schacter, 1987); then declarative but not nondeclarative memory (Squire & Knowlton, 1994); and now, most recently, relational but not non-relational memory (N.J. Cohen et al., 1999). Here, clearly, neuroscientific data haven't done much constraining: the psychological interpretation of H.M.'s brain damage, and its implication for cognitive theory, changed almost wantonly, as theoretical fashions changed in psychology, while the neural evidence stayed quite constant.

Look at it another way. Suppose you had no idea where H.M. had sustained his brain damage. You just learned about this patient who couldn't seem to remember recent events. However, further careful testing, of H.M. and of other patients like him, employing the paradigms of cognitive psychology, revealed that he suffered a specific impairment of explicit memory that spared implicit memory. The conclusion that there are two expressions of memory, explicit and implicit, and that they are dissociable, might be (and indeed I think it is) a substantial advance in cognitive theory. But it's an advance that comes from behavioral data -- from how H.M. and his confreres performed on tests of free recall, stem-completion, pursuit-rotor learning, and the like. It doesn't matter where his brain damage is.

There seems to be nothing about the anatomy and physiology of the medial temporal lobe that dictates that it should be involved in conscious recollection, as opposed to emotion or mental imagery. If you knew that H.M. dissociated explicit and implicit memory, and then found out that his brain damage was in the amygdala instead of the hippocampus, you'd start thinking that the amygdala had something to do with conscious recollection. You wouldn't say "Oh, no, that can't be right, because the amygdala is too medial, or too anterior to be involved in memory; or the particular neurons that make up the amygdala, and the neurotransmitters that they use, aren't right for memory processing. You'd better take another look at that structural MRI". Instead, you'd say "Huh! OK, that's interesting, maybe the amygdala has something to do with conscious recollection" -- and then, perhaps, you'd say "Gee, I wonder what the hippocampus does".

But if you're interested in the neural bases of memory, then neuroscientific evidence is obviously critical. You can start with the hippocampus, and then try to figure out whether any of the other structures to which it's connected also play a role in memory. And, in that way, you can determine that the amygdala doesn't play a role in memory after all, but that the perirhinal and parahippocampal areas are also critical -- not just because they pass sensory information to the hippocampus, which would be trivial, but because they have some memory functions independent of the hippocampus (L.R. Squire & Zola-Morgan, 1991). And then you conclude that there's a whole system of medial-temporal lobe structures that are involved in various aspects of memory.

And that's the way it works everywhere else we look in cognitive neuroscience (Coltheart, 2006a, 2006b) -- and, for that matter, in psychology more broadly (Hatfield, 1988, 2000). Once we have a good description of some process at the psychological level of analysis, then we try to determine how the brain does it, and the presence of a valid psychological theory allows us to make valid interpretations of what we see at the neural level. If the psychological analysis is wrong, the analysis of neural function will be wrong as well. That's because cognitive and social neuroscience depend on cognitive and social psychology; but cognitive and social psychology don't depend on neuroscience. The constraints go down, not up. As Randy Gallistel, a cognitive neuroscientist if ever there was one, has put it: "An analysis at the behavioral level lays the foundation for an analysis at the neural level. Without this foundation, there can be no meaningful contribution from the neural level" (Gallistel, 1999, p. 843). Or, as I like to put it, psychology without neuroscience is still psychology, but neuroscience without psychology is just neuroscience.2 

 

The Doctrine of Modularity

What cognitive and social neuroscience can do is explicitly link cognition and social interaction with the brain. Stan Klein and I began our paper with the observation that "For a very long time psychology thought it could get along without looking at the brain" (Klein & Kihlstrom, 1998, p. 228). Everybody understood that the mind is what the brain does, but very few people tried to figure out the details. Partly the reasons were ideological: first there was the Skinnerian "black box" doctrine of the empty organism, and later the computational functionalist notion of the brain as a universal Turing machine -- which, if you think about it, can be just a dressed-up form of behaviorism. But it wasn't just ideology. As Posner and DiGirolamo (2000) noted, Lashley's doctrine of mass action and equipotentiality reinforced the idea that cognitive processes weren't, and indeed couldn't be, localized.

The situation in 1967, when I took introductory psychology, was pretty representative. Here's a depiction of functional localization in my textbook, by Morgan and King (1966) -- both of whom were distinguished physiological psychologists (Morgan will be well-known to Texans of a certain age). When it came to the brain, we were taught the landmarks that separated the various lobes, and we were also taught that there were small areas of each lobe devoted to particular elementary functions: motor control in the frontal lobe, touch in the parietal, audition in the temporal, and vision in the occipital. The rest was "association cortex" (p. 710) -- with the anterior portion perhaps specialized for thinking and problem-solving, and the posterior portion having to do with complex perceptual functions, and "symbolic speech" (p. 713) in Broca's and Wernicke's areas. It all pretty much followed from traditional stimulus-response associationism: learning, thinking, and all the rest were mediated by associations, and associations were formed, and stored, in the association cortex as a whole.

The specialized areas for speech and language, of course, were the cracks in the dike, and pretty soon, this scheme began to break up. Inspired by Chomsky's idea that there is a language organ, Fodor (1983) postulated a set of mental modules interposed between transducers that make representations of sensory stimulation available to other systems, and central systems that form inferences and beliefs. These modules have a number of interesting properties, but for our purposes three are paramount. First, they are domain-specific -- there might be a visual module for the analysis of three-dimensional spatial relations, which applies principles like linear perspective to create the perception of depth. Second, they are informationally encapsulated, meaning that their operations are not affected by what's going on elsewhere in the system -- which is why we still see the Ponzo illusion even after we've understood how it works. Third, and most important for present purposes, they are associated with a fixed neural architecture -- there's some part of the brain that has the neural machinery that implements the module's activity.
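As a rough illustration of the second property, here is a minimal sketch (in Python; the depth-cue rule and the numbers are entirely hypothetical) of what informational encapsulation amounts to: the module computes over its own inputs only, so beliefs held by the central systems cannot penetrate it, and the illusion survives:

```python
# A toy sketch (no one's published model) of an informationally
# encapsulated perceptual module: perceived size is computed from
# retinal size plus a linear-perspective depth cue, and nothing else.

def perceived_length(retinal_length, depth_cue):
    """Domain-specific module: applies size constancy to a depth cue.
    It has no parameter for, and no access to, central beliefs."""
    return retinal_length * (1.0 + depth_cue)

# Central system: the observer knows perfectly well that the two
# lines in a Ponzo figure are physically equal...
beliefs = {"the two lines are equal": True}

# ...but the module never consults 'beliefs', so equal retinal
# lengths still yield unequal percepts -- the illusion persists.
upper = perceived_length(10.0, depth_cue=0.3)  # line near the "horizon"
lower = perceived_length(10.0, depth_cue=0.0)  # line in the "foreground"
assert upper > lower
```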

While it's the goal of cognitive psychology (and cognitive science more broadly) to work out how these modules work at the psychological level of analysis, the defining agenda of cognitive neuroscience is to identify the neural correlates of these modules in some centers, or systems of centers, in the brain. Without something like the doctrine of modularity, cognitive neuroscience doesn't make any sense. If all mental life were just a matter of associations formed by a general-purpose information-processor -- or, for that matter, systems of productions operating on symbolic representations (Anderson, 1976), or even patterns of activations across a connectionist network (McClelland & Rumelhart, 1986; Rumelhart & McClelland, 1986) -- we wouldn't be interested in the neural bases of different mental functions, and we wouldn't need any neuroscience beyond molecular and cellular biology. 

Of course, the idea of functional specialization has its origins in phrenology (Finger, 1994; Gross, 1998a) -- although it was foreshadowed in the work of Emanuel Swedenborg (Swedenborg, 1740/1845-1846), who dabbled in anatomy before he became a Christian mystic (Gross, 1998b). For the most part, prior to the 19th century, the brain was considered to be a single organ. But as is well known, first Gall and then Spurzheim, both of whom were distinguished neuroanatomists, identified some three dozen separate mental faculties, including propensities like secretiveness and acquisitiveness, sentiments like cautiousness and self-esteem, perceptive abilities like size and weight, and reflective abilities such as comparison and causality -- each localized in a different part of cerebral cortex, as revealed by bumps and depressions in the skull (Gross, 1998c).

The phrenologists' evidence was terrible, of course, and their assertions were vigorously challenged by Pierre Flourens and others, who argued for an early version of the law of mass action. But the tide turned in 1861, when Paul Broca correlated "motor" (expressive) aphasia with damage to the left frontal lobe -- Stanley Finger (1994, p. 376) calls it "the revolution of 1861". Modern cognitive neuroscience has now gone on to identify dozens of brain centers for specific functions. Here is a map, published in the New York Times seven years ago (March 14, 2000), showing the locations of a host of centers for things like exact versus approximate mathematical computations, the perception of speed and motion, and the like. The general-purpose association cortex was already getting carved up pretty quickly, as cognitive neuroscience advanced its agenda of identifying brain systems associated with mental systems.

 

Phineas and the Phrenologists

Of course, none of these areas had much of anything to do with social behavior per se. By contrast, one of the things that has struck me about the classical phrenologists' map is how social many of their faculties were. While all the phrenological faculties represented "higher" mental functions, as opposed to "lower" sensory and motor abilities, more than half of the three-dozen or so faculties listed by Spurzheim (1834) were affective as opposed to intellectual in nature, and almost half of them are legitimate topics for personality and social psychology. Consider these faculties, from the 37 listed in Spurzheim's final system (the numbers reflect Spurzheim's taxonomy):

#1, Destructiveness; 
#2, Amativeness (conjugal love); 
#3, Philoprogenitiveness (parental love); 
#4, Adhesiveness (social contact and friendship); 
#5, Inhabitiveness (attachment to home); 
#6, Combativeness; 
#7, Secretiveness; 
#8, Acquisitiveness; 
#10, Cautiousness; 
#11, Approbativeness (concern for the opinions of others); 
#12, Self-Esteem; 
#13, Benevolence; 
#14, Veneration; 
#16, Conscientiousness; 
#17, Hope; 
#20, Mirthfulness; and 
#21, Imitativeness. 

That's 17 out of 37 right there. You could even make a case for 

#22, Individuality (a knowledge-gathering disposition, something like the need for cognition); 
#33, Language (as a vehicle for communication rather than a mode of knowledge representation or a tool for thought); and 
#34, Causality (if you think about attribution theory) -- not to mention 
#?, The desire to live. 

Social neuroscience really begins here. 

And here's where Phineas Gage comes in (you were perhaps wondering when I was going to get around to Gage?).3 We all know the basic story (Macmillan, 1986, 2000; see also Macmillan's "Phineas Gage" website, http://www.deakin.edu.au/hmnbs/psychology/gagepage/index.php):



Cavendish, Vermont, September 13, 1848, 4:30 PM. This railway construction foreman, while placing explosives in some rock, accidentally blasted his tamping iron through his head, entering under his left cheekbone and exiting from the top of his skull, destroying most of his left frontal lobe. Treated by John Martyn Harlow, a local physician, Gage survived, eventually dying in San Francisco on May 21, 1860 (Fred Gage, the distinguished neuroscientist at the Salk Institute and pioneer in the study of neurogenesis, is a distant relative). 

Medical interest in the case initially focused on the fact that Gage lived at all -- not to mention that he continued to function reasonably well. But most modern authorities, such as Hanna and Tony Damasio (A.R. Damasio, 1994; H. Damasio et al., 1994), focus on the claim that, after his accident, "Gage was no longer Gage". Or, as some wag wrote (Macmillan, 2000, p. 42):

A moral man, Phineas Gage,

Tamping powder down holes for his wage,

Blew the last of his probes

Through his two frontal lobes;

Now he drinks, swears, and flies in a rage.

Of course, we have to take some of this with a grain of salt. As Malcolm Macmillan has cogently demonstrated, many modern commentators exaggerate the extent of Gage's personality change, perhaps engaging in a kind of retrospective reconstruction based on what we now know, or think we do, about the role of the frontal cortex in self-regulation. 

What is not fully appreciated is that, more than a decade before Broca and Wernicke, the Gage case played a role in the debate over phrenology and localization of function (Malcolm Macmillan, 2000; Macmillan, 1986) (see also Barker, 1995). John Martyn Harlow's initial (1848 and 1849) reports of the case merely emphasized the fact that Gage had survived. An 1850 report on the case by Henry J. Bigelow, the Harvard professor of surgery who also examined Gage and was eventually to acquire his skull and tamping iron for what is now the Countway Medical Library, called it "the most remarkable history of injury to the brain which has been recorded" to date -- remarkable because Gage survived at all, much less continued to function. All these accounts could be interpreted as consistent with Flourens' holistic view of the brain -- that you could lose a lot of tissue and still function just fine, thank you very much. 

But already in 1848, Harlow was hinting that while Gage's intellectual capacities were unaffected by the accident, he had observed changes in his "mental manifestations" -- a piece of phrenological jargon that referred to the affective (and social) as opposed to the intellectual faculties. In 1851, an anonymous article in the American Phrenological Journal insisted that Gage was, in fact, changed by his accident. "Before the injury he was quiet and respectful." But "after the man recovered... he was gross, profane, coarse, and vulgar, to such a degree that his society was intolerable to decent people". Harlow himself, in his final report of 1868, described the "mental manifestations" in some detail, with particular respect to the "equilibrium... between his intellectual faculties and his animal propensities". Nelson Sizer, a prominent American proponent of phrenology (and probably the author of the 1851 APJ article), concluded that the tamping iron had passed out of Gage's head "in the neighborhood of Benevolence and the front part of Veneration" (Macmillan, 2000, p. 350).4 This was 10 years before Broca refuted Flourens. Unfortunately, Gage's brain (as opposed to his skull) was not available for examination, or Harlow and Bigelow might have beaten Broca to the punch, and Gage, not Tan, would have been the milestone demonstration of cerebral localization. Still, just as H.M. became the index case for the new cognitive neuroscience, so Phineas Gage becomes the index case for our new social neuroscience.

 

The Modularity of the Social Mind

If, as I argue, the defining feature of cognitive neuroscience is the search for dedicated cognitive modules in the brain, then the defining feature of social neuroscience is the search for dedicated social modules. The phrenologists had some ideas about what these might be, and so have some more recent social scientists. 

Howard Gardner (1983) explicitly cited Gage as evidence for an interpersonal form of intelligence, defined as "the ability to notice and make distinctions among other individuals", and isolable by brain damage from other intellectual abilities. (Gardner also proposed an intrapersonal form of intelligence, defined as "the ability to gain access to one's own internal, emotional life".)



Taylor and Cadet (1989) offered a more differentiated view of the neurological basis of social intelligence, suggesting that three different "social brain subsystems" were involved: a balanced/integrated cortical subsystem that employs long-term memory to make complex social judgments; a frontal-dominant subsystem that organizes and generates social behaviors; and a limbic-dominant subsystem that organizes and generates emotional responses.


Based on his analyses of autistic children, Simon Baron-Cohen (1995) suggested that the capacity for mindreading -- by which he really means a capacity for social cognition -- is based on four cognitive modules, each presumably associated with a separate brain system, impairments in which presumably cause the "mindblindness" characteristic of autism.



An even more differentiated view has been offered by Daniel Goleman (2006). As he imagines it, the social brain is not a discrete clump of tissue, like MacLean's (1970) "reptilian brain", or even a proximate cluster of structures, like the limbic system. Rather, the social brain is an extensive network of neural modules, each dedicated to a particular aspect of social interaction.



Modularity is all the rage now in both cognitive and social neuroscience (Pinker, 1997), but even Fodor (2000) has expressed doubt about what he has called Massive Modularity -- the idea that the mind, and thus the brain, is nothing but a collection of a vast number of modules, each dedicated to performing a different cognitive activity. 

In the first place, there has to be a mechanism for general-purpose information processing -- the role once assigned to association cortex. This is because we can solve problems other than those that confronted our ancestors on the African savannah in the Pleistocene -- like how to jury-rig carbon-dioxide scrubbers to keep the crew of Apollo XIII alive after their command module lost power and heat. To take another example: the modern phrenological head locates a brain system for the semantic priming of visual words; but it seems extremely unlikely that evolution produced a specialized brain system for processing visual words, for the simple reason that written words were invented only about 6,000 years ago, and the brain hasn't yet had enough time to evolve one.

The greatest gift of evolution was not a mental toolbox of dedicated modules: it was language, consciousness (which gave us something to talk about), and the general intelligence needed to solve problems other than those posed by the environment of evolutionary adaptedness (EEA). In fact, on the assumption that our cognitive modules were adapted to the conditions of the EEA, it seems likely that it was our capacity for general intelligence, and general problem-solving ability, that permitted us to migrate out of the savannah in the first place, so that we now have a permanent presence on every continent (including Antarctica), and in every climate (including outer space).

We don't simply adapt to the environment. We also adapt the environment to us. And that adaptation is enabled by consciousness, language, and most of all, our high general intelligence -- all, arguably, products of biological evolution; but all, certainly, the means by which cultural evolution transcends biological evolution. And to the extent that social behavior is mediated by a general-purpose information-processing system, the project of identifying specific neural correlates of social behavior will fail, because the models and methods of social neuroscience are geared toward identifying domain-specific modules.

So if the brain isn't just an equipotential blank slate, neither is it composed exclusively of dedicated mental modules. What's needed is a system that lies somewhere between Gardner's proposal for a single module for "interpersonal intelligence" and Goleman's "far-flung neural networks" (p. 324) -- a kind of "basic level" analysis that encompasses a limited number of dedicated modules, but still leaves ample neural space for general problem-solving. 

One such proposal comes from Ray Jackendoff (1992, 1994, 2007), a cognitive scientist much influenced by Chomsky and Fodor, who has argued since 1992 that certain aspects of social cognition are modular in nature.5 For example, he has argued that because social organization is unrelated to perceptual structure -- that is to say, the interpersonal meanings assigned to objects and events are not highly correlated with their physical attributes -- the same modules that process perceptual information cannot process information relating to social stimuli.

What kinds of social information-processing modules might there be? Based on considerations of specialized input capacities, Jackendoff has suggested that there might be dedicated modules for face and voice recognition, affect detection, and intentionality. Based on considerations of developmental priority, he has suggested that children have an innate capacity to distinguish between animate and inanimate objects, and to learn proper names -- that is, to think about individual people as such. And based on the work of Alan Fiske (son of Donald, brother of Susan), he has suggested that there are modules dedicated to processing such universal cultural parameters as kinship, ingroup-outgroup distinctions, social dominance, ownership and property rights, social roles, and group rituals. Now, Jackendoff doesn't think that the Liturgy of the Eucharist is hard-wired into anybody's head. But he does think that we come into the world innately equipped to pick up on such things -- just as we come into the world innately equipped to pick up Russian, if that's the language our parents happen to speak. And that innate equipment comes as a set of brain modules.

Without commenting on the specifics, Jackendoff's proposal strikes me as hitting just about the right level of analysis. Certainly there would be good reasons for thinking that evolution might have produced something like a face-perception module, allowing the easy recognition of that most social of stimuli. And sure enough, based on neuropsychological analyses of prosopagnosic patients, as well as neuroimaging studies of neurologically intact subjects, Nancy Kanwisher and her colleagues seem to have identified just such a module in the fusiform gyrus (Kanwisher, McDermott, & Chun, 1997) -- along with an area in the occipito-temporal cortex specialized for the perception of body parts (Downing, Jiang, Shuman, & Kanwisher, 2001), and another one at the temporo-parietal junction specialized for the theory of mind (Saxe & Kanwisher, 2003).

And since then, social neuroscience has proceeded apace. In a recent review, Matt Lieberman (2007) has identified some 21 different social-cognitive functions that have been localized through brain-imaging studies -- some automatic and others controlled, some directed toward internal mental space, others directed towards external appearance and behavior.



Lieberman (2007) has suggested that the distinction between externally and internally focused social cognition is an organizing principle that emerges only from a neuroscientific analysis, on the grounds that internally focused modules tend to be located in medial portions of frontoparietal cortex, while externally focused modules tend to be located in lateral portions of frontotemporoparietal cortex. If so, that would contradict the position, taken in this paper, that neuroscientific findings do not, and cannot, constrain psychological theory. Lieberman's distinction may be the exception that tests the rule. On the other hand, the internal-external distinction is similar to other distinctions, already well established in nonsocial cognition, between perception-based and meaning-based knowledge (Anderson, 1995; Paivio, 1986), and between perceptually driven and conceptually driven processing (Roediger & McDermott, 1993), and it would not be surprising if a similar distinction held in the social domain as well. The separate neural representation of perceptual and conceptual social-cognitive processes implies that they can be dissociated; but behavioral evidence from priming studies has already established that perceptually driven and conceptually driven processes can be dissociated from each other (Blaxton, 1989).

Of course, some of these findings are controversial. To take just one example, the idea that there is a brain module dedicated to identifying faces is one of the most appealing "just so" stories of evolutionary psychology, but establishing the existence of such a module turns out to be no trivial matter. For one thing, Isabel Gauthier, Mike Tarr, and their colleagues have produced quite compelling evidence that the same "fusiform face area" (FFA) that is activated in face recognition is also activated when experts recognize all sorts of other objects, including greebles -- and that prosopagnosic patients, who appear to have a specific deficit in categorizing faces, also have problems categorizing snowflakes (e.g., Gauthier, Behrmann, & Tarr, 1999; Gauthier & Tarr, 1997; Gauthier, Tarr, Anderson, Skudlarski, & Gore, 1999; Tarr & Gauthier, 2000). So, perhaps, the FFA may not be a face-specific area after all, but rather a "flexible fusiform area" (Tarr & Gauthier, 2000) that is specialized for recognition at subordinate levels of categorization -- of which face recognition is a particularly good, and evolutionarily primeval, example.

To take another example, consider the self-reference effect (SRE), first observed in the "depth of processing" (DOP) paradigm, in which making self-referent judgments about words leads to an advantage in memory over other semantic judgments. Based on the standard interpretation of the DOP effect, Rogers and his colleagues concluded that the self is a highly elaborate knowledge structure (Rogers, Kuiper, & Kirker, 1977).



And based on this interpretation, Craik and his colleagues (Craik, Moroz, Moscovitch, Stuss, Winocur, Tulving, et al., 1999) employed the SRE in an experiment that suggested that self-referent processing was performed by a neural system located in medial prefrontal cortex (mPFC).

 


But it has long been known that the self-reference effect is duplicated when subjects are asked to make other-referent judgments concerning individuals that they know well (e.g., Keenan & Baillet, 1980).

 

 

And, as it turns out, a properly controlled other-referent task activates the same cortical areas as does the standard self-reference task (Ochsner, Beer, et al., 2005; for a review, see Gillihan & Farah, 2005). So, it might be the case that making person-referent judgments is mediated by a system in mPFC, but that the same module makes judgments about both self and others.

 

Unfortunately, it has also long been known that the self-reference effect has nothing to do with the self: the advantage in memory produced by self-referent processing is wholly an artifact of organizational activity (Klein & Kihlstrom, 1986).

 


In the standard SRE experiment, the semantic processing task employs a different sentence frame for every item; but for the self-referent task, the question is always the same: is the item self-descriptive or not? So, self-referent processing encourages subjects to sort items into two categories -- those that are self-referent and those that are not. The upshot is that the typical self-reference task involves organizational activity, while the typical semantic task does not. When Stan Klein cleverly unconfounded self-reference and organization, he discovered that literally all the variance in recall was accounted for by organizational activity, with none left over for self-reference. So, the activation of mPFC by self-referent processing may have nothing to do with social cognition at all, but may simply reflect organizational activity of any sort.
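The logic of that unconfounding can be conveyed with a toy simulation (the recall probabilities below are assumed for illustration; these are not Klein's data): recall is generated from organization alone, yet the standard, confounded comparison would credit self-reference:

```python
import random

random.seed(1)  # reproducible toy data

def simulate_recall(organized, n_items=20):
    """Toy generative model: recall depends only on whether the
    encoding task induced categorical organization of the list."""
    p = 0.6 if organized else 0.3  # assumed recall probabilities
    return sum(random.random() < p for _ in range(n_items)) / n_items

# Standard (confounded) design: the self-referent task is organized,
# the semantic task is not -- so self-reference appears to "win".
self_organized = simulate_recall(organized=True)
semantic_unorganized = simulate_recall(organized=False)

# Unconfounded design adds the two missing cells.
semantic_organized = simulate_recall(organized=True)
self_unorganized = simulate_recall(organized=False)

print(f"self-referent / organized:    {self_organized:.2f}")
print(f"semantic / unorganized:       {semantic_unorganized:.2f}")
print(f"semantic / organized:         {semantic_organized:.2f}")
print(f"self-referent / unorganized:  {self_unorganized:.2f}")
# Recall tracks the organized/unorganized columns, not the
# self/semantic rows -- the pattern Klein & Kihlstrom (1986) reported.
```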

To my knowledge, nobody has yet done an imaging study to identify the neural module serving organizational activity in memory.  But the larger point is that the psychology has to be right or the neuroscience will be wrong: the constraints go down, not up.

Gauthier's proposal remains controversial (for a flavor of the debate, see Tarr & Cheng, 2003; McKone, Kanwisher, & Duchaine, 2007), and I suppose that it is possible that snowflake-recognition co-opts a brain module that originally evolved for face-recognition. But the larger point is that the accurate assignment of neural function depends not so much on the sensitivity of the magnet as on the nature of the task that the subject performs while in the machine. If you want to know what the FFA really does, the psychology has to be right; nothing about neuroscience qua neuroscience is going to resolve these issues.

 

What's Social about Social Neuroscience?6

The first glimmers of social neuroscience were pretty exclusively psychological in nature, having their origins in social psychophysiology and cognitive neuropsychology. And based on the research being presented at this conference, social neuroscience is still pretty psychological in nature. Maybe that's the way it has to be. Consider the three basic levels at which we can explain behavior. Psychologists explain the behavior of individual organisms in terms of their mental states. We explain someone's suicide in such terms as his belief that he is worthless, his feelings of depression, or his lack of desire to live. That's what psychologists do.

A biologist, by contrast, would explain the same behavior in terms of some biological mechanism -- a genetic disposition, perhaps, or an anomalous neurotransmitter.

 

 

And a sociologist or anthropologist would explain the same behavior in terms of some structure or process that resides outside the individual's mind or brain -- the hothouse atmosphere of a cult, for example, in the case of the mass suicide at Jonestown; or a culture dominated by Emperor-worship, in the case of Japanese kamikaze pilots in World War II.

 

Of course, from a strictly psychological point of view both the neurobiological and the sociocultural effects on behavior are mediated through psychology. Diminished serotonin levels, perhaps generated by a particular genetic polymorphism, make people feel depressed and think about suicide; and the cult of the Emperor, or membership in the People's Temple, might make people want to sacrifice themselves for the cause.



I take it that the goal of cognitive (and affective, and conative) neuroscience is to link the psychological level of analysis to the neurobiological level; similarly, it's one goal of social psychology to link the psychological and sociocultural levels of analysis. And social neuroscience can serve to link the sociocultural level of analysis through the psychological level all the way down to the neurobiological. But it seems to me that the neuroscientific approach has the potential to extend beyond individual psychology, to encompass other social sciences as well.

We see this trend looming on the horizon in such new fields as neuroeconomics (Glimcher, 2003) and neuroethics (Farah, 2005; M.S. Gazzaniga, 2005) -- not to mention inroads of neuroscience into political science (Westen, 2007; Wexler, 2006). Also on the side of applied social neuroscience are emerging fields like neuromarketing (McClure et al., 2004) and neurolaw (Rosen, 2007). There's even a neurophilosophy (P.S. Churchland, 1986) and a neurotheology (McKinney, 1994).

Now much of this work still looks a lot like psychology, focused as it is at the level of individual minds and brains. But it is possible that in the future we will begin to see work that is both neuroscientific and distinctively anthropological or sociological in nature. Of course, physical anthropology always implied an interest in neuroscience, and there are a number of anthropologists engaged in a kind of comparative neuroanatomy among primate species, as well as a paleoneurology focused on hominids. But that's pretty much pure evolutionary biology, and it might be really interesting to get the cultural anthropologists involved, looking at the neural underpinnings of culture (Rilling, 2007). Similarly, sociologists might get interested in looking at the neural underpinnings of processes, such as social identification (Berreby, 2005), that emerge at the level of the group, organization, and institution. If E.O. Wilson (1998) is right that certain aspects of group behavior have evolved through natural selection and are encoded in the genes, they should be encoded in the brain as well -- perhaps we should call it socioneurobiology.

The danger in all of this is reductionism -- not so much the everyday causal reductionism implied by the axiom that brain activity is the source of mind and behavior, but in particular the eliminative materialism, sometimes disguised as intertheoretic reductionism, which asserts that the language of psychology and the other social sciences is at best an obsolete folk-science, and at worst misleading, illegitimate, and outright false. In this view, psychological concepts -- belief, desire, feeling, and the like -- have the same ontological status as vital essence, the ether, and phlogiston -- which is to say they're nonexistent, and should be replaced by the concepts of neuroscience (P.M. Churchland, 1981, 1995; P.M. Churchland & Churchland, 1991, 1998; P.S. Churchland, 1986; Stich, 1983).

You get a sense of what eliminative reductionism is all about when Pat says to Paul, after a particularly hard day at the office,

 

 

"Paul, don't speak to me, my serotonin levels have hit bottom, my brain is awash in glucosteroids, my blood vessels are full of adrenaline, and if it weren't for my endogenous opiates I'd have driven the car into a tree on the way home. My dopamine levels need lifting. Pour me a Chardonnay, and I'll be down in a minute" (as quoted by MacFarquhar, 2007, p. 69),

It's funny, until you stop to reflect on the fact that these people are serious, and that their students have taken to talking like this too.7 

But really, when you step back, you realize that this is just an exercise in translation, not much different in principle from rendering English into French -- except it's not as effective. You'd have no idea what Pat was talking about if you didn't already know something about the correlation between serotonin and depression, between adrenaline and arousal, between endogenous opiates and pain relief, and between dopamine and reward. But is it really her serotonin levels that are low, or is it her norepinephrine levels -- and if it's serotonin, how does she know? Only by translating her feelings of depression into a language of presumed biochemical causation -- a language that is understood only by those, like Paul, who already have the secret decoder ring. And even then, the translation isn't very reliable. We know about adrenaline and arousal, but is Pat preparing for fight-or-flight (Cannon, 1932), or tend-and-befriend (S.E. Taylor, 2006)? Is she getting pain relief or positive pleasure from those endogenous opiates? And after going through the first five screens of a Google search, I still couldn't figure out whether Pat's glucosteroids were generating muscle activity, reducing bone inflammation, or increasing the pressure in her eyeballs.

And note that even Pat and Paul can't carry it off. What's all this about "talk" and "Chardonnay"? I suppose that what she really means to say is:

 



Your Broca's area should be soaking in inhibitory neurotransmitters for a while, so that my mirror neurons don't automatically emulate your articulatory gestures as you push air into your larynx, across your vocal cords, and into your mouth and nose (A.M. Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967; A. M. Liberman & Mattingly, 1985)

and

Mix me a 12-13% solution of alcohol in water, along with some glycerol and a little reducing sugar, plus some tartaric, acetic, malic, and lactic acids, with a pH level of about 3.25 (Orlic, Redzepovic, Jeromel, Herjavec, & Iacumin, 2007; Schreier, 1979).

What's missing here is any sense of meaning -- and, specifically, of the meaning of this social interaction. Why doesn't Pat pour her own drink? Why Chardonnay instead of Sauvignon Blanc -- or, for that matter, Two-Buck Chuck? For all her brain cares, she might just as well mainline ethanol in a bag of saline solution. And, for that matter, why is Pat talking to Paul at all? Why doesn't she just give him an injection of oxytocin?  But no: What she really wants is for her husband to care enough about her to fix her a drink -- not an East Coast martini but a varietal wine that almost defines California living -- and give her some space -- another stereotypically Californian request -- to wind down. That's what the social interaction is all about; and this is entirely missing from the eliminative materialist reduction. 

The problem is that you can't reduce the mental and the social to the neural without leaving something crucial out -- namely, the mental and the social. And when you leave out the mental and the social, you've just kissed psychology (and the rest of the social sciences) good-bye. That is because psychology isn't just positioned between the biological sciences and the social sciences. Psychology is both a biological science and a social science. That is part of its beauty, and part of its tension. Comte recognized this, even before psychology as we know it today was born -- and he liked phrenology, too, because of its emphasis on affective and social functions (Allport, 1954). 

All sciences want to provide objective explanations of the real world, but they differ in the kind of reality they're trying to explain, and the kind of knowledge they generate (J. R. Searle, 1992, 1995; see also Zerubavel, 1997). We usually think that there are only two modes of existence and two modes of knowledge: objective existence and objective truth, which are simply matters of brute fact; and subjective existence and subjective truth, which depend on the attitude or point of view of some observer. But as Searle has pointed out, there is no isomorphism between ontological and epistemological objectivity and subjectivity. That is, there are some things in the world that have an objective existence because they are intrinsic to nature; there are other things that exist objectively, even though they have been brought into existence by the mental processes of observers: they are the product of individual or collective intentionality.8 It is an objective fact that my cat Guinevere is a cat: it's intrinsic to her DNA. But it's also an objective fact that Guinevere is my pet, even though this fact is true only because I construe her that way. Similarly, to use two of Searle's examples, money is money and marriage is marriage only because some organization or institution says they are; but facts about these features of institutional reality are nonetheless epistemically objective. 

The natural sciences try to understand those intrinsic features of the world that exist independently of the observer. Neuroscience is like this: the brain exists, and the principles of neural depolarization and synaptic transmission are what they are, regardless of our beliefs and attitudes about them. And that's also true to some extent of psychology. We can say that the psychophysical laws, the principle of association by contingency, homeostatic regulation, the relation between depth of processing and recall, the structure of concepts as fuzzy sets (or whatever), and the availability heuristic for making judgments of frequency are all observer-independent facts about how the mind works. They're true for everybody, everywhere, throughout all time. That's one reason, I think, why cognitive psychologists tend to select their stimulus materials arbitrarily. If you're doing a standard verbal-learning experiment, for example, so long as you control for things like word-length and imagery value, it doesn't much matter which words you ask your subjects to memorize. But that's not all there is to psychology. Bartlett (1932) famously criticized the natural-science approach to mental life, as practiced by Fechner and Ebbinghaus, precisely because it ignored the person's "effort after meaning". 

Somewhere Paul Rozin has pointed out that psychology has been more interested in how people eat than in what people eat -- and that that's too bad (an example of the general argument can be found in Rozin, 1996). People don't want just to eat, in order to correct their blood sugar levels. Rather, people want to eat particular things, because they like them, or because eating them in certain contexts has a certain meaning for them. And they avoid eating other, perfectly good foods, either because they don't like them or because they're obeying institutional rules telling them what is permitted and what is forbidden. Not to press the point too much, but I'd suggest that psychology as a natural, biological science is interested in the how of mind and behavior, while psychology as a social science is interested in the what of mind and behavior -- what people actually think, and feel, and want (and do). That's especially true of social psychology, which is why just about the first thing that social psychologists did was to figure out how to construct attitude scales (Thurstone, 1931). 

The natural sciences try to understand those features of the world that are observer-independent, existing without regard to the beliefs, feelings, and desires of the observer -- in other words, a world in which there are no conscious agents, where mental activity has no effect on the way things are. But the social sciences seek to understand those aspects of reality that are observer-dependent -- because they are created either through the intentional processes of an individual or through the collective intentionality of some group, organization, or institution. Just as psychology as a social science tries to understand behavior in terms of the individual's subjective construction of reality, so the rest of the social sciences try to understand behavior in terms of social and institutional reality. This is the difference between the natural and the social sciences -- and it's a difference that is qualitative in nature. You can't make a natural science out of a social science without losing the subject matter of social science. 

When, in the movie Sideways (2004), Jack talks about drinking Merlot, that may result from the operation of some politeness module in the brain. But when Miles says "I am not drinking any... Merlot!", that has nothing to do with nutritional value, carbohydrate content, or even alcohol levels -- all the things that a natural science of oenology considers important. But it has a great deal to do with the meaning of Merlot for Miles, his identity as an oenophile, and his view of the kinds of people who like Merlot -- just the sorts of things that would interest a social scientist of wine.

To be sure, social reality is the product of individual minds (working together), and personal reality is the product of individual minds (working alone), and individual minds are the product of individual brains. But a science that ignores the subjectively real in favor of the objectively real, and that ignores observer-dependent facts in favor of observer-independent facts, leaves out the very things that make social science -- social science. So with our best theories and experimental methods in hand, and the biggest magnets money can buy, let's proceed to identify the neural systems involved in social behavior. It's a great project, and there are wonderful things to be learned. But let's not forget what the social sciences are all about: Let's not get lost in the soups and the sparks. 

 

Footnotes

1As a start, Higgins and Kruglanski (2000) have announced an interdisciplinary "motivational science", apparently modeled on cognitive science; "motivational neuroscience" cannot be far behind.  Return to text.

2I suspect that the proper relationship between psychology and neuroscience is a little like the situation in physics.  It may well be the case that string theory will produce the final Theory of Everything that unites the strong and weak forces with electromagnetism and gravity, and time with space, and complete the agenda of theoretical physics.  On the other hand, whether strings vibrate in 26 dimensions or only 10 makes no difference to Newton's falling apple, or the laws of planetary motion, or the expanding universe.  Momentum will still equal mass times velocity, and E = mc².  Return to text.

3My title refers to a line in "A Psalm of Life", by Henry Wadsworth Longfellow, which describes the impact that great men and women make on history -- maybe Goethe (Longfellow first recited the poem, prior to its publication, at the conclusion of a lecture on the poet); maybe Washington (at the time, Longfellow was living in the same house in Cambridge -- indeed, the very same rooms -- that Washington occupied when he took command of the Colonial Army during the Revolutionary War).  But his point applies to all of us, not just the great: we can all leave our footprints on the sands of time.  Phineas Gage was not a great man in the sense that Goethe and Washington were, but he has left his mark on history, as the issues raised by his case continue to concern us today -- especially as we consider how to pursue this new discipline of social neuroscience.  Return to text.

4As Macmillan (Malcolm Macmillan, 2000; M. Macmillan, 2000; Macmillan, 1986) argues, it is important not to exaggerate Gage's post-morbid difficulties.  He did return to work, in the livery and coach business if not on the railroad, including nearly seven years in Chile before ending up in San Francisco -- where at least he died of epilepsy and not in a barfight on the Barbary Coast!  It seems that a lot of what many authorities know of the Gage case is actually an imaginative reconstruction based on what is now known about the frontal lobes.  But all we really know about Gage himself is what we know from Harlow's first-hand account.  And because Harlow himself was a closeted phrenologist, acquainted with Sizer for years before either of them ever laid eyes on Gage, even Harlow's accounts may be to an unknown degree an imaginative construction. Return to text.

5In our early presentations of social-cognitive neuropsychology, Stan Klein and I unaccountably failed to cite Jackendoff's work.  Jackendoff himself was too polite to ever mention it, but I take this opportunity to correct the record. Return to text.

6With apologies to Rae Carlson (1984).  Return to text.

7Eliminative reductionism is not simply a project of some philosophical iconoclasts.  The tendency toward eliminativism can be detected in Goleman's assertion that neuroscientific findings enhance the ontological status of social intelligence, and in the idea, proposed by some advocates of neurolaw, that the legal concept of personal responsibility is obviated by the "finding" that behavior is caused by the brain.  Return to text.

8And just to complicate things further, there are things that have a subjective mode of existence but nonetheless are observer-independent.  To use Searle's example: if I'm in pain, it's true that I'm in pain regardless of what anyone else may think about it.   Return to text.

 

Responses to Commentaries

 

Several correspondents have raised a number of questions about the arguments in this paper.

 

Correspondent 1

Your argument about modularity is overstated. Certainly, authors have recognized the multiple mappings issue, and you should probably note that. At the same time, the one-to-many and many-to-one issues present quite difficult problems for the whole goal of localizing psychological function in the brain. So, I agree that you should tone down the comments about modularity if all you mean to say is that social neuroscience assumes a one-to-one mapping. If you mean to say something broader about the problem of localization (or whatever else you have in mind), then you need to clarify that section.

I don't think that the argument about modularity is overstated.  For the reasons I discuss, cognitive and affective and social neuroscience (hereafter, CASN) doesn't make any sense unless there are things like process-specific modules.  And as Gazzaniga (who should know, since he coined the term) makes clear, the whole point of cognitive neuroscience is (1) to find out how cognitive modules are implemented in brain tissue and (2) to use this information to constrain psychological theory.  That's why cognitive and affective and social neuroscientists do brain-imaging, and that's all brain-imaging can do: identify process-specific modules.

It's too much, in this paper, to discuss the issue of multiple mappings, but while some social neuroscientists might like the idea, and it might even be correct, it is absolutely fatal to the imaging enterprise  that dominates CASN.  If there's one module that seems to do lots of different things, then it's not a module.  That, or research hasn't yet identified precisely what that module does (I think that the fusiform area is a possible example of this, and so is the anterior cingulate).  If there are many modules that do the same thing, and one module that does lots of things, then all we'll get when we do brain-imaging is noise. 

So I'm sticking with my story.  CASN assumes one-to-one mapping.  I wouldn't be surprised to learn that reality is more complicated than that.  But for the present I suspect that all this talk about many-to-many mappings is just a post-hoc coverup for the truth, which is that the neuros don't have the foggiest idea where these modules are or what they do.  But if I say that, the paper's going to get stronger, not toned down, and everybody's going to get mad at me. 

 

I'm not really in a position to judge the extent to which social neuroscience writers have claimed that biological constraints are more potent than other kinds of constraints. At the very least, perhaps you should exclude citations of Cacioppo's work in making that point.

There is a serious misunderstanding here.  My arguments aren't about the comparative power of biological and environmental constraints on experience, thought, and action.  That's a false issue, as we all should have learned from the person-situation debate.  My argument deals with another kind of constraint entirely: constraints on theory, rather than constraints on behavior.  Many social neuroscientists, like many cognitive neuroscientists, explicitly or implicitly believe that biological data can constrain psychological theory, and that psychological theory must conform to biological data.  It doesn't and it shouldn't, and it's biological research that should conform to psychological theory.  And people should stop talking as if somehow neuroscience is going to save social psychology, or set it on the right track, or produce some big new advance in theory at the psychological level of analysis.  All I am concerned with, in this paper, are biological constraints on psychological theory, and not with biological constraints on behavior.  

 

A big part of the problem is that we simply do not understand brain function well enough to do a decent job of mapping psychological processes. To me, it's a problem of construct validation like any other. The nomological net surrounding the functions ascribed to different brain regions seems very poorly established. Note that this depends entirely on mapping what we know about psychology onto different brain regions (we don't worry about nomological nets in describing the function of the eye, for example). Nevertheless, if localized brain functions were more successfully validated, then noting the involvement of a particular brain region could surely be informative for explaining a psychological or behavioral process. I think some of the trouble could be avoided if neuroscience researchers simply were more circumspect in their interpretations of the data. On the one hand, we are told to have patience and that basic mapping is still far from fully realized, but, on the other hand, this does not seem to dissuade researchers from making rather strong claims about function and cause. 

For reasons I try to make clear in my paper, I don't think that "noting the involvement of a particular brain region" can be informative at all for psychological explanation.  But the comment gives the game away: if we don't understand brain function well enough to do a decent job of mapping psychological processes onto brain processes, then maybe we should stop trying to do it until we understand the brain better. 

 

Some social neuroscientists have used the language of constraint going in both directions, so that biology and psychology constrain each other. 

This is precisely the misunderstanding I referred to earlier.  Based on Bandura's doctrine of reciprocal determinism, it's possible to talk about how biological and environmental and behavioral factors influence each other.  Fine, that's a given, and almost a truism by now.  But that's not what I'm talking about.  The rhetoric of constraint that I've identified, and seek to critique, and consign to the dustbin of history, is the idea that biological findings constrain psychological theory.  They don't, and they can't, and they never have, and they never will.

What you call the rhetoric of constraint is not the only justification for a social neuroscience approach. 

This is quite right.  What I have called the rhetoric of constraint is not the only justification for social neuroscience.  But it is part of the rhetoric of social neuroscience, and it needs to be identified, critiqued, and abandoned.  Social neuroscience is completely justified by the project of identifying the neural correlates of social-psychological constructs -- or, if you will, the neural substrates of social interaction.  But, for reasons that I've outlined in the paper, that project itself rests on the doctrine of modularity.  Without something very much like the doctrine of modularity, CASN makes absolutely no sense at all.

 

Social neuroscience is not equivalent to fMRI. 

Some social neuroscientists include social genomics, social psychophysiology, psychoendocrinology, and psychoimmunology, among a host of other wet things, under the umbrella of social neuroscience.  I think that's overbroad, but that's a debate we could have over drinks.  But if you think about it, all of these projects are justified by the same assumption of one-to-one mapping that lies at the heart of the doctrine of modularity.  If, for example, changes in heart rate don't indicate changes in anxiety levels, or attentional focusing, or whatever, then why are we recording EKG?  If there isn't a "gene for altruism", then why are we looking for genetic correlates of individual differences in altruism? 

 

You write that social neuroscience doesn't need to constrain theories of social psychology in order to be valuable--that localizing the neural substrates of social psychological processes is important. How so? If it tells us nothing about the social psychology, and if the interpretation is entirely dependent on the psychology, then what is it good for?

Good question.  I take it that ever since the 19th-century physiological psychologists, and especially ever since James put the brain in the second chapter of the Principles, psychologists have been concerned with what the Chicago functionalists, like Dewey and Angell, would call "mind in body": the relation between what goes on in the mind and what goes on in the brain.  So looking at the neural substrates of mind and behavior is a perfectly legitimate enterprise for psychologists who choose to do so.  It's an option, though not an obligation, and it makes sense to choose that option once we have a good description of mental life at the psychological level of analysis, to see how exactly the brain does it.  But it doesn't make any sense to choose that option, to look at exactly how the brain does it, unless and until we know what the "it" is.  That's why psychology constrains neuroscience, not the other way around.

A somewhat related question concerns the many-to-many problem. I understand that you are making an abstract argument that neuro-data can't constrain psychological theories, per se. That is, no matter how well-understood the neuro-modules are, it cannot constrain psychological theory. In principle, I'm having a hard time following this. If we knew that brain region X corresponded perfectly to evaluative responding and we had opposing theories that did or did not implicate evaluative responding, then why wouldn't the involvement of brain region X be informative for discriminating between the two theories? 

It might, if we operated at that level of specificity.  But as Russ Poldrack has been at pains to argue, reasoning backward from brain to mind -- to say that "The amygdala is active here, so evaluation must be going on" -- requires a level of specificity that we just don't have.  

Consider the premise:

If the person is evaluating, then the amygdala is active.

Assuming for the sake of argument that the premise is true, what follows from it?  

First, we can affirm the antecedent.  By modus ponens,

John is evaluating something.

Therefore his amygdala is active.

And we can deny the consequent.  By modus tollens,

John's amygdala is not active.

Therefore he's not evaluating something.

But you can't affirm the consequent.

John's amygdala is active.

Therefore he's evaluating something.

This is because John's amygdala might be active for some other reason.

Nor, for that matter, can you deny the antecedent:

John is not evaluating something.

Therefore his amygdala is not active.

Because, again, his amygdala might be active for some other reason.

Those are simple logical errors, affirming the consequent and denying the antecedent.  And for all the talk about conditional reasoning since Wason and Johnson-Laird (1970), even psychologists seem to have difficulty with it -- including those who want to infer mental states from physiological data.  You can only affirm the consequent if the relation between antecedent and consequent is 1:1 -- if and only if.  But if we're going to have a social neuroscience based on "many to many" mappings, we don't have that 1:1 relationship.  See more on this below. 
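
(To put the same point in probabilistic rather than purely logical terms -- this is a minimal sketch along the lines of Poldrack's Bayesian treatment of reverse inference, with the numbers invented purely for illustration -- consider Bayes' rule:

P(evaluating | amygdala active) = P(amygdala active | evaluating) × P(evaluating) / P(amygdala active)

Even if the premise holds perfectly, so that P(amygdala active | evaluating) = 1, the reverse inference can still be weak. Suppose subjects evaluate on 10% of trials, and the amygdala is also active, for other reasons, on half of the remaining trials. Then P(amygdala active) = .10 + (.90 × .50) = .55, and P(evaluating | amygdala active) = .10/.55, or about .18. The inference from activation back to mental state is only as strong as the region is selective -- which is just the 1:1 mapping again, in quantitative dress.)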

 

If I understand you correctly, the extent of the many-to-many issue is surely problematic for understanding neuro-data. However, that issue is entirely orthogonal to the question of whether neuro-data can constrain psychological theory. Is that a correct understanding?

Yes, I think that's right.  The mapping issue has to do with identifying neural substrates once the situation is clear at the psychological level of analysis.  Consider the possible outcomes of an fMRI study.

One-to-One Mapping: The anterior cingulate gyrus lights up whenever a subject experiences conflict.  So we can say something like "the ACG is a module that mediates conflict-resolution".  Or detects conflict that is resolved by some other module.  Or responds to conflict that is detected by another module, which is in turn resolved by another module.  Or whatever.  The precision of our mapping depends entirely on how good our experiment is -- exactly what we've controlled for in our imaging experiment.  It's the experimental design that tells us what the module does, not the pixel-illumination.

One-to-Many Mapping: The ACG lights up whenever a subject experiences conflict, but it also lights up whenever the subject engages in controlled information-processing.  So we've got one module that performs two different tasks.  That's a funny module, so maybe we should do our experiments a little more carefully.  Maybe when we get the psychology right we'll find out, as I suspect is the case for the FFA, that it performs many functions, like face-recognition and bird-classification, that are all a subset of One Bigger Function, like classification at the subordinate level.

Many-to-One Mapping: The ACG lights up whenever a subject experiences conflict, but so does Brodmann Area 10.  So now we've got two different modules that perform the same function, and we need to determine the relationship between them.  Maybe Area 10 is redundant with Area 24.  Maybe the reverse.  Maybe one is more sensitive to conflict than the other.  Maybe one processes conflict about emotional infidelity, the other one processes conflict about sexual infidelity.  We don't know the details until we've analyzed the task correctly.  But if all Investigator X asks about is sexual infidelity, and if all Investigator Y asks about is emotional infidelity, we'll never know (picking on them just for the sake of the example).

Many-to-Many Mapping:  The ACG lights up whenever a subject experiences conflict, and so does Brodmann 10, but both these areas also light up whenever the subject engages in controlled information processing.  So now we've got two (or many) modules, each of which performs two (or many) tasks.  This is The Neuroscientist's Nightmare because -- and this is critical -- the whole point of doing neuroscience the way we do it is to identify specific brain areas that perform specific functions.  That's what the Doctrine of Modularity is all about.  That's what brain-imaging does.  And if it proves to be the case that the structure-to-function mappings revealed by fMRI are M-t-M, as opposed to O-t-O, the DoM goes down the toilet. If a researcher is going to fall back on M-t-M, then -- to use a phrase from Vietnam -- he has destroyed the village in order to save it.  But it can't happen this way, because brain-imaging as a technology is based on the assumption of O-t-O mappings.  It can't reveal M-t-M mappings, because to do so you'd have to survey every conceivable Region of Interest with every conceivable task, and nobody's going to do that.  Life's too short.

By the way, precisely the same argument applies to human genomics, and the search for "genes for" particular psychological traits or functions.  But life's too short to take on genomics as well as neuroscience.

Now, bringing us full circle, suppose that someone wanted to say "Well, after all this work, it turns out that the mapping of structure onto function is M-t-M, which contradicts the DoM, and so it's true, after all, that neuroscience can constrain psychological theory.  It just did, by showing that the DoM was wrong."  But notice that, insofar as the DoM talks about a fixed neural architecture, it's not functioning as a psychological theory.  It's functioning as a biological theory (all the other elements of the DoM are purely psychological in nature, until we get to the bottom line, the assumption of a fixed neural architecture).  So neuroscientific evidence is critical for a neuroscientific theory, about fixed neural architecture, but not for psychological theory (such as Fodor's assumption that modules process inputs but not outputs, or that they're cognitively impenetrable).   

 

Correspondent 2

I thoroughly enjoyed your Social Cognition paper. You referred to a hostile reaction to the paper from the Social Neuro Types. That is plausible, but the paper reads to me like a prudent caution; not like some kind of crazy rant....  Like you, I think the psychological neuroscience frontier is full of self-aggrandizing arrivistes and other parasites, as well as of Type I errors and over-stated claims. Social Neuroscience is, of course, the worst offender (oh, well, maybe I should save that credit for Neuro-Marketing). Anyway, well, whatever ... but, neuroscience is still an essential scientific enterprise and I still read the neuroscience of decision literature as a hobby. And, maybe I don't run into "constraints," but I get the occasional idea to re-think my basic assumptions about how decisions are computed.

Thanks. I gave the talk on which the published paper is based before the Vul paper came out (I was a reviewer, I'm sure because of my great, and highly vaunted, statistical expertise), and I wanted to make a more positive contribution, by putting some perspective on the enterprise. Even more so when the opportunity to publish came about, after Vul started to circulate. Recently, I've encountered a number of people, trained as psychologists and housed in psychology departments, who identify themselves as neuroscientists.  That they seemed embarrassed to identify themselves as psychologists, much less social psychologists, was deeply saddening.  

Somewhere Mike Gazzaniga has written that psychology is dead, and the only people who don't know it are psychologists.  Of course, he's a former president of the Association for Psychological Science, and is first author of a leading introductory psychology text, so I don't know what to think.  Anyway, if he's right, I think it's a case of disciplinary suicide, and I'd like to do my part to prevent it.

 

Here are a couple of critical comments on your review:

1. On p. 767, you make several statements that out-of-context seem wrong, because they are over-stated: "Without something like the doctrine of modularity, cognitive neuroscience does not make any sense" (etc.). <= Why of course it's still sensible and essential scientifically. It's just that "the logic" of several current research programs is undermined. But, we'd want to know what the "machinery" (the "implementation level") for thought and for linking stimuli to behavior looks like in any case -- whether there is a "modular theme" to its organization or whether the overall system looks like something else (an undifferentiated computing mass?).

Maybe I put this too bluntly, but I think I'm right on this. Let me see if I can do better with some extra space.

First, I need to distinguish between neuroscience, as a biological science, and cognitive (or social, or affective, etc.) neuroscience, which attempts to link brain function to cognitive function. You can have a perfectly good neuroscience, at least up to a point, without the doctrine of modularity. For example, you can detail the anatomy of the nervous system, and the various parts of the brain. You can describe the structure of the neuron. You can figure out how neurons function, and get a description of action potentials, and the all-or-none law, and map the speed with which neural impulses travel down the axon. You can discover the synapse, and figure out that the mechanism of neural transmission involves soup, not sparks. None of this neuroanatomy or neurophysiology depends on the doctrine of modularity.

But then, still just being a neuroscientist, you realize that the nervous system isn't just one big mass of tissue. There are identifiably different parts, and not all neurons look alike. There's more than one neurotransmitter. There are those different gyri in the cerebral cortex, and they're not random folds, which is all you'd need if the only function of the folds was to fit the cortex inside the skull -- everybody's got the same ones. And then Brodmann finds that there are different kinds of neurons, and that they're clustered together in various areas of the cortex. Which raises the question: why? And it's not too hard to generate the hypothesis that these anatomical and physiological differences have functional significance. That there are different neurotransmitters, and different neurons, and different parts of the brain and even different parts of the cerebral cortex that perform different functions. So you can do molecular and cellular neuroscience, and maybe even systems neuroscience, without the doctrine of modularity. But you can't do behavioral (cognitive, social, etc.) neuroscience without thinking about functional specialization (I recognize that the doctrine of modularity is complex, but in this paper I'm only concerned with its proposition concerning fixed neural architectures associated with each mental module).

Unless you think of the brain as a general-purpose information-processing system -- a bunch of tissue that just associates stimuli with responses, or makes connections between inputs and outputs (if I live long enough, I'd like someday to argue that newfangled connectionism is little more than a sexed-up version of stimulus-response behaviorism). If the brain is a general-purpose information processor, then it doesn't have modules -- any more than my PC does. And this is exactly how we thought about the brain -- or, at least, about the cerebral cortex -- until the doctrine of modularity came along. Posner makes this point nicely in his history of cognitive neuroscience, and I think he's right. I took my introductory psychology course in 1967, and the textbook was by Morgan and King, two physiological psychologists, and that was a leading text at the time.  I looked back to check my memories, and their discussion of brain function was extremely impoverished. Their account of functional specialization was limited to the primary auditory, visual, somatosensory, and motor cortices, plus Broca's and Wernicke's "speech" areas, plus a skeptical nod at prefrontal cortex as possibly critical for "intelligence". I suspect that other texts, like Hilgard's, took the same approach. The vast bulk of the brain was characterized as "association cortex" -- that is, its job was to form and mediate associations. And since, in the heyday of behaviorism, that's all psychologists thought that perceiving, learning, remembering, and thinking were, the question of specialized modules never really came up. At least, so far as cerebral cortex was concerned. There was a broad recognition that subcortical structures were functionally specialized -- the reticular formation, for example, or the hypothalamus. But not much else, especially for cerebral cortex. All you needed was a general-purpose machine that would record stimulus-response associations. It's this idea of the brain as a general-purpose S-R machine that underlies Lashley's Law of Mass Action, and even Pribram's notion of the brain as a holographic structure. There's no notion of functional specialization in either of these ideas: the functional specialization of the primary sensory and motor areas is, in fact, the exception that proves the rule.

In my view, two events happened to change all this. First was H.M., who gave us the idea that the hippocampus was somehow critical for memory. Second was Chomskian linguistics, which talked about things like a Language Acquisition Device, which needed to be represented somehow in brain tissue, and which recognized the implications of Broca's and Wernicke's areas, and which led directly to Fodor and his Doctrine of Modularity. So now people had the idea that there might be specialized mental modules, each associated with a fixed neural architecture, and they started looking for them in brain-damaged patients. Just like Broca, and Wernicke, and Milner, whenever they discovered a patient with focal brain damage who had a specific cognitive deficit, they inferred that the damaged area was specialized for the lost function. And so cognitive neuropsychology was born. The fact that Fodor articulated a particular version of modularity only in 1983 doesn't matter for the story. What matters is that people got interested in the brain because they grasped the idea of functional specialization.

And it's only with the idea of functional specialization, and some version of the Doctrine of Modularity, that cognitive neuroscience makes any sense at all. You wouldn't look for specific deficits in brain-damaged patients unless you thought that the brain damage would impair some specific function while sparing others. You wouldn't do brain-imaging unless you expected some particular part of the brain to light up when the subject performs some particular task, but to remain dim when the same subject performs some other task. If the brain were just a general information-processing machine, or a stimulus-response machine, or a connection machine, or even a holographic machine, then the whole brain would light up every time subjects did anything. The brain might light up more brightly when the subject performed a difficult task (or an easy one, depending on your reasoning), but the whole brain would light up anytime the subject did anything.

Now, imagine that you're a neuroscientist, and you go home at night and you tell your beloved spouse what you discovered that day: "Honey, it's the most amazing thing -- when subjects perform a task, their brains light up!" Or, to make the point even more dramatically, suppose that you wrote an NSF grant, asking to spend $250,000 on a used Varian 4T machine (as of January 13, 2011, there's one available on the Internet at that price), and hire a staff of technicians, including a neurologist to screen the subjects, to test the hypothesis that the brain becomes active when subjects are engaged in cognitive activity. You wouldn't get your grant, and your spouse would try to have you institutionalized. The only reason that people are interested in cognitive neuroscience at all, either as funders or consumers of research, is the hypothesis that different parts of the brain are specialized for different functions. Without the Doctrine of Modularity, all we'd need is molecular, cellular, and systems neuroscience.

 

Okay, I see what you're claiming more clearly now, of course. I think you're saying: (1) We are beyond "undifferentiated," and "equi-potentiality" now. (And, I think we were in 1970, too, when the Wooldridge "Machinery of the Brain" book was assigned in "Psychophysiology" courses.) (2) So, cognitive neuroscience has to look for localization -- and we see this in practice when they identify loci or "regions of interest" in activation and extirpation studies; even when they, as they usually do, add the, "it's local, but not in one spot because it's a circuit" qualification. (3) Therefore, the modern enterprise doesn't make sense without the background assumption of "modularity." Okay, I agree, with that.

Oh, I had that Wooldridge book too, it was great, but had forgotten how he treated functional specialization. I'll have to look at my copy again.

 

2. You do debunk several specific "Neuro (implementational level) Constrains Psychology (computational level)" claims. But, I was hoping for something more principled. In general, are there any forms of logic whereby a neural result COULD constrain a psychological-process claim? What are the best candidates for such a constraint in the form of "arguments," and why are they in principle defective? Just because 4-5 past efforts are failures does not mean there are not valid constraint arguments that COULD be made as technology improves. But, I'm not sophisticated enough to tell you about a strong, almost valid example or to describe the form of an argument that could be made (but has not yet) that would be valid.

OK, I didn't make a principled argument, but I really do believe that, in principle, neuroscientific findings can't constrain psychological theory. It's not just that no constraints emerged from H.M. or the mental imagery debate. Here I'm echoing the empirical conclusions of Max Coltheart, who issued a challenge to his fellow cognitive neuroscientists a couple of years ago: John Jonides claimed that he had "So many examples, so little space", but in fact none of his examples met Coltheart's test. And I agree that my reliance on empirical failures isn't principled, but I meant them to illustrate the principle that it can't happen, and here I'm greatly influenced by the arguments of Gary Hatfield, a philosopher at Penn who trained in perception at Wisconsin with Bill Epstein. The argument is that, in principle, it can't be the case that neuroscientific findings constrain psychological theory, and that in fact psychological theory constrains the interpretation of neuroscientific findings. I think that there is a kind of default assumption that somehow biology trumps psychology -- and it's an assumption that shouldn't be made by default, because there's no good example of it.

But here I've got to make clear the terms of the argument (which are basically Coltheart's).

First, the psychological theory has to be stated at the psychological level of analysis -- that is, it has to be about mental structures and processes. Here are some examples of what I have in mind:

The behavior of organisms in classical conditioning experiments is based on associations based on contingency rather than spatio-temporal contiguity.
Perceivers employ a combination of ocular, optical, and motion cues to perceive depth or distance.
Elaborative and organizational processes are critical to the formation of long-term memories.
Knowledge can be represented in analog as well as symbolic form.
Categories are represented mentally as prototypes rather than as lists of defining features.
Judgments are carried out by means of fast and frugal heuristics, rather than normatively "rational" but cognitively demanding algorithms.
The trajectory of cognitive development is characterized by a smooth ogival curve rather than abrupt shifts.

Second, the neuroscientific findings have to be expressed in biological, not behavioral terms. Here are some examples of what I have in mind:

The hippocampus, rather than the amygdala, becomes active when subjects encode a new episode in memory.
The fusiform gyrus, rather than Wernicke's area, becomes active when subjects recognize a familiar face.
Area V8, rather than V5, in the occipital lobe becomes active when subjects view a colored surface.
The anterior cingulate gyrus, rather than the superior frontal gyrus, becomes active when subjects experience conflict.
The left prefrontal cortex becomes active when subjects retrieve semantic memories, while the right prefrontal cortex becomes active when subjects retrieve episodic memories.

The argument is that, in principle, information of the second sort can't be used to distinguish between plausible theoretical alternatives of the first sort. I believe that the two cases I discuss in the paper -- the role of the hippocampus in memory and the existence of analog representations of knowledge -- are the two best examples of the rhetoric of constraint offered so far, and neither of them works. First, the fact that H.M. became amnesic after losing his hippocampus doesn't tell us anything about what the critical features of memory function are. Now, it might be that it turns out that, as Cohen and Eichenbaum have argued, hippocampal patients have difficulties with relational processing, rather than explicit memory per se. But even if so, it's behavioral, not neuroscientific, data that tells us this -- that is, what matters is how hippocampal patients perform on performance tests of memory, not what lights up in the MRI. Psychological theories are obviously constrained by behavioral data -- that's how we test them in the first place. The question is whether they can be constrained by biological data concerning brain function. And in the hippocampal case, in any event, our understanding of hippocampal function has evolved based on the results of behavioral testing, not on the availability of ever-more-sensitive MRIs. Second, it was only Zenon Pylyshyn who wasn't persuaded by the behavioral evidence that knowledge was represented in both symbolic and analog form (ably reviewed by Ron Finke); and when Martha Farah and Steve Kosslyn produced their brain-images showing that the same parts of the visual system lit up when subjects imagined objects as when they perceived them, he still wasn't convinced. Now, maybe Zenon is just being naughty, but it appears that, with respect to an issue that is about as central to cognitive psychology and cognitive science as you can get -- how knowledge is represented in the mind -- the neuroscientific evidence was neither necessary nor sufficient to alter our understanding of the nature of mental representations.

Obviously, neuroscientific evidence is crucial if we're going to understand how mental processes are implemented in the brain, and I agree that this is a worthwhile project for psychologists to pursue. But I think it's the case in principle that such a search depends on the prior existence of a valid theory of mental structure or function stated at the psychological level of analysis. I don't have the philosophical training to be able to make this argument in principle, which is why I rely on people like Hatfield to do it for me. But his argument makes sense, and as a psychologist I think that the empirical cases are pretty convincing.

 

As for where to look, isn't modern vision research the example of what the future of Cognitive Psychology will look like? And, aren't there valid "constraint arguments" in that area -- e.g., doesn't what we know about rods and cones and early vision neural processing (the ganglia behind the retina) constrain color vision theories? Doesn't what we know about feature detectors in the visual cortex, constrain process models for object detection? Isn't that what a more mature field of Cognitive Neuroscience will look like? (Not that there aren't many unsolved problems [and super-interesting problems] at the frontiers of vision research.)

I've thought about this, and I wish I knew more about vision science. I share your intuition that, if there is going to be a good example of constraint, it's likely to come from sensory science of some sort, in part because sensory science is, by definition, tied closely to sensory physiology. But we'd still have to identify strictly psychological theories of sensation and perception, and then see if they're constrained by biological evidence. Palmer's book goes from photons to phenomenology, and I can't say that I've understood (or even read!) all of it, but I don't recall any instance where data from the biological level constrains theory at the psychological level.

Let me talk about three possible counterexamples that I've thought a little about.

First, there's Muller's Doctrine of Specific Nerve Energies, and Helmholtz's extension, the Doctrine of Specific Fiber Energies. Muller thought that each modality of sensation was associated with a particular kind of neural impulse, and Helmholtz thought that the same was true of each quality within a modality. But Lord Adrian showed that all nerve energies are the same, regardless of the nerve that's carrying them -- it's all electrical, it's the same electrical impulse no matter what nerve is carrying it, and what matters is where the impulse goes. So the two doctrines as originally stated are wrong. But they're not exactly psychological theories (Muller and Helmholtz thought of themselves as physiologists first and foremost). They're closer to biological theories of how the nervous system works, or maybe theories of the mind-brain interface. But I'm not sure that they're purely psychological theories.

Now, you could turn around and say: Aristotle had a theory that there are five modalities of sensation. This is a psychological theory of sensory experience. But from Sherrington and others we know that there are more than that, and we know that there are more than that because of neuroscientific evidence. So here's an example of neuroscientific evidence constraining psychological theory. We now know that the modality of sensation is determined by where the neural impulses, generated by the sensory receptors, end up in the brain. The number of sensory modalities, then, is given by the number of distinct sensory projection areas in the brain, and that neuroscientific evidence is decisive. Touché. On the other hand, I'd argue that even in this case the neuroscientific evidence isn't what's decisive, because the strictly neuroscientific evidence isn't sufficient to tell us what the psychological experience will be. We only know that there's an auditory projection area because people with damage in that area can't hear anything, and people who are stimulated in that area hear something, and we only know these things because they tell us -- and self-reports are behavioral, not neuroscientific, data (not to mention that they're precisely the kind of "old-fashioned" data that many neuroscientists sneer at). I'm willing to be corrected here, but I don't even think that systems neuroscience connects the cochlea to the superior temporal gyrus. Rather, I suspect that we look for such a connection because we know that this part of cortex is important for hearing. I admit here I'm on shaky ground, and I'm resisting the temptation to say that the count of sensory modalities isn't really a psychological question. But if the only counterexample is one that lies so close to the periphery, so close to physiology and even physics, I'd argue that neuroscientific evidence doesn't constrain psychological theory very much, and that neuroscientists shouldn't behave as if they're going to fix psychology, and that psychologists shouldn't be waiting for the biochemist to come save them -- attitudes that I detect when the cognitive and social neuroscientists invoke the rhetoric of constraint.

I've got much the same reaction to feature detectors. In the first place, Hubel and Wiesel were trying to answer questions about the neural basis of vision -- which isn't really the kind of psychological theory I think is at stake. True, the fact that they identified some feature detectors lends support to a theory of perception involving analysis then synthesis, a la the pandemonium model. I've got colleagues here who tell me that the H-W type experiments give a somewhat misleading picture of even early vision. And I'd also point out that H&W identified their feature detectors based on behavioral evidence, just as our brain-imaging colleagues identify their brain modules. Along these lines, I'd be interested in whether brain-imaging evidence can distinguish between something like Biederman's theory of recognition by components, and Tarr's theory of viewpoint-based recognition -- which is about as central to the theory of perception as you can get. I admit that I haven't kept up with this debate since I left Yale, but my impression is that the battle is being fought with behavioral, not neuroscientific, evidence.

A third possible counterexample is more to my liking, but still pretty close to the periphery: color vision. The psychological question is, how do we experience the visual world in color instead of greyscale? Helmholtz offered a psychological theory, his trichromatic theory, that we experience all the different colors by virtue of mixing three primary colors, red, green, and blue. It's a really good theory, and something like it works in other domains, like television. Then, along comes Hering with an alternative psychological theory, opponent-process theory. We all now accept the Hurvich-Jameson version of opponent-process theory as the correct understanding of color vision, but not by virtue of any neuroscientific evidence. The physics, after all, supported Helmholtz. What decided the case was behavioral evidence. 

First, that subjects saw yellow as a pure color, not as a mix of red and green; that meant, at the very least, that color vision was produced by the mixture of four, not three primary colors. 

Second, the evidence from color-blindness -- which, under the terms of my argument, counts as behavioral, not biological, evidence (because it comes from self-reports) -- suggested that the basic color-processors were linked in a particular way. 

And third, most decisively, the evidence of negative afterimages confirmed this suggestion and added important mechanistic details. 

All of this evidence was behavioral in nature. None of it was physiological. But, and this is the important point, it was the availability of the correct psychological theory, of opponent processes, that permitted people like Russ DeValois to go on and search for the neural basis of color vision. Once we understood that color vision was organized by opponent processes, we could go into the nervous system and find them. Without the correct psychological theory, we'd still be looking for three color processes, and we'd miss the opponent-process mechanisms entirely; or perhaps we'd be looking for four types of cones, and beating our heads against the wall because the histological evidence yields only three.
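
(For readers who want the mechanics, here is the standard textbook formulation of the opponent channels -- a sketch of the Hurvich-Jameson scheme, not anything specific to DeValois's recordings -- in which the three cone types feed three opponent channels, roughly:

red-green channel = L - M
blue-yellow channel = S - (L + M)
luminance channel = L + M

where L, M, and S are the responses of the long-, middle-, and short-wavelength cones. Yellow looks like a pure color because it corresponds to the neutral point of the red-green channel, not to a mixture of red and green sensations; and fatiguing one pole of a channel produces the rebound we experience as a negative afterimage.)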

So here, just about as close to the periphery as we can get, it appears that psychological theory constrains the interpretation of neuroscientific data. If neuroscientific data doesn't constrain psychological theory here, at the lowest level of mental life, it seems unlikely that there will be much constraint at the higher levels where cognitive and social psychologists prefer to operate.

 

Well, hmmmm ... that was much more sophisticated than what you were able to write in the length-constrained Social Cognition article; and more comprehensive by far than my understanding of these domains. But, I'm still not completely convinced "in principle," though my confidence in my belief is reduced. Here are a couple of conjectures. First, suppose we found one area of the brain that "computed" utilities (i.e., we focus an image-scanner on that area and find that there is a "multiplicative" activation given probability of reward and amount of reward inputs (measured on the stimulus end of the system <= outside the head, not inside)). That would "constrain" me to believe in a utility-type evaluation as part of the decision process. It would not be absolutely logical -- as maybe those areas are epi-phenomenal and not integral to the stimulus-decision-response circuit -- but I would stipulate the coincidence ("they're epi-phenomenal") and proceed to hypothesize utility calculations underlying decisions. Is that a "neural constrains psychological-behavioral" argument, or does the fact that we've already observed and hypothesized the multiplying relationship mean it doesn't count? Second, an analogy argument: I'm trying to understand how a car accelerates and I discover a behavioral relationship between pressing the accelerator and increased engine turnover and increased speed. Second, I discover the accelerator is only connected to the gasoline supply valve. So, then I constrain all theories of acceleration to involve gas supply. Is that a "mechanical constrains functional" argument? If so, won't we have some of those for the brain and behavior?

Well, I take your point about the need for a principled argument, and perhaps I should have done more than simply cite Hatfield. But I was more interested in disarming the facile hand-waves toward H.M. and the imagery debate, which are the examples that the rhetoricians of constraint like to use.

Anyway, here's how Hatfield sets out the principled argument (in his 2000 Phil Sci paper).

1. The operations of the brain can be partitioned into various subsystems, study of which constitutes the study of brain function.

2. Some of the functions realized by the brain are mental functions.

3. Psychology is the science that studies mental functions directly.

4. Hence, psychology is the primary discipline covering a major subset of brain functions.

5. Although it may be possible on occasion to reason from structure to function, in general knowledge or conjecture about function guides investigation of structure.

6. And so psychology leads the way in brain science.

Point #5 depends on there being a principled argument that you can't reason from structure to function. Hatfield doesn't really supply one, though it seems right to me. Kosslyn and Koenig, in Wet Mind, drew an architectural analogy, that you couldn't understand the uses of a building without understanding the materials with which it is made. But that's just not so. You can build a house made of straw, or of sticks, or of bricks, and they all function as houses. But just because the Big Bad Wolf can blow down a house made of straw or sticks doesn't mean that it is any less a house than one made of bricks.

As for your conjecture, I see your point. Somewhere Chomsky has written that if we think that recursion is the defining feature of language, and we discover that the brain can't handle recursion, then we need to get another theory of language. My response would be different: if we think that language involves recursion, and we can't figure out how the brain does it, then we look at the brain again. Anyway, we know that the brain can do recursion because we do recursion. And we do it using our brains. Maybe language doesn't involve recursion, maybe it involves combination (which is what I understand Len Talmy proposes as an alternative), but the brain isn't going to decide this issue, because it's obvious that the brain can do combinations as well as recursions.  Chomsky's question is whether language involves recursion as opposed to combination, and looking at the brain can't settle that -- unless, I suppose, we find that the neurons in Wernicke's area can do combinations but not recursions, but I suspect that's unlikely -- not least because, if Joe Bogen is right, we don't really know where Wernicke's area is!

But this isn't a principled response. I guess that I just have to fall back on counterexamples, like the hippocampus, and mental imagery, and color vision. These are the examples that are most frequently invoked by the proponents of the rhetoric of constraint. The idea that biological data can constrain psychological theory is proposed all too easily by all too many neuroscientists, and accepted all too easily by many psychologists, and I just want people to think about it a little more.

One last thought about structure and function. I suspect that structure does constrain function within a level of analysis. So, for example, the neuron is built to conduct electricity, not the fluids that Descartes thought were important. And if you look carefully at a neuron, you can see that it's not built to carry fluids. Once you know about electricity, you can see that it's built to carry an electrical charge. So, if you say that the function of a nerve is to carry a fluid from the receptor to the brain and then back to the muscles, a little structural knowledge shows you that's wrong. But I doubt very much that structure constrains function across levels of analysis. Within biology, we've known since Harvey that the function of the heart is to pump blood -- and you can see that it's built to do that (though Galen didn't notice it). But for the Aztecs, the heart was something that you sacrificed to thank the gods for life. There's nothing about the biological structure of the heart that constrains that sociocultural function. The liver, or the brain, would have done just as well.

 

But, all that said, those are hypothetical arguments. And certainly, today, I believe that the behavioral analysis is primary, a psychological-process account of the behavior is second, and the neural account is third -- in the quest to understand, explain, and control behavior. Mostly what I don't like about the huge shift of funds into neuroscience is that it makes behavioral research conservative. Now, with neuroscientists influencing what behavioralists and psychologists study, there is a regressive increase in research on known and more-or-less-already-understood behavioral phenomena, so that they are ready for the neural analysis. So, instead of further expansion in behavioral studies of working-memory function, we have a shift toward refining and replicating old results, so that they are stable and over-studied enough to be ready for a neural analysis. It just makes for dull behavioral work, and penalizes studies that expand behavioral knowledge -- they aren't the ones the neural people want to study.

Well, I agree entirely. I guess what gets me riled up is when psychologists go through one of our periodic existential crises, where we start thinking that we're not a legitimate science -- as when Gazzaniga says that "psychology is dead, and the only people who don't know it are psychologists", or when Steve Hyman or Tom Insel invokes some physiological finding to show that psychology is irrelevant. And so we instinctively turn to biology -- whether it's neuroscience or the theory of evolution -- to solve all of our problems. In fact, I don't think we have all that many problems, and I'm sure that the ones we have won't be solved by becoming biologists. But meanwhile, the whole field, and its funding structure, and its curricular structure, gets distorted.

 

Correspondent 3

I see you cite the debate about imagery in mental rotation as not having been resolved by neuroscience methods. This is the one case I was able to find that I thought could be judged as resolved to almost everyone's satisfaction. Perhaps Pylyshyn is a holdout, but isolated holdouts are not enough to conclude that a debate is unresolved. (Else we would have to say that the debate about whether HIV causes AIDS remains unresolved.) I leaned on the following article:

 

Corballis, M. C. (1997). Mental rotation and the right hemisphere. Brain and Language, 57, 100-121.

Corballis's article starts with this sentence: "Mental rotation may be considered a prototypical example of a higher-order transformational process that is nonsymbolic and analog as opposed to propositional." I couldn't find a subsequent article by anyone other than Pylyshyn contesting this. (Maybe I didn't look hard enough.) Do you know of anyone (other than Pylyshyn himself) who is now (or recently) (a) taking Pylyshyn seriously or (b) presenting behavioral data in support of Pylyshyn's view?

 

You're right about the mental imagery debate -- or, more generally, the question of whether there are two modes of mental representation, symbolic and analog. My point is twofold: 

  1. Everybody thought that the mental imagery debate had already been resolved, based on behavioral methods (e.g., Roger Shepard's studies of mental rotation, and Steve Kosslyn's studies of spatial imagery), so the neuroscience wasn't necessary.  The sole exception, I think, was Zenon Pylyshyn.  John Anderson thought that the question was unresolvable (though that did not prevent him from describing the two modes in his textbooks as if the distinction were valid).

  2. Even if Pylyshyn was the last holdout, Martha Farah's experiments didn't persuade him, which means that the neuroscience wasn't sufficient, either.

 

I was quite persuaded by John Anderson when I read his Psych Review article in 1978. I later came to alter my reading by concluding that he had established that the debate was unresolvable by *behavioral* methods (i.e., not necessarily extending to physiological methods).

And I regard the change as political (not logical). Yes, I agree with you that prior to the 1990s the majority (political) view favored the analog representation conception. But, without my actually knowing what's in the cognitive textbooks of the past few decades, I'm willing to guess that the propositional-analog debate had not disappeared from textbooks by 1990, and that it probably was fading rapidly by about 2000.

I arrived at a view resembling Ramachandran's idea about looking inside the black box about 10 years ago.

Well, I just wouldn't call it political. My reading is that John was probably right that the issue was undecidable. Although I don't remember whether he reserved final judgment in the absence of physiological evidence, it strikes me (as someone who doesn't actually do computational modeling) that the same mathematical considerations would apply to things like neuroimaging evidence. That is, someone clever like John could probably find a way to analyze brain-imaging data that both supported and contradicted dual-code theory.
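To illustrate Anderson's mimicry point in miniature -- a hypothetical sketch with made-up parameters, not his actual formalism -- here are an "analog" model and a "propositional" model of mental rotation that generate identical response-time predictions, so the response times cannot decide between them:

```python
STEP_MS = 20   # assumed processing cost per 10 degrees (illustrative only)
BASE_MS = 400  # assumed encoding-plus-response time (illustrative only)

def rt_analog(angle_deg):
    """Continuous rotation: time grows smoothly with the angle swept through."""
    return BASE_MS + STEP_MS * (angle_deg / 10.0)

def rt_propositional(angle_deg):
    """Discrete re-description: a chain of symbolic 10-degree updates,
    each carrying the same assumed cost per step."""
    n_steps = round(angle_deg / 10.0)
    return BASE_MS + STEP_MS * n_steps

for angle in (0, 60, 120, 180):
    assert rt_analog(angle) == rt_propositional(angle)  # indistinguishable data
```

Both models predict the familiar linear increase of response time with angular disparity; and by suitable choice of step size and step cost, either can be tuned to mimic the other, which is the sense in which the behavioral data leave the representational question open.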

But my reading continues as follows: the claim of dual-code theory is so reasonable, especially given our subjective experience of mental imagery, and the behavioral evidence (Shepard, Kosslyn, etc. -- all the evidence summarized by Ron Finke as early as 1980) is so convincing, that it seems reasonable to write our textbooks in a way that favors dual-code theory even if the issue is undecidable. Not to mention parsimony -- in the final analysis, it makes more sense to postulate two codes than to do the mental gymnastics required to deny the evidence for mental imagery. And that consensus was reached long before Martha Farah published her 1988 neuropsychological study. Now, the evidence from brain imaging may have helped Steve and others strengthen their case, especially in their own minds, but that was a genuinely rhetorical choice -- since, for reasons that I have argued, neuroscientific data can't actually decide between psychological theories.

As far as the "black box" goes, those of us who took Driver's Ed in summer school in New York State in the 1960s never learned how to change a flat tire, never mind how an internal-combustion engine worked. I like Dick Neisser's take on this (on virtually the same page of Cognitive Psychology where he criticizes those who think that psychology is just something to do "until the biochemist comes"). Which is to say that you don't have to know anything about internal-combustion engines to drive from New Haven to Berkeley. You just have to know where you are, where you want to go, and where to find gas stations along the way. As biological scientists, some psychologists might want to know how the brain does it, and that's a worthy goal. But as social scientists, other psychologists need only understand why people make the choices they make. I'm happy understanding why someone chooses to drive from New Haven to Berkeley; given that goal, knowing whether the vehicle was a diesel big rig, a Chevy V8, or a Prius is pretty irrelevant, despite the fact that they all have quite different internal mechanisms. (Someday I'll finish my paper entitled "Two Cheers for Dualism, and One More for Functionalism!".)

 

Correspondent 4

I propose that those of us who are concerned about the excesses of fMRI (not its defenders) write a policy statement of the sort that appears in Science from time to time (usually one or two pages, in their very terse style). We would say how the statement originated, and note that others disagreed (and why).

If there's some interest in having some group speak publicly, I think there are several different issues, each needing to be addressed to a different constituency.  For example: 

  1. The decline of interest in basic research at NIMH.  NIMH used to have a solid portfolio of basic psychological research, on the quite reasonable assumption that we needed to know more about basic psychological (and even sociological and anthropological) processes to create a firm scientific base for mental health research.  But that's disappearing rapidly, despite the best efforts of the PhD staff at NIMH.  This issue needs to be addressed to policymakers, both at NIH and in Congress.  And I know it's something that organizations like APA, APS, and FABBS have been working on forever. 
  2. The biologization of the psychological research portfolio at NIMH.  Again, despite the best efforts of the PhD staff at NIMH, who from top to bottom have been genuine allies, the Institute has always been dominated by psychiatrists, by psychiatry and for psychiatry, and now psychiatry has abandoned psychotherapy (and thus gotten bored with psychology) and gotten religion about neuroscience (and medications).  So that's where the funding goes.  "Translational" seems to mean applied not basic research, unless it's translation into medications and biological tests.  Again, this issue needs to be addressed to policymakers.
  3. The biologization of academic psychology.  There's enough anecdotal evidence to warrant a formal study of recent hiring and tenuring policies that may favor neuroscientific approaches (broadly construed) over "traditional, old-fashioned" (as I have heard it characterized on numerous occasions) behavioral methods.  If it's really happening, that's cause for worry, for the reasons that some of us have discussed on this list.  But this issue needs to be addressed to APA, APS, COGDOP, department chairs, deans, and provosts.
  4. Poor training of some cognitive neuroscientists.  It's been suggested that cognitive neuroscientists who start out in cognitive psychology, and get a thorough grounding in cognitive theory and methods, do better, less silly cognitive neuroscience than those who are trained in free-standing cognitive neuroscience programs.  If that's the case, then again this issue needs to be directed toward departments and academic administrators, who maybe should stop establishing free-standing graduate programs in cognitive neuroscience and fold their existing programs back into departments of psychology.  Or maybe the National Research Council, if it dares to do another evaluation of graduate programs sometime this century, could see if the impression is really true.
  5. Problems with cognitive neuroscience as a discipline, including voodoo correlations, reliability problems, the rhetoric of constraint, and such.  These are important, but they are strictly inside baseball.  Neither NIMH nor your local acting associate deputy assistant vice-provost for academic affairs cares about them.  These issues need to be addressed to practitioners, especially budding practitioners in the undergraduate and graduate population.

There are others, to be sure, but you get the idea.


Correspondent 5

I tried to think of counterexamples to the claim that all neurobiology rests on more fundamental, psychological results.  I think that in general the claim is true, but how about the Melzack-Wall gate?  I do not know whether it is a valid result, but it does not seem to rest on prior results from the science of psychology, though it does rest on the fact that we do experience pain.


First: I don't claim that "all neurobiology rests on more fundamental, psychological results".  That wouldn't be right, because there are lots of principles in neurobiology that rest solely on biological results.  To take a famous example, the debate between "the soups and the sparks" about the nature of neural transmission was settled solely on the basis of physiological evidence.  So was Brodmann's cytoarchitectonic mapping of brain tissue.  There are lots of other examples.  My claim is only that neurobiological evidence doesn't constrain psychological theory.  That is -- and again you have to pardon the dualism -- finding out how the brain works doesn't help us understand how the mind works.  Finding out how the brain works is critical to understanding how the brain does it (which I assume it does).  But it's not going to settle theoretical claims at the psychological level of analysis.  It's not going to tell us whether categories are represented as prototypes or exemplars or theories or something else.

Still, even on my chosen fighting-ground, the Melzack-Wall gate is a challenging case for me.  There are a couple of others, including the Hubel-Wiesel work (which arguably inspired "pandemonium"-type theories of human information-processing) and a more recent piece on the Asch conformity paradigm, in which the researcher tried to use fMRI to distinguish between "conformity at the level of behavior" and "conformity at the level of belief". 

I looked back at the M-W (1965) paper, as well as their 1962 paper on somesthesis in general (and have just done so again), and it seems to me that the Gate-Control theory is a theory about the neurophysiological mechanisms of pain, rather than about pain itself.  They start off with an attack on specificity theory, which again is about the neural underpinnings of the various sensory modalities (and the qualities within a sensation).  They argue against what they characterize as an "implicit" "psychological assumption" of a "conceptual nervous system" with a "direct-line communication system from the skin to the brain", with specialized pain receptors attached to specialized afferent nerves projecting to a pain center somewhere in the brain.  So the easiest way out for me is to say that M-W isn't about pain so much as about its physiological mechanisms.  And of course, like any theory of biological mechanisms, it rests on biological evidence.

The question is whether the Gate-Control theory tells us anything about the psychology of pain -- like whether pain is a separate sensory modality (separate from intense touch or whatever), and whether there are different kinds of pain that wouldn't have been noticed in the absence of the Gate-Control theory.  Here is where I thought the counterexample might lie.  For example, it might be argued that after the discovery of A-delta and C fibers, we then discovered that they mediated two distinct types of pain.  And, in fact, it has been proposed that A-delta fibers mediate "pricking" pain, while C fibers mediate "burning" pain.  Something like that was suggested by Bishop (1946), long before the Gate-Control theory was announced.

The problem is that these various types of pain have a long history going back before the neurophysiology, reaching back at least to Titchener (1920), and are based on psychological evidence -- in fact, evidence from Titchenerian experimental introspection.  Bishop himself, for example, assumes the two types of pain as a given -- which means that his theory of the physiological basis of pain rests on the psychophysics, and doesn't tell us anything that psychologists didn't already know.

Interestingly, Melzack and Torgerson (1971) devised the McGill Pain Questionnaire to measure the subjective experience of pain.  Intuitively, they separated pain into two components, sensory and affective, but were critical of the idea that any classification of pain types mapped onto any particular fibers.  Instead, they argued that different types of pain, pricking or burning or whatever, reflected different patterns of activity in a number of different fibers.  Precisely what those different patterns were -- that was yet to be determined (and, as far as I can tell, still hasn't been determined).  But they seemed to suggest that the search for the neural substrates of different types of pain would be directed by psychophysical and psychometric evidence about what those different types of pain were.  They don't seem to entertain the possibility that discovery of different patterns of neural activity will lead us to discover types of pain that we didn't already know about.

So, in the end, I don't think that the Gate-Control theory is a counterexample.  But, I've got to admit, you made me work it through, and I'm thankful for that.

And it might still be the case that neurophysiological evidence will clarify the psychological nature of somesthesis sometime in the future.  Neurophysiologically, somesthesis is a mess.  As far as I can tell, nobody has any idea what its neural mechanisms are.  The Gate-Control theory of pain is popular, there doesn't seem to be any good alternative hanging around, and it may be a model for the other somesthetic senses as well.  There are lots of different proximal stimuli, lots of different receptor organs, lots of different fibers (A-delta and C are only two, apparently), and lots of different projection areas in the brain.  M&W (1965) describe five distinct neural pathways mediating stimulation of a single tooth!  I've got to remain open to the possibility that someone will take the entire body of physiological evidence and tell us, finally, how many somesthetic modalities there are, and what the primary qualities are within each modality.

Just a footnote: my argument is really with a group of cognitive neuroscientists who employ what I've called "the rhetoric of constraint" -- the claim that somehow neurophysiological evidence will constrain psychological theory.  I think that's wrong, and I really object to their attitude toward psychology (as Mike Gazzaniga put it: "Psychology is dead, and the only people who don't know it are psychologists"), and to the implication that, ultimately, psychological theory rests on biology.  I think it doesn't, and that psychology can be and is an independent science of mental life, with very limited biological constraints.  Otherwise, psychology really would be, in the words of Dick Neisser (who didn't think it was), just "something to do until the biochemist comes".  Instead, I think that we can work out the principles of mental life at the psychological level of analysis, and then -- in John Searle's phrase -- "kick the problem over to the neuroscientists".  Physiology thus is an optional tool for psychologists: psychologists can investigate the biological basis of mental life in the same way that they can investigate its sociocultural basis, but physiology isn't a tool of psychology.

 

References

Allport, G. W. (1954). The historical background of social psychology. In G. Lindzey & E. Aronson (Eds.), Handbook of social psychology (Vol. 1, pp. 1-46). New York: Random House.

Anderson, J. R. (1976). Language, Memory, and Thought. Hillsdale, NJ: Lawrence Erlbaum.

Anderson, J. R. (1995). Cognitive psychology and its implications (4th ed.). New York: W.H. Freeman.

Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation (Vol. 2, pp. 89-195). New York: Academic Press.

Barker, F. G. (1995). Phineas among the phrenologists: the American Crowbar Case and nineteenth century theories of cerebral localization. Journal of Neurosurgery, 82, 672-682.

Baron-Cohen, S. (1995). Mindblindness : an essay on autism and theory of mind. Cambridge, Mass.: MIT Press.

Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge: Cambridge University Press.

Berreby, D. (2005). Us and them: Understanding your tribal mind. Boston: Little, Brown.

Blakemore, S. J., Winston, J., & Frith, U. (2004). Social cognitive neuroscience: Where are we heading? Trends in Cognitive Sciences, 8, 215-222.

Blaxton, T. A. (1989). Investigating dissociations among memory measures: Support for a transfer appropriate processing framework. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15, 657-668.

Butters, N., & Cermak, L. (1975). Some analyses of amnesic syndromes in brain-damaged patients. In R. L. Isaacson & K. H. Pribram (Eds.), The hippocampus (Vol. 2, pp. 377-409). New York: Plenum.

Cacioppo, J. T., & Berntson, G. G. (1992). Social psychological contributions to the decade of the brain: Doctrine of multilevel analysis. American Psychologist, 47, 1019-1028.

Cacioppo, J. T., Berntson, G. G., & McClintock, M. K. (2000). Multilevel Integrative Analyses of Human Behavior: Social Neuroscience and the Complementing Nature of Social and Biological Approaches. Psychological Bulletin, 126(6), 829.

Cacioppo, J.T., Berntson, G.G., Adolphs, R., Carter, C.S., Davidson, R.J.,  McClintock, M.K., McEwen, B.S., Meaney, M.J., Schacter, D.L., Sternberg, E.M.,  Suomi, S.S., & Taylor, S.E. (Eds).  (2005).  Foundations in social neuroscience.  Cambridge, Ma.: MIT Press.  

Cacioppo, J.T., Visser, P.S., & Pickett, C.L. (Eds.).  (2006).  Social neuroscience: People thinking about thinking people.  Cambridge, Ma.: MIT Press.

Cannon, W. B. (1932). The wisdom of the body. New York: Norton.

Cantor, N., & Kihlstrom, J. F. (1987). Personality and social intelligence. Englewood Cliffs, N.J.: Prentice-Hall.

Cantor, N., & Kihlstrom, J. F. (1989). Social intelligence and cognitive assessments of personality. In Social intelligence and cognitive assessments of personality (pp. 1-59). Hillsdale, NJ: Erlbaum.

Carlson, R. (1984). What's social about social psychology? Where's the person in personality research? Journal of Personality & Social Psychology, 47(6), 1304-1309.

Churchland, P. M. (1981). Eliminative materialism and the propositional attitudes. Journal of Philosophy, 78, 67-90.

Churchland, P. M. (1995). The engine of reason, the seat of the soul : a philosophical journey into the brain. Cambridge, Mass.: MIT Press.

Churchland, P. M., & Churchland, P. S. (1991). Intertheoretic reduction: A neuroscientist's field guide. Seminars in the Neurosciences, 2, 249 - 256.

Churchland, P. M., & Churchland, P. S. (1998). On the contrary : critical essays, 1987-1997. Cambridge, Mass.: MIT Press.

Churchland, P. S. (1986). Neurophilosophy : Toward a unified science of the mind-brain. Cambridge, Mass.: MIT Press.

Cohen, N. J., Ryan, J., Hunt, C., Romine, L., Wszalek, T., & Nash, C. (1999). Hippocampal system and declarative (relational) memory: Summarizing the data from functional neuroimaging studies. Hippocampus, 9, 83-98.

Cohen, N. J., & Squire, L. R. (1980). Preserved learning and retention of pattern analyzing skill in amnesia: Dissociation of knowing how and knowing that. Science, 210, 207-210.

Coltheart, M. (2006a). Perhaps functional neuroimaging has not told us about the mind (so far)? Cortex, 42, 422-427.

Coltheart, M. (2006b). What has functional neuroimaging told us about the mind (so far)? Cortex, 42, 323-331.

Craik, F. I. M., Moroz, T. M., Moscovitch, M., Stuss, D. T., Winocur, G., Tulving, E., et al. (1999). In search of the self: A positron emission tomography study. Psychological Science, 10(1), 26-34.

Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Putnam.

Damasio, H., Grabowski, T., Frank, R., Galaburda, A. M., Damasio, A. R., & Macmillan, M. B. (1994). The return of Phineas Gage: Clues about the brain from the skull of a famous patient. Science, 264, 1102-1105.

Davidson, R. J., Jackson, D. C., & Kalin, N. H. (2000). Emotion, Plasticity, Context, and Regulation: Perspectives From Affective Neuroscience. Psychological bulletin, 126(6), 890.

Downing, P. E., Jiang, Y., Shuman, M., & Kanwisher, N. (2001). A cortical area selective for visual processing of the human body. Science, 293, 2470-2473.

Farah, M. J. (2005). Neuroethics: the practical and the philosophical. Trends in Cognitive Sciences, 9(1), 34-40.

Finger, S. (1994). Origins of neuroscience : a history of explorations into brain function. New York: Oxford University Press.

Fodor, J. A. (1983). The modularity of mind. Cambridge, Ma.: MIT Press.

Fodor, J. A. (2000). The mind doesn't work that way: The scope and limits of computational psychology. Cambridge, Ma.: MIT Press.

Gallistel, C. R. (1999). Themes of thought and thinking [review of The Nature of Cognition, ed. by R.J. Sternberg]. Science, 285, 842-843.

Gardner, H. (1983). Frames of mind: the theory of multiple intelligences. New York: Basic Books.

Gauthier, I., Behrmann, M., & Tarr, M. J. (1999). Can face recognition really be dissociated from object recognition? Journal of Cognitive Neuroscience, 11(4), 349-370.

Gauthier, I., & Tarr, M. J. (1997). Becoming a "greeble" expert: Exploring mechanisms for face recognition. Vision Research, 37(12), 1673-1682.

Gauthier, I., Tarr, M. J., Anderson, A. W., Skudlarski, P., & Gore, J. C. (1999). Activation of the middle fusiform "face area" increases with expertise in recognizing novel objects. Nature Neuroscience, 2(6), 568-573.

Gazzaniga, M. S. (1988). Life with George: The birth of the Cognitive Neuroscience Institute. In W. Hirst (Ed.), The making of cognitive science: Essays in honor of George A. Miller (pp. 230-241). New York: Cambridge University Press.

Gazzaniga, M. S. (2005). The ethical brain. New York: Dana Press.

Gazzaniga, M. S. (Ed.). (1995). The cognitive neurosciences. Cambridge, Ma.: MIT Press.

Gazzaniga, M. S., Ivry, R. B., & Mangun, G. R. (1998). Cognitive neuroscience: The biology of the mind. New York: Norton.

Gillihan, S. J., & Farah, M. J. (2005). Is Self Special? A Critical Review of Evidence From Experimental Psychology and Cognitive Neuroscience. Psychological Bulletin, 131(1), 76-97.

Glimcher, P. (2003). Decisions, uncertainty, and the brain: The science of neuroeconomics. Cambridge, Ma.: MIT Press.

Goleman, D. (1995). Emotional intelligence. New York: Bantam.

Goleman, D. (2006). Social intelligence: The new science of human relationships. New York: Bantam Books.

Gordon, E. (1990). Integrative neuroscience: Bringing together biological, psychological and clinical models of the human brain. CRC Press.

Graf, P., & Schacter, D. L. (1985). Implicit and explicit memory for new associations in normal and amnesic subjects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 501-518.

Gross, C. G. (1998a). Brain, vision, memory: Tales in the history of neuroscience. Cambridge, Ma.: MIT Press.

Gross, C. G. (1998b). Emanuel Swedenborg: A neuroscientist before his time. In C. G. Gross (Ed.), Brain, vision, memory: Tales in the history of neuroscience (pp. 119-134). Cambridge, Ma.: MIT Press.

Gross, C. G. (1998c). From Imhotep to Hubel and Wiesel: The story of visual cortex. In C. G. Gross (Ed.), Brain, vision, memory: Tales in the history of neuroscience (pp. 1-90). Cambridge, Ma.: MIT Press.

Harmon-Jones, E., & Devine, P.G.  (2003).  Introduction to the special section on social neuroscience: Promise and caveats.  Journal of Personality & Social Psychology, 85, 589-593.

Hatfield, G.  (1988)  Neuro-philosophy meets psychology: Reduction, autonomy, and physiological constraints.  Cognitive Neuropsychology, 5, 723-746.

Hatfield, G.  (2000).  The brain's "new" science: Psychology, neurophysiology, and constraint.  Philosophy of Science, 67, S388-S403.

Heatherton, T. F. (2004). Introduction to special issue on social cognitive neuroscience. Journal of Cognitive Neuroscience, 16, 1681-1682.

Higgins, E. T., & Kruglanski, A. W. (Eds.). (2000). Motivational science: Social and personality perspectives. New York: Psychology Press.

Hilgard, E. R. (1980). The trilogy of mind: Cognition, affection, and conation. Journal for the History of the Behavioral Sciences, 16, 107-117.

Jackendoff, R. (1992). Is there a faculty of social cognition? In R. Jackendoff (Ed.), Languages of the Mind: Essays on Mental Representation (pp. 19-31). Cambridge, Ma.: MIT Press.

Jackendoff, R. (2007). Cognition of society and culture. In R. Jackendoff (Ed.), Language, consciousness, culture. Cambridge, Ma.: MIT Press.

Jackendoff, R. S. (1994). Social organization. In R. S. Jackendoff (Ed.), Patterns in the mind: Language and human nature (pp. 204-222). New York: Basic Books.

Kanwisher, N. G., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4301-4311.

Keenan, J. M., & Baillet, S. D. (1980). Memory for personally and socially significant events. In R. S. Nickerson (Ed.), Attention and performance (Vol. 8, pp. 651-659). Hillsdale, N.J.: Erlbaum.

Kihlstrom, J. F. (2006). Does neuroscience constrain social-psychological theory? Dialogue [Newsletter of the Society for Personality & Social Psychology], 21(1), 16-17, 32.

Kihlstrom, J. F., & Cantor, N. (2000). Social intelligence. In R. J. Sternberg (Ed.), Handbook of intelligence (pp. 359-379). New York: Cambridge University Press.

Klein, S. B., & Kihlstrom, J. F. (1986). Elaboration, organization, and the self-reference effect in memory. Journal of Experimental Psychology: General, 115(1), 26-38.

Klein, S. B., & Kihlstrom, J. F. (1998). On bridging the gap between social-personality psychology and neuropsychology. Personality & Social Psychology Review, 2(4), 228-242.

Klein, S. B., Loftus, J., & Kihlstrom, J. F. (1996). Self-knowledge of an amnesic patient: Toward a neuropsychology of personality and social psychology. Journal of Experimental Psychology: General, 125(3), 250-260.

Kosslyn, S. M., & Koenig, O. (1992). Wet mind: The new cognitive neuroscience. New York: Free Press.

Landy, F. J. (2006). The long, frustrating and fruitless search for social intelligence: A cautionary tale. In K. R. Murphy (Ed.), A critique of emotional intelligence: What are the problems and how can they be fixed? (pp. 81-123). Mahwah, N.J.: Erlbaum.

Liberman, A. M., Cooper, F. S., Shankweiler, D. P., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431-461.

Liberman, A. M., & Mattingly, I. G. (1985). The motor theory of speech perception revised. Cognition, 21, 1-36.

Lieberman, M. D. (2005). Principles, processes, and puzzles of social cognition: An introduction for the special issue on social cognitive neuroscience. NeuroImage, 28, 745-756.

Lieberman, M. D. (2007). Social cognitive neuroscience: A review of core processes. Annual Review of Psychology, 58, 259-289.

MacFarquhar, L. (2007, February 12). Two heads: A marriage devoted to the mind-body problem. New Yorker, 56-69.

MacLean, P. (1970). The triune brain, emotion, and scientific bias. In F. O. Schmitt (Ed.), The neurosciences: Second study program (pp. 336-349). New York: Rockefeller University Press.

Macmillan, M. (2000). An odd kind of fame: Stories of Phineas Gage. Cambridge, Mass.: MIT Press.

Macmillan, M. (2000). Restoring Phineas Gage: A 150th retrospective. Journal of the History of the Neurosciences, 9, 42-62.

Macmillan, M. B. (1986). A wonderful journey through skulls and brains: The travels of Mr. Gage's tamping iron. Brain and Cognition, 5, 67-107.

Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: Freeman.

McClelland, J. L., & Rumelhart, D. E. (Eds.). (1986). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1). Cambridge, Ma.: MIT Press.

McClure, S. M., Li, J., Tomlin, D., Cypert, K. S., Montague, L. M., & Montague, P. R. (2004). Neural correlates of behavioral preference for culturally familiar drinks. Neuron, 44, 379-387.

McKinney, L. O. (1994). Neurotheology: virtual religion in the 21st century. Arlington, Ma.: American Institute for Mindfulness.

McKone, E., Kanwisher, N., & Duchaine, B. C. (2007). Can generic expertise explain special processing for faces? Trends in Cognitive Sciences, 11(1), 8-15.

Milner, P. M. (1970). Physiological psychology (1st ed.). New York: Holt, Rinehart, & Winston.

Morgan, C. T. (1943). Physiological psychology (1st ed.). New York: McGraw-Hill.

Morgan, C. T., & King, R. A. (1966). Introduction to psychology (3rd ed.). New York: McGraw-Hill.

Ochsner, K. N., Beer, J. S., Robertson, E. R., Cooper, J. C., Gabrieli, J. D. E., Kihlstrom, J. F., et al. (2005). The neural correlates of direct and reflected self-knowledge. NeuroImage, 28, 797-814.

Ochsner, K. N., & Kosslyn, S. M. (1999). The cognitive neuroscience approach. In B. M. Bly & D. E. Rumelhart (Eds.), Handbook of cognition and perception (Vol. X: Cognitive Science, pp. 319-365). San Diego, Ca.: Academic Press.

Ochsner, K. N., & Lieberman, M. D. (2001). The emergence of social cognitive neuroscience. American Psychologist, 56(9), 717-734.

Orlic, S., Redzepovic, S., Jeromel, A., Herjavec, S., & Iacumin, L. (2007). Influence of indigenous Saccharomyces paradoxus strains on Chardonnay wine fermentation aroma. International Journal of Food Science & Technology, 42, 95-101.

Paivio, A. (1986). Mental representations: A dual coding approach. New York: Oxford University Press.

Panksepp, J. (1992). A critical role for "affective neuroscience" in resolving what is basic about basic emotions. Psychological Review, 99(3), 554-560.

Panksepp, J. (1996). Affective neuroscience: A paradigm to study the animate circuits for human emotions. In Emotion: Interdisciplinary perspectives (pp. 29-60). Hillsdale, NJ: Erlbaum.

Panksepp, J. (1998). Affective neuroscience: The foundations of human and animal emotions. New York: Oxford University Press.

Pinker, S. (1997). How the mind works. New York: Norton.

Posner, M. I., & DiGirolamo, G. J. (2000). Cognitive Neuroscience: Origins and Promise. Psychological Bulletin, 126(6), 873.

Posner, M. I., Pea, R., & Volpe, B. T. (1982). Cognitive-neuroscience: Developments toward a science of synthesis. In J. Mehler, E. C. T. Walker & N. Garrett (Eds.), Perspectives on mental representation: Experimental and theoretical studies of cognitive processes and capacities (pp. 251-276). Hillsdale, N.J.: Erlbaum.

Quarton, G. C., Melnechuk, T., & Schmitt, F. O. (Eds.). (1967). The neurosciences: A study program. New York: Rockefeller University Press.

Rilling, J. K. (2007). Laboratory of Darwinian Neuroscience, from http://www.anthropology.emory.edu/FACULTY/ANTJR/labhome.html

Roediger, H. L., & McDermott, K. B. (1993). Implicit memory in normal human subjects. In F. Boller & J. Grafman (Eds.), Handbook of Neuropsychology (pp. 63-131). Amsterdam: Elsevier Science Publishers.

Rogers, T. B., Kuiper, N. A., & Kirker, W. S. (1977). Self reference and the encoding of personal information. Journal of Personality & Social Psychology, 35, 677-688.

Rosen, J. (2007, March 11). The brain on the stand: How neuroscience is transforming the legal system. New York Times Magazine, 50-84.

Rozin, P. (1996). Sociocultural influences on human food selection. In E. D. Capaldi (Ed.), Why we eat what we eat: The psychology of eating (pp. 233-263). Washington, D.C.: American Psychological Association.

Rumelhart, D. E., & McClelland, J. L. (Eds.). (1986). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2). Cambridge, Ma.: MIT Press.

Salovey, P., & Mayer, J. D. (1989). Emotional intelligence. Imagination, Cognition, and Personality, 9, 185-211.

Saxe, R., & Kanwisher, N. (2003). People thinking about thinking people: The role of the temporo-parietal junction in "theory of mind". NeuroImage, 19, 1835-1842.

Schacter, D. L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 501-518.

Schacter, D. L., & Tulving, E. (1982). Amnesia and memory research. In L. S. Cermak (Ed.), Human memory and amnesia (pp. 1-32). Hillsdale, N.J.: Erlbaum.

Schacter, D. L., & Tulving, E. (1982). Memory, amnesia, and the episodic semantic distinction. In R. L. Isaacson & N. E. Spear (Eds.), The expression of knowledge (pp. 33-65). New York: Plenum.

Schmitt, F. O. (1970a). Promising trends in neuroscience. Nature, 227, 1006-1009.

Schmitt, F. O. (Ed.). (1970b). The neurosciences: A second study program. New York: Rockefeller University Press.

Schmitt, F. O., Melnechuk, T., Quarton, G. C., & Adelman, G. A. (Eds.). (1966). Neuroscience Research Symposium Summary (Vol. 1). New York: Rockefeller University Press.

Schreier, P. (1979). Flavour composition of wines: A review. Critical Reviews in Food Science & Nutrition, 12, 59-111.

Scoville, W. B., & Milner, B. (1957). Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery & Psychiatry, 20, 11-21.

Searle, J. (1980). Minds, brains, and programs. Behavioral & Brain Sciences, 3, 417-457.

Searle, J. R. (1992). The rediscovery of the mind. Cambridge, Mass.: MIT Press.

Searle, J. R. (1995). The construction of social reality. New York: Free Press.

Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99-118.

Spurzheim, J. G. (1834). Phrenology, or the doctrine of the mental phenomena (3rd ed.). Boston: Marsh, Capen, & Lyon.

Squire, L. R., & Knowlton, B. J. (1994). The organization of memory. In The mind, the brain, and complex adaptive systems (pp. 63-97). Reading, MA: Addison-Wesley.

Squire, L. R., & Zola-Morgan, S. (1991). The medial temporal lobe memory system. Science, 253, 1380-1386.

Stich, S. P. (1983). From folk psychology to cognitive science. Cambridge, Ma.: MIT Press.

Swedenborg, E. (1740/1845-1846). The economy of the animal kingdom, considered anatomically, physically, and philosophically. London: Newberry.

Tarr, M.J., & Cheng, Y.D.  (2003). Learning to see faces and objects.  Trends in Cognitive Sciences, 7, 23-30.

Tarr, M. J., & Gauthier, I. (2000). FFA: A flexible fusiform area for subordinate-level visual processing automatized by expertise. Nature Neuroscience, 3(8), 764-769.

Taylor, E. H., & Cadet, J. L. (1989). Social intelligence: A neurological system. Psychological Reports, 64, 423-444.

Taylor, S. E. (2006). Tend and befriend: Biobehavioral bases of affiliation under stress. Current Directions in Psychological Science, 15(6), 273-277.

Teitelbaum, P. (1967). Physiological psychology: Fundamental principles. Englewood Cliffs, N.J.: Prentice-Hall.

Thurstone, L. L. (1931). The measurement of attitudes. Journal of Abnormal & Social Psychology, 4, 25-29.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: heuristics and biases. Science, 185, 1124-1131.

Valenstein, E. S. (2005). The war of the soups and the sparks : The discovery of neurotransmitters and the dispute over how nerves communicate. New York: Columbia University Press.

Warrington, E. K., & Weiskrantz, L. (1970). Amnesia: Consolidation or retrieval? Nature, 228, 628-630.

Westen, D.  (2007). The political brain: The role of emotion in deciding the fate of the nation.  New York: Public Affairs Press.

Wexler, B. E. (2006). Brain and culture: Neurobiology, ideology, and social change. Cambridge, Ma.: MIT Press.

Wilson, E. O. (1998). Consilience: The unity of knowledge. New York: Knopf.

Zerubavel, E. (1997). Social mindscapes : An invitation to cognitive sociology. Cambridge, Mass.: Harvard University Press.

 
