
 

Reason and Emotion in Moral Judgment

 

John F. Kihlstrom

University of California, Berkeley

 

Note: This paper began as a talk given to the Moral Psychology Study Group, University of California, Berkeley, December 2, 2011. I thank Audun Dahl for his comments on that presentation. An expanded version was presented at a conference on social perception organized by Jennifer Hudin for the Berkeley Social Ontology Group, March 15, 2013. A briefer written version was published in the Hedgehog Review (Vol. 15, No. 1, pp. 8-18, March 2013).

 

Social interaction involves at least three cognitive tasks. First, the person must categorize the situation he is in, the people who are there with him, and what they are doing. The categories assigned will allow him to make inferences about unobserved activities and features, predict what is going to happen next, and generally behave appropriately. Second, he must make a number of judgments about these entities, including causal attributions concerning the activities that are transpiring in that situation. Third, he must often make moral judgments about whether the people, and their actions, are good or bad, desirable or undesirable. Of these three activities, it would seem that only the last one is specific to social cognition. We do not make moral judgments about the revolution of the planets around the sun, or plate tectonics, or even the fact that lions eat gazelles. But we do make moral judgments when someone robs a house, or rescues a kitten from a fire, or takes our girlfriend (or parking space).

How we make these moral judgments has been a concern of philosophers and psychologists for a long time. In the West, this history begins with the Greeks and their debates over "the good life": the Sophists and Plato, the Stoics and the Epicureans. There is the Judeo-Christian tradition, with the Ten Commandments, Jesus' summary of The Law, the Sermon on the Mount, and his "new commandment" that we love one another. The medieval period gave us Aquinas's marriage of Aristotelian and Christian thought. The Enlightenment brought us Hobbes's ethical naturalism, Hume's utilitarianism, and Kant's categorical imperative. The 20th century saw the rise of meta-ethics, concerned with the nature of moral judgment, rather than with questions of right and wrong per se, and that is where the psychology of moral judgment begins as well. For example, Lawrence Kohlberg (1969) offered his neo-Piagetian stage theory of moral development, describing the transitions from preconventional to conventional to postconventional reasoning, and Carol Gilligan (1982) distinguished between rational moral judgments based on justice and relational judgments based on compassion.

Kohlberg's view dominated the textbooks for a long time -- not just because it was virtually the only game in town, but also because it was consistent with the cognitive revolution in psychology of the 1960s and 1970s. But in retrospect, we can see in the Kohlberg-Gilligan debate a foreshadowing of what might be called the affective counterrevolution, which arose in the 1980s, and which affected all of psychology, including the psychology of moral judgment.

 

Cognition and Emotion in Psychology

To understand what happened, let us quickly review the relations between cognition and emotion in psychology. At its beginnings, psychology was almost exclusively focused on cognition. The primacy of cognition was implicit in psychology's philosophical roots: Descartes' proposition that reason is the mental faculty which is distinctively human, and the corresponding shift from metaphysics to epistemology as the focus of philosophy; and the emphasis of the British empiricists on the experiential origins of knowledge. Accordingly, the 19th-century psychophysicists and physiological psychologists focused their experiments on sensation and perception, and -- with the exception of Wundt -- had very little to say about emotion. With the behaviorist revolution, "psychology lost its mind" (to paraphrase R.S. Woodworth, 1929): stimulus-response theory threw cognition out the window, and emotion went with it -- with the salient exception of W.B. Cannon's construal of emotion in terms of the "fight or flight" reflex. The cognitive revolution underscored the role of thought and knowledge as mediators between environmental stimulus and organismal response, gave a cognitive interpretation of learning in terms of the formation of expectancies, and returned a host of topics to psychology, especially attention, "short-term" memory, and language.

The cognitive viewpoint asserts, first and foremost, that people respond to their mental representation of the stimulus. Behavior is mediated by cognitive states of knowledge, belief, and expectation, acquired through perception, stored in and retrieved from memory, manipulated and transformed through processes of reasoning and problem-solving, and translated into behavior through processes of judgment and decision-making. The cognitive point of view was exemplified by a theory of rational choice in which consciousness was the default option. Some cognitive psychologists (and other cognitive scientists) construed the word cognitive as referring to any internal mental state, including emotional and motivational states. But strictly speaking, cognitive psychology construes emotion as a cognitive construction -- a belief about one's emotion that is a product of a more or less rational analysis of the situation in which one finds oneself. Thus, Schachter and Singer (1962) argued that emotions were shaped by the person's perception of the situation in which he experienced undifferentiated physiological arousal. Lazarus (1968) argued that we could control our emotions by changing the way we thought about our situation. Smith and Ellsworth (1985) listed the various types of appraisals that gave rise to particular emotions. And Clore and Ortony (e.g., Ortony, Clore, & Collins, 1988) argued that emotions depended on the cognitive value of an event, with respect to the person's goals ("goals" themselves reflect a cognitive construal of motivation, but that is a topic for another paper). In each theory, cognition comes first, and emotion is determined by cognition.

The success of the cognitive revolution can be seen in the fact that every university department of psychology has a graduate program devoted to cognitive psychology, but hardly any of them have similar graduate programs devoted to emotion (or motivation, for that matter). Similarly, we have a large number of textbooks devoted to aspects of cognition, but hardly any providing similar coverage of emotion (or motivation, but I digress). Almost half of Henry Gleitman's Psychology, perhaps the best introductory text since James' Principles, is devoted to cognition (8 of 17 chapters and 328 out of 715 pages in the 8th edition of 2011, not counting the material on cognitive development), while motivation and emotion share only a single chapter.

Beginning in the 1980s, the hegemony of cognition was challenged by what I have come to think of as an affective counterrevolution, exemplified by the Zajonc-Lazarus debate (Lazarus, 1981, 1984; Zajonc, 1980, 1984) in the pages of the American Psychologist. The general thrust of this affective counterrevolution was that emotion was at least independent of cognition, if not actually primary. Thus, Zajonc himself argued that "preferences need no inferences" because they could be shaped by "subliminal" stimuli processed outside of conscious awareness. Paul Ekman proposed a set of reflex-like basic emotions that were part of our phylogenetic heritage. Building on the earlier work of Cannon, Bard, and Papez, Paul MacLean (1990) and Joseph LeDoux (1996) proposed that emotional reactions are controlled by brain structures that are different from those involved in cognitive processing. Accordingly, Jaak Panksepp (1996) argued for a new interdisciplinary affective neuroscience modeled on, but independent of, cognitive neuroscience. Emotion now has its own textbooks (e.g., Niedenthal, Krauth-Gruber, & Ric, 2006), and we can only assume that free-standing graduate groups are not far behind.

 

Threats to Reason in Moral Psychology

What does all this have to do with moral judgment? My point is that the affective counterrevolution, with its insistence on the independence of affect from cognition, and on the dominance of affect over cognition, constitutes a threat to the role of reason in moral psychology. Actually, it is not the only threat. Moral reason is also threatened by the rise of what I have called a "People Are Stupid" school of psychology (Kihlstrom, 2004), which argues that people are fundamentally irrational, and that our thoughts and actions are overwhelmingly subject to unconscious, automatic influences that operate outside phenomenal awareness and voluntary control. And it is threatened by the biologization of psychology in general, everything from behavior genetics and evolutionary psychology to the neuroscientific doctrine of modularity, the general thrust of which is, once again, to limit the role of mind in behavior -- and, indeed, to dispense with psychology itself (Mike Gazzaniga, the founder of cognitive neuroscience, has written that "Psychology is dead, and the only people who don't know it are psychologists"; Gazzaniga, 1998, p. xi).

If you don't believe this to be the case, check out David Brooks' book, The Social Animal (2011; reviewed in Kihlstrom, 2012). Brooks is probably the foremost interpreter of psychological research and theory to the general public -- by virtue of his New York Times Op-Ed pieces, even more visible than Malcolm Gladwell or Jonah Lehrer -- and in this book he is constantly referring to Hume's dictum that reason is a slave to passion. For Brooks, and the psychologists on whom he relies, thought and action are dominated by unconscious processes of emotion, intuition, and automaticity.

Even more to the point is a series of essays commissioned by the John Templeton Foundation in the spring of 2010, as part of its "Big Questions" series. These essays got almost no attention in the professional media -- neither the APA Monitor nor the APS Observer covered them -- despite the fact that this was the first time a Big Question targeted psychology. The Big Question was: "Does moral action depend on reasoning?" Among other authorities, five psychologists were asked to respond, and four of them said, essentially, "No".

Mike Gazzaniga led off by asserting that "all decision processes... are carried out before one becomes consciously aware of them."
Joshua Greene wrote that "moral judgment depends critically on both automatic settings and manual mode."
Jonah Lehrer (not a psychologist, strictly speaking, but one of the foremost interpreters of psychology to the general public, and the immediate source for much of Brooks' book) asserted that "moral decisions often depend on... moral emotions" that "are beyond the reach of reason".
Antonio Damasio (a cognitive neurologist, if not exactly a psychologist) wrote that "morality is based on social emotions that have their origins in 'prerational' emotional brain systems, neuromodulator molecules..., and genes which have 'early evolutionary vintage'".
And then there was me -- but we're not there yet.

 

Critique of Moral Intuitionism

Each in his own way, these authors reflect a point of view proposed by Greene and Jonathan Haidt known as moral intuitionism (Greene & Haidt, 2002; Haidt, 2007). Greene and Haidt note that morality serves two important functions: at the micro level, it guides our social interaction, while at the macro level it binds groups together. But where does morality come from? Greene and Haidt argue for intuitive, rather than rational, primacy in moral judgments. Far from reflecting the operation of human reasoning, these judgments are the product of evolved brain modules that generate what might be called the yuck factor -- an intuitive, emotional "gut feeling" that certain things are, well, just plain wrong. When we are asked to justify them, the reasons we give for our moral judgments are neither necessary nor sufficient; rather, they are more like post-hoc rationalizations.

Although moral intuitionism is relatively new as a psychological theory, the general idea is old enough to have been critiqued by John Stuart Mill in his 1843 treatise, A System of Logic (see Ryan, 2011). When we rely on intuitions, Mill wrote, there is no need to question prevalent moral judgments, nor any need to explain how our intuitions came to be what they are; nor do we have any means of resolving conflicts between different individuals' intuitions. They just are what they are. Mill agreed that intuition played an important role in some fields, such as mathematics, but he thought that a reliance on intuition should not extend to ethics and politics, because it "sanctifies" traditional opinions and provides an intellectual buttress to conservatism.

Indeed, moral intuitionism can be seen as a threat to democracy. How do you debate, how do you compromise, with someone whose moral judgments rely on intuitions? In this respect, I was put in mind of a quote from Heinrich Himmler, head of the SS in Nazi Germany, who wrote that "In my work for the Führer and the nation I do what my conscience tells me is right and what is common sense" (quoted in the biography by Peter Longerich, 2011).

Of course, it doesn't matter if moral intuitionism is a threat to democracy, if in fact it is true -- that is, if it's a valid scientific theory about how moral judgments are made. Accordingly, it's important to examine the evidentiary base for moral intuitionism, to determine the extent to which it is actually supported by empirical evidence.

As far as I can tell, the reference experiment for moral intuitionism is a philosophical conundrum known as the Trolley Problem, originally devised by Philippa Foot and popularized by Judith Jarvis Thomson, among others.

 

You all know the Trolley Problem, so there's no need to repeat it here, except to say that many more people think that it's morally justifiable to switch a trolley from one track to another, sacrificing one life to save five, than think it's morally justifiable to push a fat man off a bridge onto the trolley tracks, killing him but saving those same five lives. The trick, of course, is that both versions of the Trolley Problem involve the same expected outcome -- one life lost, five lives saved. From a rational-choice point of view, it's a no-brainer. So why do people resist the choice? And why is there such variance in choice, depending on how equivalent outcomes are framed? The conclusion, therefore, is that rational choice can't account for people's moral judgments. Something else must be involved, and that something else consists of emotional intuitions -- the "yuck factor" generated by a specialized brain module that became part of our phylogenetic equipment over the course of evolutionary time.
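To see why the rational-choice analysis treats the two versions as equivalent, here is a minimal sketch; the payoffs are simply those stipulated by the dilemma itself, and the function is my own illustration, not anyone's published model.

```python
# A minimal sketch of the rational-choice arithmetic stipulated by the
# Trolley Problem (payoffs from the dilemma as stated, not from any model).

def net_lives_saved(act: bool) -> int:
    """Net lives saved by intervening, under the stipulated facts."""
    if not act:
        return 0      # baseline: the trolley kills the five
    return 5 - 1      # intervening saves five but kills one -- in BOTH variants

for variant in ("switch", "footbridge"):
    print(f"{variant}: net lives saved by acting = {net_lives_saved(True)}")
# Both variants print 4, so expected outcomes cannot explain why most
# people endorse switching but refuse pushing.
```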

But it turns out that there are problems with the Trolley Problem. In the first place, it strikes me that the Trolley Problem lacks ecological validity in Orne's sense (Orne, 1962). It's not at all clear that the Trolley Problem is representative of the kinds of moral dilemmas that confront us in the ordinary course of everyday living. When was the last time you were on a bridge, next to a fat man, with a trolley racing along the tracks below you toward five people tied to the track by some Snidely Whiplash? But of course, that just may be my intuition, and there's no arguing with intuitions.

More important, note that, in the Trolley Problem, reason is ruled out by experimental fiat. That is, the Trolley Problem has been constructed such that all outcomes are rationally equivalent, and subjects cannot make a choice based on expected outcomes or utilities. They have to do something else. Perhaps, under such circumstances, people do rely on their moral intuitions, or on some other basis for judgment. But it hardly seems correct to conclude, from their responses in this highly constrained situation, that emotion supplants reason in moral judgment.

Nor is there any comparison of effect size. What we'd really like to see, in an experiment such as this, is an experimental manipulation of both emotional and rational factors, so we can determine when emotion dominates reason, under what circumstances, and by how much.
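What such a design would buy us can be illustrated with a toy effect-size calculation. This is a hypothetical sketch: every number and condition label below is an invented placeholder, not data from any actual study.

```python
import statistics

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Rated permissibility (1-7) of sacrificing one to save five, under low-
# vs. high-vividness emotional framings -- placeholder values only. A full
# design would cross this factor with a rational factor (e.g., lives saved)
# and compare the two effect sizes directly.
low_vividness = [5.8, 6.1, 5.5, 6.0, 5.9]
high_vividness = [3.2, 2.9, 3.5, 3.1, 3.4]
print(f"d for the emotional manipulation = {cohens_d(low_vividness, high_vividness):.2f}")
```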

Finally, there is no consideration of a "cognitive" alternative. In this respect, it's interesting to note that a cognitive alternative to moral intuitionism is available. Inspired by Noam Chomsky's notion of a Universal Grammar underlying human language, John Mikhail (2007) has offered a universal moral grammar that provides a good explanation of responses to various versions of the Trolley Problem in purely cognitive terms -- it's a grammar, after all -- without invoking emotions or intuitions. Mikhail begins, like any good cognitive psychologist, by invoking what he calls "the poverty of the moral stimulus" -- that the situations that demand moral judgment usually do not contain enough information to enable us to make that judgment. People form a mental representation of the situation, and then apply a moral grammar to render a moral judgment. It's all very cognitive -- all very rational.

The bottom line is that there is no good empirical reason to think that emotion and intuition rule moral judgment. Maybe, as in the Trolley Problem, affect and intuition act as a sort of tie-breaker, in those circumstances when rational choice does not suffice. Maybe reason serves to challenge and correct our moral intuitions. Or maybe affect serves as information for cognition. In any case, neither cognition nor emotion dominates the other. Rather, it seems that in moral judgment as in other aspects of mental life, cognitive, emotional, and motivational processes work together, and the balance between them varies depending on the situation.

Notice that, in this formulation, emotion is more than a cognitive construction. I'm a cognitive psychologist, but I have always distrusted the idea that emotions are merely cognitive constructions -- that we don't really feel anything, we just think we do. I've always preferred the formulation by Immanuel Kant, who asserted (in the Critique of Judgment, 1790, as paraphrased by Watson) that "there are three absolutely irreducible faculties of mind: knowledge, feeling, and desire". What Kant meant was that none of these faculties could be reduced to the others, as in the cognitive-constructivist account of emotion. Emotion, then, has an existence that is independent of cognition. But just because emotion is not reducible to cognition, that does not mean that cognition and emotion cannot interact. We know that emotion can color perception, memory, and thought; and we know that thinking can generate, and regulate, emotion. We can dispense with arguments about the primacy of either cognition or affect, and get on with the business of discovering how they work, separately and together, and how they each play a role in matters such as moral judgment.

 

Critique of the Critique of Conscious Will

We could end this presentation right here, except that there has recently emerged yet another threat to reason in moral psychology -- namely, a critique of the concept of conscious will. You don't have to think about things too hard to understand that the very concept of moral judgment depends on the freedom of the will. Neither concept applies in the natural world of planets and continents and lions and gazelles, where events are completely determined by events that went before. Moral judgment only applies when the target of the judgment has a real choice -- the freedom to choose among alternatives, where those choices make a difference to behavior. The problem of free will, of course, is that we understand that we are physical entities: specifically, the brain is the physical basis of mind; and the brain, as a physical system, is not exempt from the physical laws that determine everything else that goes on in the universe; and so neither are our thoughts and actions. So the problem of free will is simply this: how do we reconcile our conscious experience of freedom of the will with the sheer and simple fact that we are physical entities existing in a universe that consists of particles acting in fields of force?

Philosophers have debated this problem for a long time -- at least since materialism began to challenge Cartesian dualism. Those who are compatibilists argue that the experience of free will is compatible with physical determinism, while incompatibilists argue that it is not, and that we must reconcile ourselves to the fact that we are not, in fact, free to choose what to do and what to think. Those incompatibilists who have read a little physics may make a further distinction between the clockwork determinism of classical Newtonian physics and the pinball determinism of quantum theory, maybe invoking Heisenberg's observer effect and uncertainty principle (they're apparently not the same thing) as well, but injecting randomness and uncertainty into a physical system is not the same as giving it free will, so the problem remains where it was.

Psychologists too have entered the fray: those of a certain age, such as mine, will remember the debate between Carl Rogers and B.F. Skinner over the control of human behavior (Rogers & Skinner, 1956; Wann, 1964). These days, many psychologists, for their part, appear to come down on the side of incompatibilism, arguing essentially that free will is an illusion -- a necessary illusion, if we are to live in a society governed by laws, but an illusion nonetheless. 

As a case in point, consider The Illusion of Conscious Will, in which Dan Wegner (2002) invokes the concept of automaticity and asserts that "the real causal mechanisms underlying behavior are never present in consciousness" (p. 97). Just to make his meaning clear, he presents the reader with a diagram contrasting the "apparent causal path" between thought and action with the "actual causal path" connecting action to an "unconscious cause of action".

More recently, Mike Gazzaniga (2011, pp. 105-106) has picked up on the theme, writing that the "illusion" of free will is so powerful that "we all believe we are agents... acting willfully and with purpose", when in fact "we are evolved entities that work like a Swiss clock" (no pinball determinism for him!). To illustrate his point, he recounted an instance in which, while walking in the desert, he jumped in fright at a rattlesnake: he "did not make a conscious decision to jump and then consciously execute it" -- that was a confabulation, "a fictitious account of a past event"; rather, "the real reason I jumped was an automatic nonconscious reaction to the fear response set into play by the amygdala" (pp. 76ff).

Similarly, Sam Harris (2012), a neuroscientist who burst on the scene with a vigorous critique of religion, has weighed in with a critique of free will, arguing, like Wegner, that free will is simply an illusion: "Our wills are simply not of our own making. Thoughts and intentions emerge from background causes of which we are unaware and over which we exert no conscious control."

This argument isn't just inside baseball. In its March 23, 2012 issue, the Chronicle of Higher Education published a forum entitled "Free Will is an Illusion", with a contribution by Mike Gazzaniga; the May 13, 2012 issue of the New York Times carried an Op-Ed piece by James Atlas entitled "The Amygdala Made Me Do It"; and the May-June 2012 issue of Scientific American Mind featured a cover story by Christof Koch detailing "How Physics and Biology Dictate Your 'Free' Will". These aren't the only examples, so something's happening here. What we might call psychological incompatibilism is beginning to creep into popular culture. Which, like moral intuitionism, is OK if it's true. The question is: Is it true?

Wegner, Gazzaniga, and Harris are inspired, in large part, by a famous experiment performed by the late Benjamin Libet, a neurophysiologist, involving a signal known as the readiness potential (Libet, Gleason, Wright, & Pearl, 1983). When someone makes a voluntary movement, an event-related potential appears in the EEG about 600 milliseconds beforehand.

 

Libet added to this experimental setup what he called the "clock-time" method. Subjects viewed a light which revolved around a clock face approximately once every 2.5 seconds; they were instructed to move their fingers anytime they wanted, but to use the clock to note the time of their first awareness of the wish to act.

 

Libet discovered that the awareness of the wish preceded the act by about 200 msec -- not much of a surprise there. But he also discovered that the readiness potential preceded the awareness of the wish by about 350 msec (200 + 350 = 550 msec, or roughly the 600 msec noted above). So there is a second type of readiness potential, which Libet characterized as a predecisional negative shift. Libet concluded that the brain decides to move before the person is aware of the decision, which manifests itself as a conscious wish to move. Put another way, behavior is instigated unconsciously (Wegner's "unconscious cause of action"), conscious awareness occurs later, as a sort of afterthought, and conscious control serves only as a veto over something that is already happening. In other words, conscious will really is an illusion, and we are nothing more than particles acting in fields of force after all.

Libet's observation of a predecisional negative shift has been replicated in other laboratories, but that does not mean that his experiment is immune to criticism, or that his conclusions are correct (for extended discussions of Libet's work, including replies and rejoinders, see Banks & Pockett, 2007; Libet, 1985, 2002, 2006). In the first place, there's a lot of variability around those means, and the time intervals are such that the gap between the readiness potential and the conscious wish could be closer to zero. And there are a lot of sources of error, including error in determining the onset of the readiness potential, and error in determining the onset of the conscious wish (as for the latter, think about keeping track of a light that is rotating around a clock face once every 2.5 seconds). Still, that difference is unlikely to be exactly zero, and so the problem doesn't go away.
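To make the chronometry concrete, here is an illustrative sketch. The 550 and 200 msec figures are the approximate means just described; the error standard deviations are pure assumptions, chosen only to show how measurement noise bears on the argument, not estimates from Libet's data.

```python
import random

RP_BEFORE_ACT = 550   # readiness-potential onset, msec before the act
W_BEFORE_ACT = 200    # reported awareness of the wish, msec before the act
print("Mean RP-to-awareness gap:", RP_BEFORE_ACT - W_BEFORE_ACT, "msec")

# Assumed (hypothetical) errors in estimating each onset, e.g. from
# reading a spot that circles the clock face once every 2.5 seconds.
random.seed(1)
RP_SD, W_SD = 150, 100
gaps = [(RP_BEFORE_ACT + random.gauss(0, RP_SD))
        - (W_BEFORE_ACT + random.gauss(0, W_SD)) for _ in range(10_000)]
reversed_frac = sum(g <= 0 for g in gaps) / len(gaps)
print(f"Simulated trials where the gap vanishes or reverses: {reversed_frac:.1%}")
# With these assumed SDs, single trials occasionally show no gap at all,
# yet the mean gap stays well above zero -- which is the point made above.
```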

At a different level, Libet's experiment has been criticized on the grounds of ecological validity. The action involved, moving one's finger, is completely inconsequential, and shouldn't be glibly equated with choosing where to go to college, or whom to marry, or even whether to buy Cheerios or Product 19 -- much less whether to throw a fat man off a bridge to stop a runaway trolley. The way the experiment is set up, the important decision has already been made -- that is, to participate in an experiment in which one is to raise one's finger while watching a clock. And it's made out of view of the EEG apparatus. I find this argument fairly persuasive. But still, there's that nagging possibility that, if we recorded the EEG all the time, in vivo, we'd observe the same predecisional negative shift before that decision was made, too.

More recently, though, Jeff Miller and his colleagues found a way to address this critique. They noted that the subjects' movements are not truly spontaneous, for the simple reason that they must also watch the clock while making them. They compared the readiness potential under two conditions. In one, the standard Libet paradigm, subjects were instructed to watch the clock while moving their fingers, and to report their decision time. In the other, they were instructed to ignore the clock, and were not asked for any reports. Subjects in both conditions still made the "spontaneous" decision whether, and when, to move their fingers. But Miller et al. observed the predecisional negative shift only when subjects also had to watch the clock and report their decision time. They concluded that Libet's predecisional negative shift was wholly an artifact of the attention paid to the clock. It does not indicate the unconscious initiation of ostensibly "voluntary" behavior, nor does it show that "conscious will" is illusory. Maybe it is, but the Libet experiment doesn't show it.
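The logic of that comparison can be summarized in miniature as follows. The amplitude values are invented placeholders standing in for pre-movement EEG negativity, meant only to display the predicted pattern, not Miller et al.'s actual data.

```python
import statistics

conditions = {
    "watch clock, report W": [-4.1, -3.8, -4.5, -3.9, -4.2],   # uV, hypothetical
    "ignore clock, no report": [-0.3, 0.1, -0.4, 0.2, -0.1],   # uV, hypothetical
}
for label, amps in conditions.items():
    print(f"{label:>24}: mean pre-movement shift = {statistics.mean(amps):+.2f} uV")
# On Miller et al.'s reading, the negative shift tracks clock monitoring,
# not the unconscious initiation of the movement itself.
```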

The Miller experiment is important enough that we'd like to see it replicated in another laboratory, though I want to stress that there's no reason to think that there's anything wrong with it. When they did what Libet did, they got what Libet got. When they altered the instructions, but retained voluntary movements, Libet's effect disappeared completely -- not just a little, but completely. The ramifications are pretty clear. This doesn't mean that the problem of free will has been resolved in favor of compatibilism, though it does suggest that compatibilism deserves serious consideration. Personally, I like the implication of a paper by John Searle, titled "Free Will as a Problem in Neurobiology" (Searle, 2001). We all experience free will, and there's no reason, in the Libet experiment or any other study, to think that this is an illusion. It may well be a problem for neurobiology, but it's a problem for the neurobiologists to solve. I don't lose any sleep over it. But if free will is not an illusion, and we really do have a meaningful degree of voluntary control over our experience, thought, and action, then moral judgment is secure from this threat as well. We should be willing to make moral judgments, using all the information -- rational and intuitive -- that we have available to us.

 

Yes, Within Limits

And that's where I came down in my response to the Templeton Foundation's Big Question: "Does Moral Action Depend on Reasoning?". We are currently in the midst of a retreat from, or perhaps even a revolt against or an assault on, reason. Some of this is politically motivated, but some is aided and abetted by psychologists who, for whatever motive, seek to emphasize emotion over cognition, the unconscious over the conscious, the automatic over the controlled, brain modules over general intelligence, and the situation over the person (not to mention the person-situation interaction).

Moral intuitionism represents a fusion of automaticity and emotion, and like the literature that comprises the "automaticity juggernaut" (Kihlstrom, 2008), it relies mostly on demonstration experiments showing that gut feelings can play a role in moral judgments. There is no reason to generalize these findings to what people do in the ordinary course of everyday living. As I wrote in the Templeton essay (pp. 37-38):

Freedom of the will is real, but that does not mean that we are totally free. Human experience, thought, and action are constrained by a variety of factors, including our evolutionary heritage, law and custom, overt social influences, and a range of more subtle social cues [like demand characteristics]. But within those limits we are free to do what we want, and especially to think what we want, and we are able to reason our way to moral judgments and action.

*****

It is easy to contrive thought experiments in which moral reasoning seems to fail us.... When, in (thankfully) rare circumstances, moral reasoning fails us, we must rely on our intuitions, emotional responses, or some other basis for action. But that does not mean that we do not reason about the moral dilemmas that we face in the ordinary course of everyday living -- or that we reason poorly, or that we rely excessively on heuristic shortcuts, or that reasoning is infected by a host of biases and errors. It only means that moral reasoning is more complex and nuanced than a simple calculation of comparative utilities. Moral reasoning typically occurs under conditions of uncertainty... where there are no easy algorithms to follow. If a judgment takes place under conditions of certainty, where the application of a straightforward algorithm will do the job, it is probably not a moral judgment to begin with.

If you believe in God, then human rationality is a gift from God, and it would be a sin not to use it as the basis for moral judgment and behavior. If you do not believe in God, then human rationality is a gift of evolution, and not to use it would be a crime against nature.

 

References

Banks, W. P., & Pockett, S. (2007). Benjamin Libet's work on the neuroscience of free will. In M. Velmans & S. Schneider (Eds.), The Blackwell companion to consciousness (pp. 657-670). Malden, MA: Blackwell.

Gazzaniga, M. S. (1998). The mind's past. Berkeley: University of California Press.

Gazzaniga, M. S. (2011). Who's in charge? Free will and the science of the brain. New York: Ecco.

Gilligan, C. (1982). In a different voice: Psychological theory and women's development. Cambridge, MA: Harvard University Press.

Greene, J. D., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6, 517-523.

Haidt, J. (2007). The new synthesis in moral psychology. Science, 316(5827), 998-1001.

Harris, S. (2012). Free will. New York: Free Press.

Kihlstrom, J. F. (2004). Is there a "People are Stupid" school in social psychology? [Commentary on "Towards a balanced social psychology: Causes, consequences, and cures for the problem-seeking approach to social behavior and cognition" by J.I. Krueger and D.C. Funder]. Behavioral & Brain Sciences, 27, 348-349.

Kihlstrom, J. F. (2008). The automaticity juggernaut. In J. Baer, J. C. Kaufman & R. F. Baumeister (Eds.), Psychology and free will (pp. 155-180). New York: Oxford University Press.

Kihlstrom, J. F. (2012). The person-situation interaction. In D. Carlston (Ed.), Oxford handbook of social cognition (in press). New York: Oxford University Press.

Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. Goslin (Ed.), Handbook of socialization: Theory in research (pp. 347-479). Boston: Houghton-Mifflin.

Lazarus, R. S. (1968). Emotions and adaptation: Conceptual and empirical relations. Nebraska Symposium on Motivation, 16, 175-266.

Lazarus, R. S. (1981). A cognitivist's reply to Zajonc on emotion and cognition. American Psychologist, 36(2), 222-223.

Lazarus, R. S. (1984). On the primacy of cognition. American Psychologist, 39(2), 124-129.

LeDoux, J. E. (1996). The emotional brain: The mysterious underpinnings of emotional life. New York: Simon & Schuster.

Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral & Brain Sciences, 8, 529-566.

Libet, B. (2002). The timing of mental events: Libet's experimental findings and their implications. Consciousness and Cognition, 11(2), 291-299.

Libet, B. (2006). The timing of brain events: Reply to the "Special Section" in this journal of September 2004, edited by Susan Pockett. Consciousness and Cognition, 15(3), 540-547.

Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activities (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106, 623-642.

MacLean, P. D. (1990). The triune brain in evolution: Role of paleocerebral functions. New York: Springer.

Mikhail, J. (2007). Universal moral grammar: Theory, evidence and the future. Trends in Cognitive Sciences, 11(4), 143-152.

Niedenthal, P. M., Krauth-Gruber, S., & Ric, F. (2006). Psychology of emotion: Interpersonal, experiential, and cognitive approaches. New York: Psychology Press.

Orne, M. T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776-783.

Ortony, A., Clore, G. L., & Collins, A. (1988). The cognitive structure of emotions. New York, NY, US: Cambridge University Press.

Panksepp, J. (1996). Affective neuroscience: A paradigm to study the animate circuits for human emotions. In Emotion: Interdisciplinary perspectives (pp. 29-60). Hillsdale, NJ: Lawrence Erlbaum Associates.

Rogers, C. R., & Skinner, B. F. (1956). Some issues concerning the control of human behavior. Science, 124, 1057-1066.

Ryan, A. (2011). The passionate hero, then and now [review of John Stuart Mill: Victorian Firebrand by R. Reeves]. New York Review of Books, 58(19), 59-63.

Schachter, S., & Singer, J. E. (1962). Cognitive, social, and physiological determinants of emotional state. Psychological Review, 69, 379-399.

Searle, J. R. (2001). Free will as a problem in neurobiology. Philosophy, 72.

Smith, C. A., & Ellsworth, P. C. (1985). Patterns of cognitive appraisal in emotion. Journal of Personality & Social Psychology, 48(4), 813-838.

Wann, T. W. (Ed.). (1964). Behaviorism and phenomenology: Contrasting bases for modern psychology. Chicago: University of Chicago Press.

Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: MIT Press.

Woodworth, R. S. (1929). Psychology: A study of mental life (2nd ed.). New York: Holt.

Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35, 151-175.

Zajonc, R. B. (1984). On the primacy of affect. American Psychologist, 39, 117-123.

 
