

Does Moral Action Depend on Reasoning?


Note: In 2010, I received an invitation from the John Templeton Foundation, whose work focuses on the interface of science and religion, to contribute to their "Big Questions" program, which publishes collections of brief essays in places like the New York Times and The Atlantic. The topic this time was moral reasoning, which has been much debated among psychologists, philosophers, and other cognitive scientists.

Here's how the Foundation posed its "Big Question" for 2010:

In most of the world's philosophical and religious traditions, reason is considered a key element in helping human beings to discern what is morally appropriate and right, and thus in guiding our moral behavior.  In recent years, however, a number of researchers in psychology, neuroscience, and philosophy have made the case that reason does not function in the way these traditions have supposed.

Some argue, for example, that our moral intuitions and ideas are pre-rational or non-rational.  Others see moral decision-making primarily as a result of particular environmental or social contexts.  Still others hold that moral belief and action spring from deep-seated emotional and psychological dispositions about which we have limited or no self-awareness.  Even from many traditional religious points of view, the role of reason can be problematic, since it suggests that we have an ability to know moral truths and live moral lives free from divine guidance. Where and how are the crucial lines to be drawn in this discussion? To what extent are people capable of consciously controlling their moral behavior?  Is our capacity for reasoned moral decision-making real or illusory? Are we as free as we like to think?  Does moral action depend on reasoning?

The Foundation's format requires that essays begin with a flat-footed answer to the question.  It also limited respondents to about 1000 words.  The following version is annotated so that the reader will get a better idea of my arguments.  

Link to edited version, published in the New York Review of Books (May 13, 2010, p. 7) and elsewhere.


Yes, Within Limits

Freedom of the will is real, but that does not mean that we are totally free. Human experience, thought, and action are constrained by a variety of factors, including our evolutionary heritage, law and custom, overt social influences, and a range of more subtle social cues.1 But within those limits, we are free to do what we want, and especially to think what we want, and we are able to reason our way to moral judgments and action.

Many evolutionary psychologists assert that reasoning in general and moral reasoning in particular are constrained by cognitive modules that evolved when we were hunter-gatherers on the East African savannah during the Pleistocene epoch. There is no question that patterns of behavior, just like body morphology, are subject to evolution by natural selection, and it is certainly possible that some aspects of our mental life have evolved in this way.2

But perhaps the more important legacy of evolution is not a "mental toolkit" specifically geared to some "environment of early adaptation" but rather our general intelligence-- an ability to learn and to solve problems that has enabled our species not just to adapt to new environments but to adapt our environments to us. Evolution has also given us a capacity for language, which permits us to conjure, reflect on, and communicate ideas that have never been thought before. These distinctive traits allowed us to move out of our primeval environment and to cover the planet, including permanent human settlements at the Amundsen-Scott South Pole Station ("the last place on earth") and the International Space Station, orbiting some 200 miles in the sky.

Some social psychologists argue that human experience, thought, and action are overwhelmingly controlled by the situations in which they take place, and that therefore personal agency has little or no role in explaining behavior, including moral behavior. On this view, there are no rotten apples, only rotten barrels. This "doctrine of situationism" has descended to us from the stimulus-response behaviorism of John B. Watson and B.F. Skinner, and it is just as wrong-headed.

People control their objective situation through their choices and overt behavior, and they control their subjective situation through their mental activity-- how they perceive and categorize the situation, what relevant knowledge they retrieve from memory, and how they solve the problem of what to do. According to this alternative "doctrine of interactionism," the person and the situation are interdependent, and the situation is at least as much a function of the person as the person's behavior is a function of the situation. The bulk of causal agency remains with the person.3

Some theorists acknowledge that cognitive processes mediate between the situations that we face and our responses to them, but they assert that our thoughts are themselves automatically elicited by features of the situation, in an almost reflexive manner. Because our thoughts and actions occur automatically, they argue, there is little room for conscious, deliberate reflection. We are on automatic pilot most of the time, and conscious will is an illusion.

Such claims for "the automaticity of everyday life" run like a juggernaut through contemporary social psychology, but upon close examination, the evidence supporting them is not very good. There is no question that some aspects of cognition occur automatically. You would never finish reading this essay, for instance, if you had to deliberately piece together every word from its letters and every sentence from its words. But in most everyday situations, once we get beyond the first instant, our experience, thought, and action are largely the product of conscious rather than unconscious processes.4

A variant on the automaticity argument is that moral judgment is driven by emotional "gut feelings" and other intuitions, and that the reasons we give for our actions are largely after-the-fact rationalizations. But it is a mistake to conflate the intuitive with the emotional. Intuition can be purely cognitive, and relying on intuition has its own rational justification. It would be surprising if emotion did not play a role in moral judgment and behavior, but it remains an open question whether that role is central or peripheral. When there is no reason to make one choice over another, it is rational to let emotion be our guide. At least we can feel good about the choice we have made.5

It is easy to contrive thought experiments in which moral reasoning seems to fail us. Most people agree that it is acceptable to divert a trolley that threatens to kill five people onto a track where it will kill just one person instead. On the other hand, most people agree that it is not acceptable to throw someone off a footbridge, in the path of that same trolley, to save those same five lives. From a strictly utilitarian perspective, the two outcomes are the same: five lives saved versus one life lost.

When, in (thankfully) rare circumstances, moral reasoning fails us, we must rely on our intuitions, emotional responses, or some other basis for action. But that does not mean that we do not reason about the moral dilemmas that we face in the ordinary course of everyday living-- or that we reason poorly, or that we rely excessively on heuristic shortcuts, or that reasoning is infected by a host of biases and errors. It only means that moral reasoning is more complex and nuanced than a simple calculation of comparative utilities. Moral reasoning typically occurs under conditions of uncertainty (another constraint, which comes with human existence), where there are no easy algorithms to follow. If a judgment takes place under conditions of certainty, where the application of a straightforward algorithm will do the job, it is probably not a moral judgment to begin with.6

If you believe in God, then human rationality is a gift from God, and it would be a sin not to use it as the basis for moral judgment and behavior. If you do not believe in God, then human rationality is a gift of evolution, and not to use it would be a crime against nature.


John Kihlstrom is a professor of psychology at the University of California, Berkeley and the author of over a hundred scientific articles. He is the former editor of the journal Psychological Science and the co-author, with Nancy Cantor, of Personality and Social Intelligence.



1.  Here I had mostly in mind Martin Orne's notion of demand characteristics, which can be defined broadly as the totality of cues in a situation that contain information about the nature of that situation, and how the person is expected to behave in that situation.  Return.

2.  Ethology, a discipline of biology founded by Niko Tinbergen, Konrad Lorenz, and Karl von Frisch (who together received the Nobel Prize in Physiology or Medicine in 1973), is based on the view that certain patterns of behavior  --  think of food-begging in herring gulls, imprinting in geese, and the waggle dance of honeybees --  are universal within a species precisely because they evolved by natural selection.  There is no reason that natural selection could not have affected the development of certain "habits of mind" as well. A good example, initially proposed by Chomsky and by Fodor, is the set of cognitive modules involved in language.  However, I am deeply skeptical of strong claims by some evolutionary psychologists for the evolution of an extensive set of mental modules, and especially of the claim that any such modules that might exist evolved in the service of our reproductive capacity.  See my "Top 10 Questions to Ask Your Local Evolutionary Psychologist", in preparation.  Return.

3.  What I call the Doctrine of Interactionism has its recent origins in the work of the late Kenneth Bowers (1973), and its deeper roots in the work of Kurt Lewin (1935, 1951).  Social psychologists often attribute the Doctrine of Situationism to Lewin, but this is wrong: Lewin was quite clear that, in his view, the person and the environment were interdependent factors in the cause of behavior.  A more complex version of interactionism is contained in what I call the Doctrine of Reciprocal Determinism, proposed by Albert Bandura, according to which the person, the environment, and behavior constitute a complex, dynamic system characterized by bidirectional causality.   See my "The Person and the Situation in Social Cognition", in preparation.  Return.

4.  Although, as my 1987 Science paper makes clear, I believe that unconscious mental structures and processes play a role in experience, thought, and action, I am skeptical of claims that unconscious, automatic processes dominate our everyday life.  See my essay on "The Automaticity Juggernaut" (2008).  Return.

5.  As part of my attempt to develop a non-Freudian approach to unconscious mental life, my colleagues and I have proposed a concept of "implicit thought" which may well play a role in intuition.  See my papers, written with Jennifer Dorfman and Victor Shames, on "Intuition, Incubation, and Insight: Implicit Cognition in Problem-Solving" (1996) and "Intimations of Memory and Thought" (1996).  Return.

6. I often suspect that there is a sort of "People Are Stupid" school within psychology, especially social psychology, which assumes that people don't think very hard about what they are doing --  and when they do think, they don't think very well.  I vented this suspicion in a commentary, "Is There a 'People Are Stupid' School in Social Psychology?". Return.



This page last revised 06/22/2010 02:17:30 PM.