

Social Judgment and Inference

For an overview of theory and research on reasoning, problem-solving, judgment, and decision-making, see the General Psychology lecture supplements on Thinking.

 

Social cognition begins with social perception, and social perception has the same purpose as nonsocial perception:

Accordingly, the social perceiver has four fundamental tasks:

 

A Brief History of Causality

Philosophical analyses of judgments of causality go back at least as far as Aristotle.  

As a first pass, we should distinguish between two kinds of relations between cause and effect:

  • Necessary causes must be present for the effect to occur.  But by themselves they may not be sufficient to cause the effect, because they must combine with other causes.
  • Sufficient causes are sufficient by themselves to cause the effect.  But they may not be necessary to cause the effect, because other causes may suffice as well.
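To make the distinction concrete, here is a toy illustration in Python (the fire example is my own, not from the lecture):

    # Toy model of necessary vs. sufficient causes (hypothetical example).
    # Oxygen is necessary for a fire but not sufficient; given oxygen, a lit
    # match or a lightning strike is each sufficient, but neither is necessary.
    def fire(oxygen, match, lightning):
        return oxygen and (match or lightning)

    assert not fire(oxygen=False, match=True, lightning=True)   # necessary: no oxygen, no fire
    assert not fire(oxygen=True, match=False, lightning=False)  # but oxygen alone is not enough
    assert fire(oxygen=True, match=True, lightning=False)       # a match suffices (with oxygen)...
    assert fire(oxygen=True, match=False, lightning=True)       # ...but is not necessary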

Aristotle taught that knowledge of a thing requires description, classification, and causal explanation.  He distinguished among four types of causality:

  • Material Cause: the substance of which a thing is made;
  • Formal Cause: the design of the thing;
  • Efficient Cause: the maker or builder of the thing; and
  • Final Cause: the purpose or function of the thing.

Hume provided the classic analysis of the conditions for inferring causality:

  • Temporal Succession
  • Contiguity (spatial and temporal)
  • Constant Conjunction

Mill's methods for inferring causality:

  • Agreement
  • Difference
  • Residues
  • Concomitant Variation

Koch's Postulates for establishing the cause of an infectious disease:

  • The micro-organism must be found in all cases of the disease.
  • The micro-organism must be isolated and grown in culture.
  • Inoculation with the isolated micro-organism must produce new cases of the disease (at least in susceptible individuals).
  • The micro-organism must be re-isolated from newly diseased individuals.

Causal inference in epidemiology (Bradford Hill, 1965).  Example: smoking and health.

  • Strength of Association
  • Consistency
  • Specificity
  • Temporality
  • Biological Gradient (dose-response curve; threshold effects)
  • Plausibility
  • Coherence
  • Experiment
  • Analogy (laboratory models)

Stephen Kern traces the history of our understanding of the causes of human behavior through an analysis of novelistic depictions of murder in A Cultural History of Causality (Princeton University Press, 2004).


Lewin's Formula

Social-psychological theorizing on attributions of causality, or causal explanation generally, begins with "Lewin's Grand Truism", B = f(P, E).  Behavior is caused by factors internal to the person acting in combination with factors in the external environment.  Accordingly, social psychologists sought to determine precisely what those factors were.



It is one thing to identify the actual causes of social behavior in the person and the environment, but that is not really what social cognition is all about.  Social cognition seeks to understand how the individual explains what happens.  These explanations may be quite different from those that a scientist might give.  But it is these explanations that will determine how a person reacts to what other people do.


Phenomenal Causality

In a sense, all of science is about establishing causality, and that goes for psychology as well as physics and biology.  What causes us to perceive and remember things the way we do?  What causes individual differences in personality?  But as Heider (1944, 1958) noted, in social cognition the problem of causal attribution is one of phenomenal causality -- not so much what actually causes events to occur (which is a problem for behavioral and social science), but rather the "naive" social perceiver's intuitive, subjective beliefs about causal relations.

 

Michotte

The empirical study of causal inferences begins with work by Michotte (1946, 1950), who asked subjects to view short animated films depicting interactions among colored disks.

In Demonstration 1, the red disk moves into contact with the blue disk, which moves immediately.  In this case, subjects typically believed that the movement of the red disk triggered the movement of the blue disk -- what Michotte termed the launching effect.  When the red disk contacted the blue disk, as in Demonstration 2, but the blue disk moved only after some delay, the launching effect diminished.

 

Subjects also perceive causal relations when the two disks move together, even when they do not touch.  Michotte termed this the entrainment effect.  In Demonstration 3, the larger red disk tends to be perceived as chasing the smaller blue disk, and the blue disk as running away.  In Demonstration 4, the larger red disk tends to be perceived as leading the smaller blue disk, and the blue disk as following.  Note the ascription -- the attribution of human motives, such as chasing and running away, leading and following -- to these inanimate objects.

While Hume argued that causality was inferred from such stimulus features as spatiotemporal contiguity, Michotte argued that causality was perceived directly, without need for inferences (much as Gibson proposed that such properties as distance and motion were perceived directly, without the involvement of any "higher" cognitive processes).  But in the present context, Michotte's most interesting observations concerned the attribution of mental states to inanimate objects.  When ordinary people describe causality, they very often do so in anthropomorphic terms, ascribing human motives to nonhuman objects.  Phenomenal causality is very often experienced in social terms.

Link to a YouTube video of the Michotte animation.

 

Heider: "Behavior Engulfs the Field"

Still, Michotte was not a social psychologist, and he was not particularly interested in social cognition.  Within social psychology, the study of phenomenal causality, and of causal attribution, really begins with Fritz Heider (1944, 1958).  Like Lewin and Asch, Heider (1896-1988) was a refugee from Hitler's Europe; and like Lewin and Asch, Heider was greatly influenced by the Gestalt school of psychology. 



Echoing Lewin's formula, B = f(P, E), Heider argued that there were two possible causes of behavior:

In this way, as we will see, Heider set the stage for a major line of theorizing about causal attribution.

Unlike Lewin, however, he was not concerned about the actual causes of behavior, but rather the perceived causes.  Determining the actual causes of behavior is the job of a professional psychologist, and (from Heider's point of view) social cognition is concerned with how the naive, intuitive, "lay" psychologist judges causes.  So how does a person go about making attributions of causality?

Heider's own solution was the statement that "Behavior engulfs the field".  Because we have no direct access to another's mental states, information about persons is provided by their behavior.  Thus, the perceptual field consists of the actor and his or her behavior.

Heider further noted a general tendency for people to ascribe behavior to the actor.  On the assumption that actors intend the outcomes of their actions, once an action has been categorized, the intention is given; and then the intention itself is derived from stable traits and other characteristics of the person.

This formulation is an extension of the Gestalt principle that "the whole is greater than the sum of its parts".  For Heider, the actor and the act are perceived as a unit, joined together by perceived cause and effect, in such a way that the actor is perceived as the cause of his or her action.

Note, too, that for Heider explanation stops at the psychological level of analysis.  For Heider, Sue struck John because she was angry.  There is no reduction to underlying physiological processes, which would involve an infinite regress of causal explanations.  Heider is concerned with phenomenal causality, and with explanations in terms of "folk" or "common-sense" psychology.

Like Michotte, Heider employed a film to explore phenomenal causality.  In the film, two triangles -- one large (T) and one small (t) -- and a small circle (c) move in and out of a rectangular enclosure.  But Heider noticed that his subjects typically described the film in terms of a "plot" that attributed feelings and desires to the objects.  In other words, subjects organized the chaos of the film by attributing intentionality to the objects -- giving the film a narrative structure in which the objects behaved in accordance with motives and desires.

Link to the Heider/Simmel animation on YouTube.

Here is Heider's own description of the film (Heider & Simmel, 1944, p. 245):

Or:

The Michotte and Heider-Simmel animations undoubtedly set the stage for The Red Balloon (1956), a French short feature directed by Albert Lamorisse.  The film portrays the adventures of a little boy (played by the director's son, Pascal) who finds a balloon whose movements have all the hallmarks of sentience and agency.  The film won the Palme d'Or for short films at the Cannes Film Festival, and an Oscar for best original screenplay.

Link to "The Red Balloon" on YouTube.


Naive Analysis of Action

For Heider (1958), the key to interpersonal relations is the perception of other people's behavior.  We know what the other person (P) does; how we respond will depend on why we think s/he did it.  Following "Lewin's grand truism", B = f(P, E), Heider believed that people attributed individual behavioral acts to relatively unchanging dispositional properties.  But unlike some later interpretations, "disposition" did not, for Heider, refer only to personal dispositions, such as personality traits; there were also environmental dispositions -- features of the environment that tended to elicit certain behaviors. 

The motivational factor, in turn, creates a distinction between personal and impersonal causality.

Even in cases of personal causality, though, Heider thought that naive observers distinguished among various levels of responsibility.

  1. At Level 1, the most primitive level, P is held responsible for any effect associated with him or her.
  2. At Level 2, P is held responsible for any outcome that is attributed to him/her, even if s/he could not have foreseen that outcome.
  3. At Level 3, P is held responsible for outcomes that were foreseen, and also for outcomes that were unforeseen due to carelessness or lack of concern.
  4. At Level 4, P is held responsible only for outcomes that were intentionally produced.
  5. At Level 5, the most advanced level, P's responsibility is diminished for intended outcomes when circumstances justify or constrain his/her actions.

Although this is all a theoretical analysis, a study by Shaw & Sulzer (1964) largely verified these predictions.

For more details of Heider's analysis of the naive analysis of action, see Shaw and Costanzo, Theories of Social Psychology (1e, 1970), from which this account is taken.
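The five levels can also be summarized as a decision rule.  Here is a minimal sketch in Python; the boolean predicates (associated, caused, foreseeable, intended, justified) are illustrative labels of mine, not Heider's own terms:

    # Would a perceiver reasoning at a given level hold P responsible?
    def held_responsible(level, associated=False, caused=False,
                         foreseeable=False, intended=False, justified=False):
        if level == 1:   # association: any effect connected with P
            return associated
        if level == 2:   # causality: any outcome P produced, foreseen or not
            return caused
        if level == 3:   # foreseeability: foreseen or carelessly unforeseen outcomes
            return caused and (foreseeable or intended)
        if level == 4:   # intentionality: only outcomes P meant to produce
            return caused and intended
        if level == 5:   # justification: circumstances mitigate intended outcomes
            return caused and intended and not justified
        raise ValueError("level must be 1-5")

    # An unforeseeable side effect of P's act: responsible at Level 2, but not at Level 4.
    print(held_responsible(2, associated=True, caused=True))  # True
    print(held_responsible(4, associated=True, caused=True))  # False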


Correspondent Inference Theory

The first formal theory of causal attribution was proposed by Jones and Davis (1965), as an elaboration on Heider's principles.  Their correspondent inference theory is based on what they called the action-attribute paradigm:

In this way, we make correspondent inferences -- that people's actions correspond to their intentions, and that their intentions correspond to their personal qualities, namely traits and attitudes.  In correspondent inferences, an act and an attribute are similarly described -- e.g., a dominant person behaves in a dominant manner.  This is known as the attribute-effect linkage:

Given an attribute-effect linkage which is offered to explain why an act occurred, correspondence increases as the judged value of the attribute departs from the judge's conception of the average person's standing on that attribute.

Thus, correspondent inferences are predicated on implicit personality theory -- the observer's intuitions about trait-behavior relations, and about how the "average person" would behave.
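As a toy formalization of the principle -- my own simplification, not Jones and Davis's formula -- correspondence can be treated as the distance between the attribute implied by an act and the judge's conception of the average person:

    # Correspondence grows as the judged attribute departs from the perceived
    # "average person's" standing on that attribute (arbitrary 1-9 scales).
    def correspondence(judged_standing, average_standing):
        return abs(judged_standing - average_standing)

    print(correspondence(judged_standing=6, average_standing=5))  # 1: a weak inference
    print(correspondence(judged_standing=9, average_standing=5))  # 4: a strong inference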

 

Undesirable and Unexpected Effects Provide Information

Jones and Davis understood that acts are not performed in isolation, and that usually actors have a choice between plausible alternatives.  In making causal attributions, people pay attention to:

Noncommon effects are assumed to reflect the actor's intended outcomes, and so correspond to his dispositions.

Jones and Davis argued that correspondence is greater for undesirable effects, which can't be predicted from knowledge of social norms, and which provide information about the dispositions of the individual.  

Later, Jones & McGillis (1976) substituted expectancies for desirability.  That is, correspondence is greater for unexpected effects; expected effects can be predicted from social norms.


"Since no one is to blame, I demand no explanation"
Victor Laszlo to Rick Blaine in Casablanca (1943)

 

The Attitude-Attribution Paradigm

The first empirical test of correspondent inference theory was reported by Jones and Harris (1967).  Their study employed an attitude-attribution paradigm in which subjects were asked to judge the attitude of another person based on the target's expressed opinion and the context in which the expression occurred.  Debaters gave speeches that either favored or opposed recognition of the Castro regime in Cuba, or favored or opposed racial segregation (note that in both cases, at the time, the "con" side was normative).  The judges were further informed that the speeches were made under two conditions: choice (the debater could choose which side he would present) or no choice (the debater was assigned a position by his coach).


In the Castro debate, pro-Castro speakers were rated as actually favoring Castro.  This was especially true in the choice condition, but it was also true in the no-choice condition.  Here is a good example of the role of desirability and expectation: in the United States in 1967, even among college students, favoring Castro was both undesirable and unexpected.  So, speakers who favored Castro were rated as actually having pro-Castro attitudes.

 


The results for the segregation debate were a little more complex, and depended on whether the debater was characterized as from the North or the South.  Pro-segregation speakers were generally rated as favoring segregation -- especially if they had a choice, or came from the North.  Again, though, we see the role of desirability and expectation.  Given the stereotypes about the American North and South that prevailed in 1967, we would expect Northerners to oppose racial segregation; Southerners might support racial segregation, or they might simply say they do, in order to conform with local (Southern) norms.  But a Northerner who advocates racial segregation is doing something that is both undesirable and unexpected, from a "Northern" point of view.  Therefore, the inference is that Northerners who spoke favorably about racial segregation really did favor the practice.

The general finding of this experiment, then, was that attitudes were attributed in line with the speaker's behavior. 

Later, Lee Ross (1977) would argue that this general neglect of situational constraints reflected the Fundamental Attribution Error: the general tendency to underestimate the role of situational factors in behavior, and to overestimate the role of personal dispositions.  But we're not there yet.

 

The Covariation Calculus for Causal Attribution

A further analysis of causal attribution was provided by Harold Kelley (1967, 1971), who also expanded on Heider's insights.  Kelley noted that we do not always simply attribute an action to the actor's dispositions.  Sometimes we make more complex and subtle causal judgments. Kelley proposed a covariation model of causal attribution based on the statistical analysis of variance (ANOVA).  He proposed that we infer causality from multiple observations of behavior, just as a formal experiment entails multiple trials.


Based on Lewin -- everything in social psychology goes back to Lewin! -- he argued that there are three principal causes of behavior:

  • the Actor -- something about the person who performs the behavior;
  • the Target -- something about the person or object toward which the behavior is directed; and
  • the Context -- something about the situation or circumstances in which the behavior occurs.

Following the ANOVA model, Kelley also argued that it was possible to have joint causes -- for example, an interaction between the actor and the target.

From multiple observations of behavior, Kelley proposed that we extract three kinds of information relevant to causal attribution:

  • Consensus: do other actors behave in the same way toward the same target?
  • Consistency: does this actor behave in the same way toward this target on other occasions?
  • Distinctiveness: does this actor behave differently toward other targets?

In making causal attributions, Kelley argued that the judge performs a "naive experiment" in which each kind of information varies one principal cause while holding the others constant.  By observing the pattern across all 8 possible combinations (2x2x2) of high and low consensus, consistency, and distinctiveness, the judge can determine which element covaries with the behavior, and then infer that this element is the cause of the behavior.

Consider, for example, the following observation: John laughed at the comedian.  Depending on the particular pattern of consensus, consistency, and distinctiveness information available, the perceiver would be expected to make different attributions for John's behavior.  (Slide: President Obama laughing at a White House Correspondents' Dinner).




The result is what we may call (following Brown, 1986) a covariation calculus for causal attribution -- a framework for analyzing data about causation.

 

 

 

Anecdotal Evidence

In his original paper, Kelley did not actually provide a test of his "calculus".  But intuitively it feels right, and everyday observation provides anecdotal evidence that people do seem to use something like it.

In the movie 10 (1979; directed by Blake Edwards), Dudley Moore plays George Webber, a Hollywood songwriter who should be happy in his marriage (Julie Andrews plays his wife), but who undergoes a midlife crisis stimulated by a chance encounter with a beautiful, younger woman (Bo Derek) who is on her way to her wedding.  On impulse, he follows her to Mexico on her honeymoon.  One evening, in the hotel bar, he runs into Mary Lewis (played by Dee Wallace), who recognizes him from one of Truman Capote's parties.  They retire to his room, but when George cannot consummate their affair, the following conversation ensues as they lie in bed:

 

Mary:      George?
George:    Huh?
Mary:      Is it me?
George:    No.
Mary:      Yes it is.
George:    OK.
Mary:      Is it?
George:    No!
Mary:      It is me, isn't it?
George:    No!
Mary:      Has it ever happened to you before?
George:    [Shakes his head in silence.]
Mary:      Well, it's happened to me before!

[She gets up, gathers her clothes, and leaves]


Mary has determined that the problem happens with her but not with George: the effect covaries with her, holding George constant.  Therefore, the cause is her -- at least, so it seems to her, which is all that matters in the present context.

Here's another example (this one from Roger Brown, 1986).  On December 3, 1979, the rock group The Who began its American tour with a concert in Cincinnati, Ohio.  The concert sold out -- half of the seats reserved in advance, half sold for general admission.  When only a few doors opened, there was a stampede among the general-admission ticket-holders, resulting in 11 deaths.  The next concert, in Providence, Rhode Island, was canceled; and when the open date was offered to Portland, Maine, that town refused.  On both occasions, politicians and newspaper editorials blamed The Who and its fans for the deaths in Cincinnati -- as if the sorts of people who liked The Who were the sorts of people who would kill each other to get a good seat.

The other cities on the tour schedule went ahead with their concerts, and there were no other incidents.  At the end of the tour, the general conclusion was that the tragedy was due to the specific situation in Cincinnati: sold-out general admission seats, coupled with the fact that only a few doors were open to allow people into the hall.

Here, again, the determination was that the effect co-varied with the circumstances, holding The Who (and its fans) constant.

 

Experimental Evidence

More relevant, however, is actual experimental evidence.  A classic study by Leslie McArthur (1972) -- McArthur is now Leslie Zebrowitz, the prominent authority on face perception -- constituted the first experimental test of the covariation calculus.  McArthur presented subjects with event scenarios consisting of the description of an event, accompanied by consistency, distinctiveness, and consensus information -- to wit:



After reading each scenario, the subjects were asked to choose among five alternative causes:

The results confirmed the essential predictions of the covariation calculus.

  • When the actor's consistency is high, but the distinctiveness of his action is low, and the consensus among other actors is also low, attributions for the behavior are driven toward the actor.
  • If consistency remains high, but distinctiveness is high and consensus is high, attributions for the behavior are driven toward the target.
  • If consistency is low, but distinctiveness is high and consensus is high, attributions for the behavior are driven toward the context.
  • If consistency is high, and distinctiveness is high, but consensus is low, subjects don't quite know what to do, but they tend to attribute the behavior to the interaction or the relationship between the actor and the target.

Here are the basic elements of the covariation calculus -- or, as Roger Brown called it, a "pocket calculus" for making causal attributions.
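The pocket calculus can be written down as a simple lookup table.  Here is a minimal sketch in Python; the high/low coding and the attribution labels are just one way of encoding McArthur's four patterns:

    # Brown's "pocket calculus": patterns of (consensus, distinctiveness,
    # consistency) information mapped onto causal attributions (McArthur, 1972).
    POCKET_CALCULUS = {
        ("low",  "low",  "high"): "actor",    # only John; at every comedian; every time
        ("high", "high", "high"): "target",   # everyone; only this comedian; every time
        ("high", "high", "low"):  "context",  # everyone; only this comedian; just this once
        ("low",  "high", "high"): "actor-target interaction",  # only John; only this comedian; every time
    }

    def attribute(consensus, distinctiveness, consistency):
        return POCKET_CALCULUS.get((consensus, distinctiveness, consistency),
                                   "mixed pattern -- no clear attribution")

    # "John laughed at the comedian": hardly anyone else laughs at him, John
    # laughs at every comedian, and he always laughs at this one -> it's John.
    print(attribute(consensus="low", distinctiveness="low", consistency="high"))  # actor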

 

 

 

The covariation calculus is not all that is involved in causal attribution.  McArthur's and later research determined that there are other principles operating as well:

 

The Covariation Calculus and Correspondent Inference

Jones and McGillis (1976) noted that both the covariation calculus and correspondent inference theory assume a "naive scientist" -- that perceivers intuitively perform something like an experimental analysis to isolate distinctive causes.  In correspondent inference, the "experiment" focuses on noncommon effects, while the covariation calculus focuses on the locus of variance.

In some respects, correspondent inference generates a hypothesis which is then tested by the covariation calculus.  If the correspondent inference is correct, then the pattern of consensus, consistency, and distinctiveness information should lead to an attribution to the actor.  

But the covariation calculus stops at the level of "something about the actor"; it doesn't say what that "something" is.  If that is what the pattern of information shows, then correspondent inference goes beyond the calculus to make an attribution to some specific trait of the actor.

 

Dimensions of Causal Attribution

In the discussion so far, consensus, consistency, and distinctiveness were characterized as dichotomous variables -- high vs. low.  But obviously each of these represents a continuously graded  dimension. 

Additional dimensions were added by Bernard Weiner (1971), a colleague of Kelley's at UCLA, who was interested in causal attributions for success and failure as part of his cognitive theory of achievement motivation.  

In Weiner's view, people's desire to work hard at challenging tasks was determined by their beliefs about why they succeeded, or failed, at those tasks. According to Weiner, people primarily attribute success or failure to one of four factors: ability, effort, task difficulty, and luck.


 So, according to Weiner's theory, in making attributions about success and failure people pay attention to information about internality and stability, and make causal attributions accordingly:

 

              Internal        External
Stable        Ability         Task Difficulty
Variable      Effort          Luck

 To illustrate the process, consider the following scenarios.

 1.  John passed the test (few other students did).

Why did John pass the test?  Because of some stable attribute of John that other people didn't have.  In other words, John is smart.

 2.  John passed the test (as did everyone else).


Why did John pass the test?  Because of some attribute that he shared with everyone else -- namely, the test was easy.

Like Leslie McArthur, Irene Frieze, a student of Weiner's, did a dissertation showing that people actually did use consensus and consistency information to compute internality and stability, and thus make "logically" correct attributions for success and failure.

Later, Lyn Abramson, Lauren Alloy, and their colleagues added a third dimension: globality.

3.  John failed the math test.
Why did John fail the math test?  Because he lacks the specific ability to do math.

As these examples suggest, Weiner's dimensions of internality and stability are closely related to Kelley's dimensions of consensus and consistency (Read & Stephan, PSPB 1979).  High consistency increases attributions to stable factors, while low consistency increases attributions to variable factors.  High consensus increases attributions to external factors, while low consensus increases attributions to internal factors.
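This mapping is easy to state as a lookup.  Here is a minimal sketch in Python, combining Weiner's 2x2 table with the Read and Stephan correspondence; the high/low coding is an illustrative simplification:

    # Weiner's taxonomy, indexed by (locus, stability).
    WEINER = {
        ("internal", "stable"):   "ability",
        ("external", "stable"):   "task difficulty",
        ("internal", "variable"): "effort",
        ("external", "variable"): "luck",
    }

    def weiner_attribution(consensus, consistency):
        # High consensus -> external locus; high consistency -> stable cause.
        locus = "external" if consensus == "high" else "internal"
        stability = "stable" if consistency == "high" else "variable"
        return WEINER[(locus, stability)]

    # John alone passed (low consensus), as he reliably does (high consistency):
    print(weiner_attribution(consensus="low", consistency="high"))   # ability
    # Everyone passed (high consensus), as they always do (high consistency):
    print(weiner_attribution(consensus="high", consistency="high"))  # task difficulty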

 

Attributional Style

In their clinical research, Abramson, Alloy, and their colleagues noticed that depressed individuals often made internal, stable, and global attributions concerning their own negative outcomes -- that is, they tended to blame themselves for the bad things that happened to them, and these attributions extended to many different aspects of their lives.  Now it's one thing to take responsibility for the occasional bad event.  It's something else to take responsibility for every bad thing that happens to you.  If you do that, you're bound to get depressed!

In their cognitive theory of depression, Abramson and Alloy proposed that a characteristic way of making causal attributions -- attributing bad events to internal, stable, and global factors -- was a cognitive style that rendered people vulnerable to depression in the face of a negative experience.  And they developed the Attributional Style Questionnaire (ASQ) to measure this cognitive tendency.  The ASQ was later revised by Christopher Peterson, and it's this form -- one for adults, another for children -- that's in use today.

The ASQ poses a number of situations, such as:

You meet a friend who compliments you on your appearance.

First, the subjects are instructed to vividly imagine the event happening to them.  Then, in a "free listing" portion of the item, they are asked to write down "the one major cause" of the event.  Then they are asked to use 1-9 rating scales to answer four questions about the event.

  1. Is your friend's compliment due to something about you (9) or something about the other person or circumstances (1)?
  2. In the future when you are with friends, will this cause be present again?  (Never = 1; Always = 9)
  3. Is the cause something that just affects interacting with friends or does it also influence other aspects of your life?  (Just this particular situation = 1; Influences all situations in my life = 9.)
  4. How important would this situation be if it happened to you?  (Not at all = 1; Extremely = 9).

These questions are intended to gather information about the perceived internality, stability, and globality of the cause, and the personal importance of the event, respectively.  The person's free responses can also be coded for the three attributional dimensions, although this information does not normally enter into the scoring of the scale.  

Similar questions were asked concerning other scenarios, such as:

Half of the scenarios in the ASQ are positive, the others are negative.  

Based on these responses, we can compute scores representing the person's tendency to make internal (Question #1), stable (#2), and global (#3) attributions -- especially in situations that are important to that individual (#4) -- basically by summing the scores across items and giving special weight to "important" items.
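To make the scoring idea concrete, here is a sketch in Python; the items and the importance-weighting scheme are assumptions for illustration, not the ASQ's published scoring key:

    # Composite ASQ-style scores: average the internality, stability, and
    # globality ratings (1-9) across items, weighting each item by its
    # rated importance.
    def asq_composites(items):
        totals = {"internal": 0.0, "stable": 0.0, "global": 0.0}
        total_weight = 0.0
        for item in items:
            weight = item["importance"]
            for dim in totals:
                totals[dim] += weight * item[dim]
            total_weight += weight
        return {dim: round(value / total_weight, 2) for dim, value in totals.items()}

    negative_events = [
        {"internal": 8, "stable": 7, "global": 8, "importance": 9},  # an important event
        {"internal": 5, "stable": 4, "global": 6, "importance": 3},  # a minor one
    ]
    print(asq_composites(negative_events))  # high values suggest an ISG style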

Abramson and Alloy posited that a tendency to make internal, stable, and global attributions, especially for important negative events, was a risk factor for depression.  Not a cause of depression, but a risk factor.  In technical jargon, depressogenic attributional style is a diathesis that interacts with environmental stress to produce an episode of illness.  That is, if something important and negative happens to such a person, he or she is likely to take responsibility for it and get depressed.  

A person who made external attributions, or thought it was just a one-time thing, or was able to compartmentalize the event, would react differently.  But it's the combination of negative events, coupled with a tendency to make internal, stable, global (ISG) attributions about that event and similar events that can lead you to be depressed.

Abramson and Alloy, and their colleagues, have demonstrated that the ISG attributional style is, in fact, correlated with depression.  But that's just a correlation.  Based on such studies, we don't know whether the attributional style causes the depression, or the depression causes the attributional style.  

But David Cole and his colleagues at Vanderbilt University did a study of depression in children that allowed them to tease apart cause and effect.  They measured attributional style, negative life events, and depressive symptoms on four different occasions, every 12 months, beginning as early as second grade (Journal of Abnormal Psychology, 2008).

  • They found, first, that internality, stability, and globality didn't "congeal" into a stable attributional style until about middle school or high school.
  • Second, they found that the ISG attributional style did, in fact, interact with negative life events to produce depression -- but only after about 8th grade.  Before that, there were no systematic effects of attributional style on depression.

Attributional style is a mode of thought that is presumably acquired through a history of social learning.  And, in fact, Abramson and Alloy have suggested that a form of cognitive therapy, aimed at changing the patient's attributional style, might be an effective treatment for depression.  Similarly, Cole's results suggest that acting early, to change the way children think about the events in their lives, might prevent them from getting depressed in the first place.  The point is that attributional style isn't carried on the genes -- it's something that can be learned and unlearned.

 

Single Observations

Kelley (1972) noted that the covariation calculus cannot be applied unless there are multiple observations of behavior.  That's because the covariation calculus depends on observations of covariation, and there's no variation in a single observation.

 

Causal Schemata

In the case of single observations, the judge can make correspondent inferences.  Or, he can rely on causal schemata -- abstract ideas about causality in various domains.  Based on the presence or strength of an effect, a judge can use causal schemata to make inferences about the cause of the effect.

  • Multiple Sufficient Causes: There are many potential causes, any one of which would be sufficient to produce an effect, but none of which is necessary.  Some causes are facilitatory, while others are inhibitory.  Given the presence or absence of an effect (a toy sketch of these principles follows this list):
    • If one facilitatory cause is known to be present, other facilitatory causes are discounted.
    • If an effect is observed despite the presence of an inhibitory cause, the facilitatory cause is augmented.
    • If an outcome is not observed despite the presence of a facilitatory cause, the inhibitory cause is augmented.
  • Multiple Necessary Causes: There are many causes, none of which is individually sufficient to produce an effect, but all of which are necessary.
    • If an outcome is present, we can infer the presence of all the causes as well.
    • If an outcome is absent, but one cause is present, we can infer the absence of at least one of the other causes.
  • Compensatory Causes: There are multiple causal factors, varying in strength (not just presence or absence).
    • If an outcome is present, and one cause is weak, we can infer that the other cause is strong.
  • Graded Effects: There are multiple causal factors, varying in strength, but the outcome varies in strength as well.
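Here is the toy sketch promised above, rendering discounting and augmentation under the multiple-sufficient-causes schema in Python; the numerical adjustments are arbitrary illustrations, not part of Kelley's theory:

    # Confidence that a candidate facilitatory cause produced an observed effect.
    def belief_in_cause(other_facilitator_known, inhibitor_present, prior=0.5):
        belief = prior
        if other_facilitator_known:
            belief -= 0.25  # discounting: another sufficient cause could explain the effect
        if inhibitor_present:
            belief += 0.25  # augmentation: the effect occurred despite an obstacle
        return max(0.0, min(1.0, belief))

    # Another sufficient cause is known to be present -> discount the candidate:
    print(belief_in_cause(other_facilitator_known=True, inhibitor_present=False))   # 0.25
    # The effect occurred despite an inhibitory cause -> augment the candidate:
    print(belief_in_cause(other_facilitator_known=False, inhibitor_present=True))   # 0.75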
While the covariation calculus is an abstract framework for making causal attributions, causal schemata are specific to some domain.  For example, people might view achievement as caused by some combination of ability and effort.  People might have different causal theories in different domains.  

Consider, for example, the domain of college admissions.  There are lots of different reasons why an applicant might be admitted to college: good grades, high test scores, playing the oboe, or being a champion basketball player, to name a few.  None of these causes is necessary, but any one of them might be sufficient.  Thus, the applicable causal schema is multiple sufficient causes.  If a talented oboe player gets into Harvard despite having relatively low SATs, then we attribute this outcome to the fact that the Harvard student orchestra needs an oboe player.  If he has good SATs, but got into Harvard when someone else with comparable scores did not, then we can discount his SAT scores and augment his oboe-playing.

Suppose that you hold the theory that achievement requires both high ability and serious effort.  As the old joke goes, there's only one way to get to Carnegie Hall: you've got to have talent and you've got to practice.  Thus, the applicable schema is multiple necessary causes.  If a student graduates with honors, then -- if you hold this theory -- you can infer that he's both smart and studied hard.  But if he studied hard and still didn't do well, then you can infer that he's just not very smart.

But suppose you hold a variant on this theory, which holds that, when it comes to academic achievement, high levels of one factor can make up for low levels of another factor.  In this case, the applicable schema is compensatory causes.  If a student passes a course despite not working very hard, then we infer that he must be really smart.  If a student passes despite low aptitude test scores, then we infer that he must have worked really hard.

The examples discussed so far involve dichotomous effects -- admitted or rejected, honors or not, pass or fail.  But, of course, sometimes outcomes vary along a continuum -- like the A-F scheme for college grades.  This situation calls for the graded effects schema.  Suppose that you held the view that high ability and great effort are required to get an A in a course, but high levels of one or the other would be enough to get a C or a B; if both are relatively lacking, the student will flunk.  If you observe a student get an A, then you can infer both that he is smart and that he tried hard; if he gets a B, despite being smart, you infer that he didn't try very hard.  If he gets a D, maybe he was lacking in both areas.


Counterfactual Reasoning in Causal Attribution

Barbara Spellman has noted that causal attribution in legal and medical cases always involves singular events, which means that judges (by which we mean jurors) can't apply the covariation calculus either.  Because any individual crime is committed only once, and any episode of illness occurs only once, we can't even construct a 2x2 table crossing action/no action with outcome/no outcome.  Instead, she has suggested that jurors and physicians apply a kind of counterfactual reasoning.

To take a medical example of reasoning about the cause of cancer in a patient:

She further suggests that, under these circumstances, people attribute responsibility by a principle of counterfactual potency:


Linguistic Schemata

Causal attribution sometimes involves something like a calculus, and sometimes involves something like a schema, but causal attribution can also be implicit in language.

Consider the following examples of anaphoric reference from Garvey and Caramazza (1974):

  • Jane hit Mary because she had stolen a tennis racket.
    • In this sentence, judges say that the pronoun she refers to Mary.
  • Jane angered Mary because she had stolen a tennis racket.
    • But in this sentence, they say that the pronoun she refers to Jane.

Why do people attribute the hitting to Mary, but the "angering" to Jane?

Here is another question of anaphoric reference, from recent work by Joshua Hartshorne:

  • In the sentence, Sally frightened Mary because she is strange, subjects interpret she as referring to Sally.
  • But in the sentence, Sally feared Mary because she is strange, they interpret the same pronoun as referring to Mary.

Brown and Fish (1983) observed a linguistic bias in causal attributions.

Consider the following problem: Ted helps Paul.  Why?

  • Ted is the kind of person who helps people.
  • Paul is the kind of person whom people help.
  • Some other reason.
Note that the problem entails a single observation, so the covariation calculus cannot apply.  As it happens, people overwhelmingly make the causal attribution to Ted rather than Paul.  The same thing happens with a number of other verbs, such as cheats, attracts, and troubles.

 

 

On the basis of such observations, we might hypothesize that there is yet another schema for causal inference, based on the syntax of language -- to wit: Causal attribution is biased toward the grammatical subject of the sentence.

But this isn't always the case.  If the problem is changed to Ted likes Paul, the attribution shifts from Ted, the subject of the sentence, to Paul, the object.  This also happens with other verbs, such as detests and notices.

 

 

When Ted charms Paul, the attribution is to Ted; but when Ted loathes Paul, the attribution is to Paul.  What makes the difference?

 

 

 

Brown and Fish noted that the pattern of causal attributions seemed to be related to the derivational morphology of verbs.

  • In the case of Ted helps Paul, the adjective derived from the verb help is helpful, and it is a quality that is attributive to the grammatical subject of the sentence.
  • In the case of Ted likes Paul, the adjective derived from like is likable, and it is attributive to the grammatical object of the sentence.
  • In the case of Ted charms Paul, the derived adjective is charming, attributive to the subject.
  • In the case of Ted loathes Paul, the derived adjective is loathsome, attributive to the object.

Based on this analysis, we might hypothesize a more complex linguistic schema: Greater causal weight is given to the grammatical subject or the grammatical object of a sentence, depending on whether the adjective derived from the verb is attributive to the subject or the object.

That's a mouthful, but the fact that it's a mouthful isn't its only problem.  The real problem is that, in order to apply this syntactical schema, judges would have to consult a dictionary every time they made causal attributions!

Brown and Fish found the solution to this problem by shifting attention from the syntactic roles of subject and object in the grammar of a sentence to their semantic roles.  As the UCB linguist Charles Fillmore noted, there are two pairs of such roles:

  • Agent and Patient: In the case of behavioral action verbs, such as helps in Ted helps Paul, the Agent causes or instigates some action, and the Patient is the recipient of the action.
  • Stimulus and Experiencer: In the case of mental state verbs, as likes in Ted likes Paul, the Stimulus gives rise to a certain experience while the Experiencer has that experience.

Brown and Fish further noted a relationship between semantic role and derivational morphology:
  • Agents almost always take the subject role, as in Ted helps Paul.  A dictionary count revealed that adjectives derived from action verbs are attributive to the Agent, not the Patient, by a ratio of 18:1.
  • Stimuli and Experiencers share the subject role, as in Ted attracts Paul (where Paul feels the attraction), and Ted detests Paul (where Ted feels the detestation).  In either case, a dictionary count revealed that adjectives derived from mental state verbs are attributive to the stimulus, not the experiencer, by a ratio of 14:1.

These considerations led Brown and Fish to hypothesize a pair of linguistic schemata for causal attribution:
  • In the Agent-Patient schema, we give greater causal weight to the Agent than to the Patient.
  • In the Stimulus-Experiencer schema, we give greater causal weight to the Stimulus than the Experiencer.
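The two schemata are easy to render as a small lookup.  Here is a minimal sketch in Python; the six-verb lexicon is hand-built for illustration, not Brown and Fish's randomly sampled materials:

    # Semantic roles of the (subject, object) positions for a few verbs.
    VERB_ROLES = {
        "helps":    ("agent", "patient"),         # action verb: subject = Agent
        "cheats":   ("agent", "patient"),
        "charms":   ("stimulus", "experiencer"),  # mental-state verb, Stimulus as subject
        "attracts": ("stimulus", "experiencer"),
        "likes":    ("experiencer", "stimulus"),  # mental-state verb, Experiencer as subject
        "loathes":  ("experiencer", "stimulus"),
    }

    def causal_locus(subject, verb, obj):
        # Greater causal weight goes to the Agent (action verbs) or to the
        # Stimulus (mental-state verbs), whichever position it occupies.
        subject_role, _ = VERB_ROLES[verb]
        return subject if subject_role in ("agent", "stimulus") else obj

    print(causal_locus("Ted", "helps", "Paul"))    # Ted  (Agent)
    print(causal_locus("Ted", "charms", "Paul"))   # Ted  (Stimulus)
    print(causal_locus("Ted", "likes", "Paul"))    # Paul (Stimulus)
    print(causal_locus("Ted", "loathes", "Paul"))  # Paul (Stimulus)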

A series of formal experiments confirmed that people actually use these linguistic schemata in making causal attributions.  In these experiments, Brown and Fish constructed simple sentences such as Ted likes Paul from behavioral action verbs and mental-state verbs that had been randomly sampled from the dictionary.



  • In the case of the behavioral action verbs, the Agent was usually the subject, as in Ted helps Paul.
  • In the case of mental state verbs, they used sentences in which the stimulus was the subject, as in Ted charms Paul (where Ted does the charming), and also sentences in which the experiencer was the subject, as in Ted likes Paul (where Ted does the liking).
After each sentence, they asked subjects to make causal attributions to both the subject and the object.  Ratings were made on a 9-point scale. For convenience in data analysis, cases where the subject and object received equal ratings were divided between them.

When the sentences were worded in the active voice, as in Ted likes Paul, the subjects' attributions generally followed predictions.

 

This was also true when the sentences were worded in the passive voice, as in Paul is liked by Ted.

 

 

 

The predictions also held when the rating scale was replaced by a free listing of reasons, in which subjects could write about either Ted or Paul.

 

 

 

Finally, Brown and Fish employed a quite different procedure, in which subjects read sentences such as Ted likes Paul and then had to make ratings of distinctiveness and consensus:



  • "Probably Ted likes many/few other people" (indicating low or high distinctiveness, respectively);
  • "Probably few/many other people like Paul" (indicating low or high consensus, respectively).

Applying the Agent-Patient schema, subjects rated the scenario low on consensus, which would yield attributions to the Actor in the covariation calculus of causal attribution.

Applying the Stimulus-Experiencer schema, subjects rated the scenario higher on consensus, which would yield attributions to the Target.  

These linguistic schemata have important implications for the Fundamental Attribution Error, to be discussed below.  We do not generally attribute causality to the actor.  Instead, we attribute causality to the agent, or the stimulus, or the experiencer.  In other words, the Fundamental Attribution Error is fundamentally misframed.  

In any event, these linguistic schemata indicate that causal attributions are associated with the semantic structure of language.  

The implication of this conclusion is that someone ought to re-do the McArthur study, with careful attention to the different semantic roles of Agent and Patient, and Stimulus and Experiencer, and determine the extent to which linguistic schemata alter the application of the covariation calculus.  If you do this experiment, remember: You read it here first!

Brown and Fish considered, further, whether the linguistic schemata reflected a "Whorfian" result -- that is, whether the linguistic schemata implied that thought (in this case, causal attributions) was structured, controlled, or influenced by language (in this case, semantics).  They argued that the linguistic schemata were not Whorfian in nature.  The reason for this is that English derives adjectives from verbs by adding one or another of a relatively small set of suffixes, such as -ful and -some:


  • In the case of Ted helps Paul, the derived adjective helpful is attributive to the Agent, but English could just as well have the word helpworthy, attributive to the Patient.
  • In the case of Ted charms Paul, charming is attributive to the Stimulus, but there could be a word charmable, attributive to the Experiencer.
  • Same with like: likable (attributive to the Stimulus) exists in English; likeful (attributive to the Experiencer) could exist, but does not.
  • And the same with loathe: loathsome exists, but loathful does not.

The point is that English has the ability to create derived adjectives attributive to either party -- to the Patient as well as the Agent, and to the Experiencer as well as the Stimulus; it just doesn't do so, by ratios of 14:1 to 18:1.

So, the existence of linguistic schemata for causal attribution appears to be a contra-Whorfian result, in which thought -- something like the Fundamental Attribution Error, perhaps, but not exactly -- is structuring language.

 

Language and Social Interaction

These linguistic effects on social judgment remind us of the intimate relationship between language and social interaction.  Viewed strictly from a cognitive point of view (as Noam Chomsky might, for example), language is a tool of thought -- a powerful means of representing and manipulating knowledge.  But from a social point of view, language is a similarly powerful tool for communication -- for expressing one's  thoughts, feelings, and desires.  Without language, our social interactions would be much different.

Which raises the question of the effect of language on social interaction -- another aspect of the Whorfian hypothesis, I suppose.  Do social interactions differ depending on the language in which they are conducted?  Maybe.

Sylvia Chen and Michael Bond (2010), two psychologists at Hong Kong Polytechnic University, conducted an interview study of personality in native Chinese speakers who were also fluent in English.  When interviewed in English, as opposed to Cantonese, these subjects appeared more extraverted, assertive, and open to new experiences.  The investigators speculated that speaking a language primes the speaker to express the features of personality stereotypically associated with native speakers of that language.

 

Errors, Heuristics, and Biases in Social Judgment

Like Norman Anderson's cognitive algebra, Kelley's covariation calculus for causal attribution illustrates normative rationality in social judgment and decision-making.    

According to the normative model of human judgment and decision-making:

  • People reason about events by following normative principles of logic.  
  • Their judgments, decisions, and choices are based on a principle of rational self-interest.  
  • Rational self-interest is expressed in the principle of optimality: the desire to maximize gains and minimize losses.  
  • Rational self-interest is also based on utility, by which people seek to optimize in the most efficient manner possible. 

This is a classical philosopher's view of human thought, promoted from Aristotle to Descartes and beyond -- a sort of philosopher's prescription for how people should think.  It is also enshrined in traditional economic theory as the principle of rational choice, an idealized description of how judgments and decisions are made.  And, as we will see, it is expressed in various ways.  But psychology is an empirical science, not so much concerned with prescribing how people should think as with describing how they do think.

One aspect of normative rationality is a reliance on algorithms for reasoning.  Algorithms are cognitive "recipes" for combining information in the course of reasoning, judgment, choice, decision-making, and problem-solving.  When appropriately applied, they are guaranteed to yield the correct answer to a problem.  The covariation calculus is, in a sense, an algorithm for causal reasoning -- a recipe that will always yield the logically correct causal explanation.

To some extent, it is clear that people do follow these normative rules -- which is why the covariation calculus has been confirmed experimentally.  However, in other respects they appear to depart systematically from them, causing errors and biases to occur in social judgment.

Some of these errors and biases stem from the very nature of social judgment, which is that it frequently takes place under conditions of uncertainty:




  • when an appropriate algorithm is unavailable or unknown;
  • when there is not enough information available to apply a known algorithm; or
  • when there is not enough time, or motivation, to allow application of the algorithm. 

Thus, as Kelley argued, when reasoning about the causes of single events -- when there is not enough information available to apply the full covariation calculus -- people rely on causal schemata instead.  The linguistic schemata described by Brown and Fish also serve under these circumstances.

But there are other instances where people depart from normative rationality even though there is enough information available to permit them to apply reasoning algorithms such as the covariation calculus, resulting in systematic biases in causal attribution.  

 

The Fundamental Attribution Error

Chief among these errors is what has come to be known as the Fundamental Attribution Error (FAE; Ross, 1977) -- the tendency for people to overestimate the role of dispositional factors, and underestimate the role of situational factors, in making causal attributions.




As with so many other topics in social psychology, the FAE was first noticed by Heider:

Changes in the environment are almost always caused by acts of persons in combination with other factors. The tendency exists to ascribe the changes entirely to persons (Heider, 1944, p. 361).

Here is Ross's statement of the effect, in his classic review of "The Intuitive Psychologist and His Shortcomings":

[T]he intuitive psychologist's shortcomings...start with his general tendency to overestimate the importance of personal or dispositional factors relative to environmental influences.... He too readily infers broad personal dispositions..., overlooking the impact of relevant environmental forces and constraints (Ross, 1977, p. 183).

And another statement, from a book written by Ross with Richard Nisbett, Human Inference: Strategies and Shortcomings of Social Judgment:

[T]he tendency to attribute behavior exclusively to the actor's dispositions and to ignore powerful situational determinants of the behavior. Nisbett & Ross (1980, p. 31)

Ross found evidence of the FAE in Jones and Harris' (1967) studies employing the attitude-attribution paradigm.  Recall that the subjects in this experiment tended to attribute debaters' favorable statements about Castro (and, to a lesser degree, racial segregation) to their favorable attitudes toward these objects -- despite their knowledge that the debaters had no choice in the position they took.  They ignored this situational information, and made a causal attribution to the debaters' attitudinal dispositions.


He also found it in McArthur's study of the covariation calculus.  

For example, in a control condition, people were asked to make causal attributions in the absence of any information about consistency, distinctiveness, or consensus.  Under these conditions, they probably should have made their attributions randomly (setting aside the linguistic schemata, which nobody knew about at the time).  However, the judges made more attributions to the actor than to the situation.



  • True enough, but it's also the case that only about 25% of control attributions were made to the actor -- which doesn't strike one as a particularly strong bias.

McArthur also used ANOVA to estimate the proportion of variance in causal attributions that could be attributed to each kind of information.  Overall, the judges relied more on consistency information, which drives attributions toward the Actor, than consensus information, which would drive attributions toward the Situation.



  • But it's also true that consistency information will drive attributions toward the situation -- when consistency is low.  And that's just what happens: consistency information played a relatively large role in attributions to the situation.

Subsequent research has identified a number of sources of the FAE:

  • Correspondence bias.
  • Failure to recognize situational influences.
  • Automaticity.
  • Correcting dispositional attributions for situational influences requires cognitive effort.
  • Anchoring and adjustment.

 

Other Biases in Causal Attribution

Now-classic studies uncovered a number of attributional errors, apparently reflecting systematic biases in the attributional process.

In the Actor-Observer Difference in Causal Attribution (Jones & Nisbett, 1972), also known as the Self-Other Difference, people tend to attribute other people's behavior to their dispositions, but their own behavior to the situation.




The person tends to attribute his own reactions to the object world, and those of another, when they differ from his own, to personal characteristics [of the other]. Heider (1958, p. 157)

[T]here is a pervasive tendency for actors to attribute their actions to situational requirements, whereas observers tend to attribute the same actions to stable personal dispositions. Jones & Nisbett (1972, p. 80)

The flavor of the Actor-Observer difference can be given in the following anecdote from Jones and Nisbett (1972, p. 79):

When a student who is doing poorly... discusses his problems with a[n] adviser, there is often a fundamental difference of opinion between the two.  The student... is usually able to point to environmental obstacles such as a particularly onerous course load, to temporary emotional stress..., or to a transitory confusion about life goals....  The adviser... is convinced...instead that the failure is due to enduring qualities of the student -- to lack of ability, to irremediable laziness, to neurotic ineptitude.

Notice that the Actor-Observer Difference sets limits on the Fundamental Attribution Error.  That is to say, we make the Fundamental Attribution Error when explaining the behavior of other people, but not when explaining our own behavior.

Like the Fundamental Attribution Error, the Actor-Observer Difference has received a tremendous amount of attention from social-cognition researchers.  One explanation of the Actor-Observer Difference is that actors have more information about the causes of their own behavior than observers do.  Presumably, if perceivers had as much information about the behavior of another person as they have about their own behavior, they wouldn't make the Fundamental Attribution Error in the first place.

Nevertheless, subsequent research (reviewed by Watson, 1982) indicated that the Actor-Observer difference is actually rather weak. 




  • Consistent with the Actor-Observer Difference, there was some evidence that attributions concerning the self are more likely to invoke situations than attributions concerning other people.
  • However, there was very little evidence for three other aspects of the Actor-Observer Difference:
    • Attributions to traits are not more likely for others than for oneself.
    • Attributions concerning the self are not more likely to invoke situations rather than traits.
    • Attributions concerning others are not more likely to invoke traits rather than situations.

We will return to this issue later.

In the Self-Serving Bias in Causal Attribution (Hastorf, Schneider, & Polefka, 1970), also known as the Ego Bias, the Ego-Defensive Bias, the Ego-Protective Bias, or just plain Beneffectance (Greenwald, 1980), people tend to take responsibility for positive outcomes, but not for negative outcomes.




That reason is sought that is personally acceptable. It is usually a reason that flatters us, puts us in a good light, and it is imbued with an added potency by the attribution. Heider (1958, p. 172)

We are prone to alter our perception of causality so as to protect or enhance our self esteem. We attribute success to our own dispositions and failure to external forces. Hastorf, Schneider, & Polefka (1970, p. 73)

Again, the self-serving bias may be illustrated by an academic anecdote:

In asking students to judge an examination's quality as a measure of their ability to master course material, I have repeatedly found a strong correlation between obtained grade and belief that the exam was a proper measure.  Students who do well are willing to accept credit for success; those who do poorly, however, are unwilling to accept responsibility for failure, instead seeing the exam (or the instructor) as being insensitive to their abilities (Greenwald, 1980, p. 604).

And maybe by this observation about the power of prayer (from a letter to the editor by Charles F. Eikel, New York Times Magazine, November 2004):

Claims of speaking with God or hearing from God... are made by the same people who, when things go bad, say, "It was God's will", and when they go well, "My prayers were answered".


The self-serving bias is one aspect of what Greenwald (1980) labeled the totalitarian ego.  Much like Jones and Nisbett, Greenwald found evidence for the totalitarian ego, and especially for what he called beneffectance, in students' approach to academic achievement.





Much as the Actor-Observer Difference sets limits on the Fundamental Attribution Error, the Self-Serving Bias sets limits on the Actor-Observer Difference.  That is, we explain our failures by making external attributions, but we explain our successes by making internal attributions.  Of course, those internal attributions presumably entail the Fundamental Attribution Error.  

An early review of the empirical evidence for the Self-Serving Bias was mixed (Miller & Ross, 1975).




A later meta-analysis, by Campbell and Sedikides (1999), found that the Self-Serving Bias was magnified under a number of conditions, especially threat to self-esteem -- which makes sense.



We shall have more to say about these claims of attributional bias later in the supplement.

But claims that causal attributions and other aspects of social judgment were riddled with errors and biases led to the development of a large industry within social cognition, documenting a large number of ostensible departures from normative rationality.  One such list was compiled by Krueger and Funder (2004).  A quick Google search on "cognitive errors" will turn up dozens more errors ostensibly infecting decision-making, belief, probability judgments, social behavior, and memory.

 


The Ultimate Attributional Error?

A variant on the FAE is the Ultimate Attributional Error, first described by Thomas Pettigrew, one of the leading authorities on stereotyping and prejudice (Personality & Social Psychology Bulletin, 1979). It consists of two aspects -- or four, depending on how you count -- characteristic of the attributions that ingroup members make with respect to outgroup members.



  1. Negative behaviors by outgroups are attributed to internal, dispositional causes.
  2. Positive behaviors by outgroups are attributed to situational causes.
    1. Or to good luck or special advantage.
    2. Or to high motivation.
    3. Or to exceptional cases.
  3. Positive behaviors by ingroups are attributed to internal, dispositional causes.
  4. Negative behaviors by ingroups are attributed to situational causes.
    1. Or to bad luck or special disadvantages.
    2. Or to low motivation.
    3. Or to exceptional cases.

Obviously, the UAE is, more or less, an extension of the self-serving bias from the individual to one's ingroup.  As such, it makes a great deal of sense.  However, a review by Hewstone (Eur. J. Soc. Psychol., 1990) indicates that the evidence for the UAE is somewhat limited.

  • Ingroup members make more internal attributions for positive acts of their ingroup, and fewer internal attributions for negative acts of their ingroup, compared to the same acts performed by outgroup members.
  • Outgroup members' failures are generally attributed to a lack of ability, while their successes are generally explained in such a way as to diminish them.
  • Attributions for group differences (i.e., differences favoring the ingroup) are made in an ingroup-favoring way.
 

 

Judgment Heuristics in Social Cognition

What accounts for these errors and biases?

One view is that they reflect a set of conditions collectively known as judgment under uncertainty. Under these circumstances, when algorithms cannot be applied, people rely on judgment heuristics: shortcuts, or "rules of thumb", that bypass the logical rules of inference.  Judgment heuristics permit judgments to be made under conditions of uncertainty, but they also increase the likelihood of making an error in judgment.

Analysis of errors in judgment by Daniel Kahneman (formerly of UC Berkeley), the late Amos Tversky (of Stanford University), and others has revealed a number of heuristic principles that people apply when making both social and nonsocial judgments.  Kahneman shared the Nobel Prize in Economics for his work on judgment heuristics and other aspects of judgment and decision-making; Tversky did not share the prize because he had died of cancer before it was awarded, and Nobel Prizes are not given posthumously.

Four of the best-known judgment heuristics are:

  • representativeness
  • availability
  • simulation, and 
  • anchoring and adjustment.

 

For an overview of judgment heuristics, see the General Psychology lecture supplements on Thinking.


These judgment heuristics can be employed in making causal attributions:

  • In representativeness, attributions may focus on causes that resemble the effect to be explained.
  • In availability, attributions may focus on the most salient element in the perceptual field.
  • In simulation, attributions may focus on the first scenario that comes to mind.
  • In anchoring and adjustment, attributions may be dominated by first impressions.
An interesting demonstration of availability in causal attributions is provided by an experiment by Taylor and Fiske (1975), employing a version of the "getting acquainted" paradigm in which a single real subject interacted with two other individuals who were in fact confederates of the experimenter.  The situation was arranged so that the subject faced one confederate or the other, or was seated at right angles to both.  After the trio performed a group task, the subject was asked to assign responsibility for the outcome.


  • Subjects who faced Confederate A assigned more responsibility to A than to B.
  • Subjects who faced Confederate B assigned more responsibility to B than to A.
  • Subjects who were seated in the center, between A and B, assigned responsibility to them equally.

Taylor and Fiske suggested that causal attributions were "top of the head" phenomena, and not the products of the systematic thought implied by the covariation calculus.


A Confirmatory Bias in Hypothesis-Testing?

A similar story can be told about another frequently cited error: the confirmatory bias in hypothesis-testing.  In the context of causal attribution, you can think of "Something About the Person" as a causal hypothesis to be tested by collecting further evidence.

Inspired by a classic study of the "Triples Task" (Wason, 1960; see also Wason & Johnson-Laird, 1972), Snyder (1981a, 1981b) proposed that social judgment was characterized by an inappropriate (and irrational) confirmatory bias.

 

Testing Logical Hypotheses

In their experiment, Wason and Johnson-Laird presented subjects with cards displaying three numbers, such as 2 - 4 - 6.  They were told that the number string conformed to a simple rule, and that their task was to infer this rule.  They were to test their hypotheses by generating new strings of three numbers, and they would receive feedback as to whether each string conformed to the rule.  According to Wason and Johnson-Laird, a typical trial would go something like this:

  • Subject: 8 - 10 - 12.  Experimenter: Yes, that conforms.
  • Subject: 14 - 16 - 18.  Experimenter: Yes.
  • Subject: 20 - 22 - 24.  Experimenter: Yes.
  • Subject: Hypothesis -- add 2 to the preceding number.  Experimenter: No, that is incorrect.
  • Subject: 2 - 6 - 10.  Experimenter: Yes, that conforms.
  • Subject: 1 - 50 - 99.  Experimenter: Yes.
  • Subject: Hypothesis -- the second number is the arithmetic mean of the first and third.  Experimenter: No, that is incorrect.
  • Subject: 3 - 10 - 17.  Experimenter: Yes, that conforms.
  • Subject: 0 - 3 - 6.  Experimenter: Yes.
  • Subject: Hypothesis -- add a constant to the preceding number.  Experimenter: No, that is incorrect.
  • Subject: 1 - 4 - 9.  Experimenter: Yes, that conforms.
  • Subject: Hypothesis -- any three numbers, ranked in increasing order of magnitude.  Experimenter: Yes, that is correct!

Johnson-Laird's account of the modal subject in this experiment goes like this:

  • The subject begins with a hypothesis like sequential even numbers.
  • Employing a verification (confirmatory) strategy, he then constructs a test consisting of a new set of sequential even numbers.
  • If the response is positive, then he constructs a second test consisting of yet another set of sequential even numbers.
The problem is that there's a flaw in the subject's logic: the assumption that if a sequence fits the rule, the test "proves" the hypothesis, and that if a sequence doesn't fit the rule, the test "disproves" the hypothesis.  Such a test ignores alternative hypotheses that would produce the same sequence.

A logical approach would construct pairs of alternative hypotheses, so that the subject would offer a sequence that would prove one hypothesis and simultaneously disprove the other.
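To make the logic concrete, here is a minimal sketch in Python (the rules and test triples are the hypothetical ones from the dialogue above).  It shows why positive tests of an "add 2" hypothesis can never falsify it when the true rule is "any ascending sequence", while a single discriminating probe can:

```python
# Hypothetical rules from the Triples Task example above.

def true_rule(triple):
    a, b, c = triple
    return a < b < c                      # any ascending sequence

def hypothesis(triple):
    a, b, c = triple
    return b == a + 2 and c == b + 2      # "add 2 to the preceding number"

# Positive tests: triples constructed to FIT the hypothesis.  Because the
# hypothesis is embedded in the true rule, every such test draws a "yes",
# and the subject never encounters disconfirming feedback.
for t in [(8, 10, 12), (14, 16, 18), (20, 22, 24)]:
    assert hypothesis(t) and true_rule(t)

# A discriminating probe fits the true rule but violates the hypothesis;
# the experimenter's "yes" to this triple falsifies "add 2" outright.
probe = (1, 2, 3)
print(true_rule(probe), hypothesis(probe))   # True False
```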

Something very similar occurs in a variant on the "triples" task, now known as the "Card Task", employed by Wason (1968) for the study of formal (logical) hypothesis-testing, in which categorical statements can be disconfirmed by a single counterexample.  In the Card Task, subjects are shown four cards, each of which has a letter on one side and a number on the other.  The faces shown to the subject might be

A M 6 3

The subject is then given a rule, such as:

If there is a vowel on one side (P), then there is an even number on the other side (Q).

The subject is then asked to select cards to turn over to determine whether the rule is correct.

Wason and Johnson-Laird reported that the modal subject either selected the A card alone, or else the A card and the 6 card.  However, this is logically incorrect. 

  • The problem is that turning over the 6, to discover that it has a consonant on the other side, doesn't falsify the rule.  According to the rule, if there's a consonant on one side, there can be an even number on the other side.  The rule just says that if there's a vowel on one side, there must be an even number on the other side.  So, only an A paired with an odd number, and a 3 paired with a vowel, will falsify the rule.
  • The logically correct way to test a logical hypothesis of the form if P then Q is to test P and Not Q
    • P must be associated with Q: the A card must have an even number on the other side.
    • Not Q must be associated with Not P: the 3 card must have a consonant on the other side.
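The selection logic can be captured in a few lines of Python (a minimal sketch; the encoding of cards as single-character faces is my own).  The point is that only faces whose hidden side could reveal a vowel-odd pairing are worth turning over:

```python
def is_vowel(ch):
    return ch in "AEIOU"

def worth_turning(visible):
    """True iff some hidden face could make this card a counterexample
    to 'if vowel then even' -- i.e., a vowel paired with an odd number."""
    if visible.isalpha():
        return is_vowel(visible)          # a vowel might hide an odd number
    else:
        return int(visible) % 2 == 1      # an odd number might hide a vowel

for face in ["A", "M", "6", "3"]:
    print(face, worth_turning(face))
# A True, M False, 6 False, 3 True -> the correct picks are A (P) and 3 (not-Q)
```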

Philosophers of science such as Karl Popper and Imre Lakatos have made compelling arguments favoring a disconfirmatory strategy for scientific hypothesis-testing.  That doesn't mean that scientists always follow it; in fact, many of them (including myself) don't do it very often at all.  But it does mean that, technically, the disconfirmatory strategy is to be preferred, on strictly logical grounds, to a confirmatory (or verificationist) strategy.

In both cases, it seems that the subjects are employing a confirmatory strategy: confirming whether the sequence adds 2 to the previous number (or whatever), or confirming whether vowels have even numbers, and even numbers have vowels. And, in fact, the literature on human judgment indicates that the intuitive hypothesis-tester does seem to be prone to a confirmation bias. 

  • Perceptions of the correlation between two variables are inordinately influenced by co-occurrence -- that is, by the number of entries in the +/+ cell of the 2x2 table.
  • The typical response to disconfirmatory information is to discredit it, or to reinterpret it as actually confirming the hypothesis.
  • As illustrated in the Johnson-Laird studies, hypothesis testers frequently employ biased tests that are unlikely to disconfirm their hypotheses.  At the very best, this is an inefficient strategy for hypothesis-testing.  At worst, it is just stupid and irrational.


Social Hypothesis-Testing

It's this kind of strategy that Snyder and his colleagues thought they observed in social cognition.

Consider an experiment by Snyder and Swann (1978), employing the "getting acquainted" paradigm.  First, they presented subjects with a profile of the "prototypical" extravert or introvert.  Then they gave subjects a pool of 26 questions, from which they were to select 12 to test the hypothesis that a target was extraverted or introverted.  The questions were distributed as follows:

  • 11 questions, answers to which would provide positive evidence of the trait.
    •  e.g., for extraversion, What kinds of situations do you seek out if you want to meet new people?
  • 10 questions, answers to which would provide negative evidence of the trait.  
    •  e.g., again for extraversion, What factors make it hard for you to really open up to people?
  • 5 questions, answers to which would provide no evidence either way.
    • e.g., What are your career goals?
The finding was that subjects preferred hypothesis-confirming questions by a ratio of approximately 2:1.

This bias was robust in face of a number of manipulations:

  • Low-credibility source of the hypothesis.
  • Making uncertainty explicit.
  • Providing a monetary incentive for accuracy.
  • Presenting a competing hypothesis concerning the opposite trait.
  • Mentioning in the prototype both consistent and inconsistent attributes.
    • e.g., for extraversion, Sociable, Not Shy.
    • and for introversion, Shy, Not Sociable.
The results of this and similar studies quickly became enshrined in the literature as a "fact" about social cognition -- just another aspect of the "error and bias" literature, and another brick in the wall of PeopleAreStupidism.  But results such as these should not be taken uncritically.
  • For example, the S&S study does not share the logical structure of the Wason card test.  There is no logical hypothesis, where a single counterexample would be sufficient to disconfirm the hypothesis (extraverts can be introverted on occasion).
  • The pool of questions didn't contain any diagnostic questions -- ones that would definitively rule extraversion or introversion in or out.  This is because the non-neutral questions could elicit similar responses from both extraverts and introverts.
  • The task set for subjects is not really a hypothesis-testing task.  In fact, their instructions were to "choose questions that will help you link the profile [i.e., the prototype] with the person".  The subjects were actually never asked to test the hypothesis that the target was extraverted, and in fact they were led to assume that the profile was accurate.
  • The non-neutral questions are phrased conditionally, in that they assume that the interviewee belongs to the target category.  An "extraverted" question would be awkward to ask of an introvert, because an introvert would have to reject the premise of the question -- to say, in effect, "I don't want to meet new people".
In other words, the subjects in the S&S study aren't really given the opportunity to test a hypothesis, much less behave normally when doing so.

In this light, consider an experiment by Trope and Bassok (JPSP 1982). 

  • They gave subjects a profile of an intuitive or analytic person, explicitly presented as a hypothesis to be tested.
  • Then they gave the subjects a list of 8 handwriting features, ostensibly characteristic of one type of person or the other (printing, threadlike connections, deviation from standard).  
    • For each feature, subjects were given its rate of occurrence in the handwriting of intuitive and analytic people, using a full range of probabilities, from .2 to .8.
    • They also varied the diagnosticity of each feature -- that is, the difference between p(feature | trait) and p(feature | opposite).
  • The subjects' task was to choose a handwriting feature to test the hypothesis that the target was intuitive (or analytic).

Trope and Bassok then tested for three strategies.

  • In a confirmatory strategy, subjects should select a feature whenever p(feature | trait) is high, regardless of p(feature | opposite).
  • In a non-diagnostic strategy, subjects should select a feature where p(feature | trait) is equal to p(feature | opposite), regardless of the value of p.
  • In a diagnostic strategy, subjects should select a feature where p(feature | trait) is either greater or less than p(feature | opposite).
The results indicated a strong preference for diagnostic information, and no evidence that subjects employed a confirmatory strategy.
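The difference among the three strategies is easy to see in a minimal sketch (the feature names and probabilities below are hypothetical stand-ins for Trope and Bassok's materials):

```python
# Each feature: (p(feature | trait), p(feature | opposite)) -- hypothetical values.
features = {
    "printing":                (0.8, 0.8),   # high but non-diagnostic
    "threadlike_connections":  (0.8, 0.2),   # high and diagnostic
    "deviation_from_standard": (0.2, 0.8),   # low but equally diagnostic
}

def confirmatory_value(p_trait, p_opposite):
    return p_trait                          # attends only to p(feature | trait)

def diagnostic_value(p_trait, p_opposite):
    return abs(p_trait - p_opposite)        # attends to the difference

for name, (pt, po) in features.items():
    print(f"{name}: confirmatory={confirmatory_value(pt, po):.1f}, "
          f"diagnostic={diagnostic_value(pt, po):.1f}")
# A confirmatory tester values "printing" as highly as "threadlike_connections";
# a diagnostic tester prefers either diagnostic feature, whatever the value of p.
```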

Moreover, in a further experiment by Trope and Alon (JPers 1984), subjects were given the task to discover whether a target was introverted or extraverted, but were free to generate their own questions, rather than forced to select from a pool of questions supplied by the experimenter.  The subjects' questions were then coded:

  • Consistent questions asked about a feature consistent with the hypothesized trait.
  • Inconsistent questions asked about a feature inconsistent with the hypothesized trait.
  • Biased questions contained an implicit premise that the target had the hypothesized trait.
  • Nondirectional questions were neutral, or offered the subject a choice between a trait-consistent and a trait-inconsistent feature.

The result was that there was little evidence of biased questioning, suggesting that the Snyder studies may have forced subjects to do something they wouldn't ordinarily do.  In fact, 73% of the questions were coded as nondirectional, with the remainder roughly balanced between consistent and inconsistent.

So, it turns out that the Snyder studies don't provide good evidence for a confirmatory strategy in hypothesis-testing, which motivates us to examine the whole claim a little more closely.  It turns out that the confirmatory bias is not all it's cracked up to be: it's not really that all-pervasive, it's not really confirmatory in nature, and it's not really a bias.


Logical Hypothesis-testing Revisited

Consider a conceptual replication and extension of the "Card Task" by Johnson-Laird et al. (1972).  In this study, subjects were given a logical hypothesis of the form if P then Q -- which, of course, is logically tested by selecting P and Not Q.

  • One of the problems had an abstract framing of the logical hypothesis, as in the earlier experiments: cards with letters on one side, and numbers on the other: D, C, 5, and 4.  The rule to be tested was if there is a D on one side, then there is a 5 on the other side. Only 2 out of 24 subjects employed the proper test, selecting D and 4.
  • The other problem had a concrete framing of the same logical hypothesis: letters, sealed or unsealed, with 5- or 4-pence postage stamps on the front (the experiment was done in England).  The rule to be tested was if the letter is sealed, then it has a 5-pence stamp on it.  In this case, fully 21 out of 24 subjects performed the proper, disconfirmatory test. 
So, subjects do employ the normative disconfirmatory strategy in domains that are familiar to them.  It seems, in this case at least, that the shortcoming is in the problem, not the problem-solver!

To be fair, it's clear that subjects don't always test hypotheses properly.  They don't always handle abstract representations of problems the way they should, and they tend to ignore p(evidence | hypothesis is untrue).  But in familiar domains, when the structure of the task is made clear to them, when they are given a logical hypothesis to test against an alternative, and when they are given information about diagnosticity (or permitted to formulate their own questions), subjects aren't nearly as irrational, illogical, or stupid as some people like to think they are.

The broader point is that people may behave normatively when they understand the task they have been given.  When they don't understand the task, it seems hardly fair for us to criticize them for behaving inappropriately.  The general principle here is that, when evaluating performance, we need to understand the experimental situation from the subject's point of view.  Until we have such an analysis, we probably should hold off on drawing conclusions about bias and irrationality.

 

Is a Confirmatory Bias Always Bad?

Just to muddy the waters further, it's not clear that a confirmatory bias is always a bad way to test a hypothesis.

Consider one more analysis, by Klayman and Ha (1987), of rule discovery as hypothesis-testing.  Subjects are asked to consider a set of objects that differs from another set in some unspecified way, and are given the task of finding a rule that will exactly specify the members of the target set.  Thus, effectively, subjects are asked to test a hypothesis concerning a rule by predicting whether a given object is in the target set.  This is essentially what goes on in concept-identification, and it's also what goes on in the "Triples Task".

Anyway, K&H looked for two kinds of hypothesis-tests:

  • Positive, testing an instance that should conform to the rule.
  • Negative, testing an instance that should not.
Wason's experiments suggested that people prefer positive to negative tests.  In a general positive-test strategy, subjects examine instances in which the property occurs, or is expected to occur.  From a Popperian view, this positive test strategy is normatively proscribed, because it leads to inefficiencies and errors in hypothesis-testing.  But K&H argue that a positive test strategy is actually a pretty good heuristic, because it reflects a greater concern with sufficient conditions, as opposed to necessary ones.  A positive test strategy guarantees that all chosen cases are, in fact, in the set -- even if it doesn't guarantee that all cases in the set are, in fact, chosen.  When mistaken choices are consequential, judgment will naturally focus on positive instances.  And in the real world, "sufficiency is often sufficient".  But even when necessity is important, K&H argue that a positive test strategy may still be all right.

Consider the wide variety of relations between the hypothesized rule and the correct rule in hypothesis-testing situations:

  • If the hypothesized rule (e.g., Numbers increase by 2s) is embedded in the correct rule (e.g., Any ascending sequence), then the positive test strategy cannot produce falsification.
  • If the hypothesized rule (e.g.  Numbers increase by 2s) overlaps with the correct rule (e.g., Any 3 even numbers), the positive test strategy can lead to falsification.
  • If the hypothesized rule (e.g., Numbers increase by 2s) surrounds the correct rule (e.g., Consecutive even numbers), only the positive test strategy can lead to falsification.
And also consider the baserates for a case conforming to a rule:
  • If the baserate is < .5, there are many conditions where a positive test strategy is superior to a negative test strategy.
  • Even if the baserate is >= .5, there are still some such conditions.
And consider whether the judgment is made in a deterministic or a probabilistic environment.  In a deterministic environment, the rule makes no errors, and there is also error-free feedback about predictions.  But that doesn't always hold for the real world:
  • Natural categories are fuzzy sets, not proper sets.
  • There is some degree of true randomness in the world.
  • Many real-world problems are too complex to allow perfect prediction.

Under conditions such as these, unambiguous falsification is impossible, and the positive test strategy is still preferable under many conditions.  
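Klayman and Ha's point about rule relations can be verified with a small brute-force simulation (a sketch under my own encoding; the three true rules are the hypothetical ones listed above):

```python
import itertools

def hyp(t):                                   # hypothesized rule: increase by 2s
    return t[1] == t[0] + 2 and t[2] == t[1] + 2

# Three hypothetical true rules, standing in the three relations to the hypothesis.
true_rules = {
    "embedded in truth (any ascending)":      lambda t: t[0] < t[1] < t[2],
    "overlapping truth (any 3 even numbers)": lambda t: all(x % 2 == 0 for x in t),
    "surrounding truth (consecutive evens)":  lambda t: hyp(t) and t[0] % 2 == 0,
}

universe  = list(itertools.product(range(1, 30), repeat=3))
positives = [t for t in universe if hyp(t)]       # positive tests: fit the hypothesis
negatives = [t for t in universe if not hyp(t)]   # negative tests: violate it

for name, truth in true_rules.items():
    pos = any(not truth(t) for t in positives)    # a "yes"-expected test draws a "no"
    neg = any(truth(t) for t in negatives)        # a "no"-expected test draws a "yes"
    print(f"{name}: positive tests can falsify: {pos}; negative tests can falsify: {neg}")
```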

This is especially true when you consider the costs associated with the positive and negative test strategies.  Consider a real-world variant: if a student gets better than 1300 on the GRE, then s/he will be a good graduate student.  If you really want to test the hypothesis using the prescribed negative test strategy, then you would need to admit some poor graduate students (Not Q).  But the cost of admitting poor students to graduate school is simply too great to bear.

The bottom line is similar to the lessons drawn concerning judgment heuristics.  From a strictly prescriptive point of view, disconfirmation should be the goal of all hypothesis-testing.  And, in formal terms, the best route to the goal of disconfirmation is a negative test strategy.  But the negative test strategy may be impossible in probabilistic environments, where disconfirmation is not informative.  And even when the negative test strategy is possible, it can be very expensive, and makes considerable demands on cognitive capacity.

For these reasons, Klayman and Ha called for a re-evaluation of "confirmation bias".  They noted that a number of phenomena fall under the rubric of a positive test strategy: rule discovery, concept identification, general hypothesis-testing (like the Card Problem), intuitive personality testing, judgments of contingency, and trial-and-error learning.  In each of these cases, the positive test strategy is a generally useful heuristic, even if it is normatively wrong.

So the "confirmatory bias in hypothesis testing" isn't really a confirmatory bias after all.

 

Bounded Rationality

Judgment heuristics, biases in hypothesis testing, and the like seem to undermine the assumption, popular in classical philosophy and early cognitive psychology, that the decision-maker is logical and rational: that he or she has an intuitive understanding of statistical principles bearing on sampling, correlation, and probability; follows the principles of formal logic to make inferences; and makes decisions according to the principles of rational choice.

In fact, psychological research shows that when people think, solve problems, make judgments, decisions, and choices, they depart in important ways from the prescriptions of normative rationality.  These departures, in turn, challenge the view of humans as rational creatures -- or do they?

The basic functions of learning, perceiving, and remembering depend intimately on judgment, inference, reasoning, and problem solving. In this lecture, I focus on these aspects of thinking. How do we reason about objects and events in order to make judgments and decisions concerning them?

According to the normative model of human judgment and decision making, people follow the principles of logical inference when reasoning about events. Their judgments, decisions, and choices are based on a principle of rational self-interest. Rational self-interest is expressed in the principle of optimality, which means that people seek to maximize their gains and minimize their losses. It is also expressed in the principle of utility, which means that people seek to achieve their goals in as efficient a manner as possible. The normative model of human judgment and decision making is enshrined in traditional economic theory as the principle of rational choice. In psychology, rational choice theory is an idealized description of how judgments and decisions are made.

But in these lectures, we noted a number of departures from normative rationality, particularly with respect to attributions of causality.   




But do the kinds of effects documented here really support the conclusion that humans are irrational? Not necessarily. Normative rationality is an idealized description of human thought, a set of prescriptive rules about how people ought to make judgments and decisions under ideal circumstances. But circumstances are not always ideal. It may very well be that most of our judgments are made under conditions of uncertainty, and most of the problems we encounter are ill-defined. And even when they're not, all the information we need may not be available, or it may be too uneconomical to obtain it. Under these circumstances, heuristics are our best bet. They allow fairly adaptive judgments to be made. Yes, perhaps we should appreciate more how they can mislead us, and yes, perhaps we should try harder to apply algorithms when they are applicable, but in the final analysis:

It is rational to inject economies into decision making, so long as you are willing to pay the price of making a mistake.

Human beings are rational after all, it seems.  The problem, as noted by Herbert Simon, is that human rationality is bounded.  We have a limited capacity for processing information, which prevents us from attending to all the relevant information, and from performing complex calculations in our heads.  We live within these limitations, and within them we do the best we can with what we've got.  Simon argued that we can improve human decision-making by taking account of these limits, and by understanding the liabilities attached to various judgment heuristics.  But there's no escaping judgment heuristics, because there's no escaping judgment under uncertainty, and there's no escaping the limitations on human cognitive capacity.

Simon's viewpoint is well expressed in his work on satisficing in organizational decision-making -- work that won him the Nobel Memorial Prize in Economics.  Contrary to the precepts of normative rationality, Simon observed that neither people nor organizations necessarily insist on making optimal choices -- that is, always choosing the single option that maximizes gains and minimizes losses.  Rather, Simon showed that organizations search the alternatives available to them until they identify options whose outcomes are satisfactory (hence the name, satisficing).  The choice among these satisfactory options may be arbitrary, or it may be based on non-economic considerations.  But it is rarely the optimal choice, because the organization focuses on satisficing, not optimizing.
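Here is a minimal sketch of the contrast (the option utilities and the aspiration level are hypothetical):

```python
candidates = [62, 75, 90, 71, 88]   # utility of each option, in the order encountered
ASPIRATION = 70                     # the "good enough" threshold

def optimize(options):
    """Normative rational choice: evaluate everything, take the single best."""
    return max(options)

def satisfice(options, threshold):
    """Simon's alternative: search in order, stop at the first satisfactory option."""
    for value in options:
        if value >= threshold:
            return value
    return None                     # no satisfactory option was found

print(optimize(candidates))                # 90 -- but only after evaluating every option
print(satisfice(candidates, ASPIRATION))   # 75 -- rarely optimal, but cheap to find
```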

Satisficing often governs job assignments and personnel selection. Otherwise, people might be overqualified for their jobs, in that they have skills that are way beyond what is needed for the job they will perform.

Satisficing also seems to underlie affirmative action programs. In affirmative action, we create a pool of candidates, all of whom are qualified for a position. But assignment to the position might not go to the candidate with the absolutely "highest" qualifications. Instead, the final choice among qualified candidates might be dictated by other considerations, such as an organizational desire to increase ethnic diversity, or to achieve gender or racial balance. Affirmative action works so long as all the candidates in the pool are qualified for the job.

Another way of stating Simon's principles of bounded rationality and satisficing is with the idea of "fast and frugal heuristics" proposed by the German psychologist Gerd Gigerenzer.

The bottom line in the study of cognition is that humans are, first and foremost, cognitive beings, whose behavior is governed by percepts, memories, thoughts, and ideas (as well as by feelings, emotions, motives, and goals). Humans process information in order to understand themselves and the world around them. But human cognition is not tied to the information in the environment.

We go beyond the information given by the environment, making inferences in the course of perceiving and remembering, considering not only what's out there but also what might be out there, not only what happened in the past but what might have happened.

In other cases, we don't use all the information we have.  Judgments of categorization and similarity are made not only by a mechanistic count of overlapping features, but also by paying attention to typicality.  We reason about all sorts of things, but our reasoning is not tied to normative principles.  We do the best we can under conditions of uncertainty.

In summary, we cannot understand human action without understanding human thought, and we can't understand human thought solely in terms of events in the current or past environment. In order to understand how people think, we have to understand how objects and events are represented in the mind. And we also have to understand that these mental representations are shaped by a variety of processes -- emotional and motivational as well as cognitive -- so that the representations in our minds do not necessarily conform to the objects and events that they represent.

Nevertheless, it is these representations that determine what we do, as we'll see more clearly when we take up the study of personality and social interaction.

The bottom line is that, precisely because judgment heuristics inject economies into the judgment process, their use does not necessarily reflect a departure from normative rationality.  So long as we are willing to pay the price of error, the use of what Gigerenzer calls "fast and frugal heuristics" can be very rational indeed -- because it enables us to make everyday judgments quickly, with little effort, and mostly correctly.

 

The "People are Stupid" School of Psychology

On the other hand, some psychologists have taken the position that judgment heuristics and biases are not "rational" shortcuts, but rather the tools of lazy reasoners.  I have sometimes called these theorists the "People Are Stupid" School of Psychology, also PASSP, PeopleAreStupidism, or simply Stupidism (Kihlstrom, 2004) -- a school of psychology that occupies a place in the history of the field alongside the structuralist, functionalist, behaviorist, and Gestalt "schools".



  • The fundamental assumption of this group of psychologists is that people are fundamentally irrational.
    •  We don't think very hard about anything.
    • We prefer heuristic shortcuts to more disciplined logical reasoning.
    • We allow our emotions and motives to get in the way of our cognitions.
  • People usually operate on "automatic pilot". 
    • We don't pay much attention to what we are doing.
    • We are inappropriately swayed by first impressions and immediate responses.
  • Our behavior is mostly unconscious, a product of mental processes that operate automatically.
    • The "reasons" we give for our behavior amount to little more than post-hoc rationalizations. 
  • As a result of automatic processes, we usually don't know what we are doing.
    • We don't really know what we feel or want.
    • We can't forecast what we will do -- or what we will believe, feel, or want -- in the future.
  • Actually, too much thinking can be bad for you, because it gets in the way of adaptive behavior.
    • Unconscious thought, operating automatically and outside of awareness, is superior to conscious reflection.
    • Unconscious, incidental learning, again operating automatically and outside of awareness, is superior to conscious, intentional learning.  
  • We don't even know how stupid we are.
    • Because of the limitations on our cognitive abilities, we fail to appreciate when our judgments and behaviors are less than optimal. 
 

Choosing Not to Choose

Because We Can't Choose Anyway -- and Because We Don't Know What We Want

"We don't know what our taste is, and we don't know what we are seeing. I'm a great believer in the idea of not choosing based on our taste" -- this from Sheena Iyengar, a social psychologist at Columbia University Business School and author of The Art of Choosing (2010), discussing how she chooses what she will wear and how she will decorate her apartment ("An Expert on Choice Chooses" by Penelope Green, New York Times, 03/18/2010; see also "To Choose or Not to Choose" by Evan R. Goldstein, Chronicle of Higher Education, 03/19/2010).

In fact, Iyengar relies on the consensus of a sort of "committee of experts" consisting of her husband, her friends, and her students.  Now, Iyengar is blind, so it's not surprising that she relies on others to tell her how she looks and what her apartment looks like.  But notice that this isn't just a strategy for coping with her personal disability -- she actually believes that "we don't know what our taste is".


To the extent that it is really a trend in social psychology -- and I'm not entirely sure that it is -- "People Are Stupid"-ism appears to have a number of sources:

  • As a reaction to the cognitive revolution in social psychology, with its tacit emphasis on normative rationality. 
    • The affective counterrevolution in psychology is an appropriate corrective to what may have been an undue emphasis on cognitive processes -- especially conscious cognitive processes -- in the earlier era.
    • As part of the affective counterrevolution, a number of social psychologists have emphasized the difference between "cold" and "hot" cognition -- that is, between cold calculation and cognition that is shaped and distorted by emotion and motivation.
      • In many ways, the resurgence of interest in emotional and motivational effects on cognition is a revival of Bruner's "New Look" in perception, which flourished from the late 1940s to about 1960.
  • As a reassertion of the situationism of the classic social-psychological research on social influence, with its affinities with behaviorism.
  • As a byproduct of the wholesale embrace of the concept of automaticity in social psychology.

 

Error and Bias Revisited

All of this would be well and good, I suppose, if it were actually true that social (and nonsocial) cognition is riddled with error and bias; that automatic processes dominate conscious ones; and that free will is an illusion.  But it's not necessarily the case.

  • As Simon argued, and Kahneman and Tversky agreed, the use of judgment heuristics constitutes a rational response to judgment under conditions of uncertainty.
  • As Gigerenzer argues, even under conditions of certainty, the use of heuristics may be considered rational by virtue of the fact that they are "fast and frugal".
  • As I have argued, the claim that automatic processes pervade behavior is not adequately supported by empirical evidence.
  • Even the "big three" errors in causal attribution -- the Fundamental Attribution Error, the Actor-Observer Difference, and the Self-Serving Bias -- are not necessarily errors.
    • The Self-Serving Bias may help protect self-esteem; but if so, it is in the rational service of a goal, and thus shouldn't be classified as an error.
    • Sabini et al. (2002) have argued persuasively that many classic demonstrations of the Fundamental Attribution Error can also be construed as rational tactics to maintain self-esteem.

In particular, the Actor-Observer Difference has recently been challenged by Malle (2006), who provided the first comprehensive review of this literature since Watson (1982).  In contrast to Watson, who offered a narrative review of a selected subset of the literature available to him, Malle performed a quantitative meta-analysis of 173 published studies.  For each, he calculated a bias score representing the extent of the Actor-Observer difference:



  • A positive score reflects a bias toward internal, personal attributions.
  • A negative score reflects a bias toward external, situational attributions.
Bias scores were then calculated separately for actors and observers.  The most salient finding of Malle's review is that the average bias score, aggregated across the 173 studies, is essentially zero, with complete overlap between actors and observers.  Put another way, there is no evidence for a difference in causal attribution between actors and observers.
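In code, the bias-score logic works something like this (a minimal sketch with made-up attribution counts, not Malle's data):

```python
from statistics import mean

# Per study: (actor internal, actor external, observer internal, observer external)
# attribution counts -- hypothetical numbers, for illustration only.
studies = [(12, 14, 15, 13), (20, 18, 17, 21), (9, 9, 10, 10)]

def bias(internal, external):
    """Positive -> leaning toward personal attributions; negative -> situational."""
    return (internal - external) / (internal + external)

actor_bias    = [bias(ai, ae) for ai, ae, _, _ in studies]
observer_bias = [bias(oi, oe) for _, _, oi, oe in studies]

# The traditional asymmetry predicts mean(actor_bias) < 0 < mean(observer_bias);
# Malle's aggregate finding was that both means hover near zero.
print(mean(actor_bias), mean(observer_bias))
```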

 

 

The results were much the same when Malle looked at the subset of studies involving positive and negative events.  Here there was some evidence of a bias toward external attributions for negative events, and internal attributions for positive events -- but the bias was very weak.

 

 

And, if you think about it, there wasn't much evidence for the Fundamental Attribution Error, either.  If there is little or no actor-observer difference in causal attribution, then people make attributions about others the same way they make attributions about themselves.  And these attributions appear to consist of a nicely balanced mix of the internal and the external.

 

 

So, why all this attention to the Fundamental Attribution Error, the Actor-Observer Difference, and the like?  Apparently, it takes a long time to correct the record.  In another analysis, Malle plotted the magnitude of the Actor-Observer difference as a function of when each study was published.



  • Perhaps the most interesting feature, somewhat paradoxically, is that in the very earliest studies, published in the early 1970s, the Actor-Observer difference was, in fact, vanishingly small.
  • The effect then grew over the next five years or so, but then began to diminish in size.
  • In the latest research, if anything, the effect has reversed.


A Misconception from the Beginning?

In the final analysis, it may be that the classic literature on causal attribution -- the literature that dominates most lectures and textbooks -- is based on a misconception concerning causal relations in social interaction.

In the first place, consider what started it all: What E.E. Jones called "Lewin's grand truism",

B = f(P, E).

Heider took this scientific statement about the actual causes of behavior as a framework for thinking about the naive scientist's analysis of phenomenal or perceived causality.  Just as traditional personality and social psychologists attributed causality to the person (among personality psychologists) or the environment (among social psychologists), Heider assumed that the naive psychologist did pretty much the same thing.

But this argument assumes that P and E are independent of each other -- which was not Lewin's idea at all!  Lewin's view was that P and E were interdependent, and that together they constituted a single field, which he sometimes called the Life Space.

 

 

Moreover, according to the Doctrine of Interactionism, we now understand that P constructs E through his or her behavior (evocation, selection, and behavioral manipulation), and through his or her mental activity (cognitive transformation).

 

 

Therefore, considering "Lewin's grand truism" and the Doctrine of Interactionism, it appears that Heider -- and especially all the social psychologists who followed his lead -- made a false distinction between the person and the environment.

  • It is not the situation that causes behavior, but the perceived situation -- and perception is internal to the person.
  • According to the Doctrine of Mentalism, mental states are to actions as cause to effect.  But mental states are intentional in nature, and intentionality is also a feature of the person.
Therefore, from a psychological point of view, correct attributions are always to the person.
  • This is true if you're a professional psychologist analyzing the cause of behavior.
  • And it is also true if you're a naive scientist trying to make sense of your world.

The temptation is to say that we should start the study of causal attribution all over again, from the beginning, abandoning the false distinction between the person and the environment and embracing the implications of the Doctrine of Mentalism and the Doctrine of Interactionism -- and the psychological argument that it's not the situation that causes behavior, but the perception of the situation.  

 

A Folk-Conceptual Theory of Causal Attribution

Just such a re-start has been attempted by Bertram Malle (2005), in what he calls a folk-conceptual theory of causal attribution.  In this work, Malle abandons the Heiderian model of the social perceiver as naive scientist, and seeks to understand how ordinary people actually reason about behavior.  After all, that's what social cognition tries to understand: how people -- folk -- think about themselves and other people.

 


Actions and Behaviors, Reasons and Causes

Malle argues that the folk-conceptualization of causality makes a distinction between two kinds of behaviors:

  • unintentional behavior, such as involuntary reflexes and other habitual responses; and
  • intentional action, such as voluntary actions of all sorts.
These two kinds of behaviors, in turn, are explained in terms of two quite different kinds of generating factors:
  • Intentional behaviors are explained by reasons -- the beliefs, desires, values, and the like that caused the person to act as he did.
    • Anne studied for the test all night because she wanted to do well.
    • This, of course, is exactly the kind of causal reasoning implied by the Doctrine of Mentalism -- that mental states stand in relation to action as cause to effect.
    • Explanation by reasons is based on an assumption of rationality -- that there is an almost-logical connection between the behavior and the reason given for it.
      • If Anne wants to do well, it is reasonable that she will study all night.
    • And it is also based on an assumption of subjectivity -- that the actor is aware of the mental states that lead him or her to behave as he or she does. 
      • Anne studies all night precisely because she is aware of her desire to do well.
  • Unintentional behaviors are explained by causes -- the factors that give rise to the behavior in an almost mechanical fashion.
    • Anne was nervous about the test results because she wanted to do well.
    • This bears a closer resemblance to the kind of causality envisioned by traditional stimulus-response theories.
    • Explanation by causes makes no assumption of rationality -- the connection between cause and effect is mechanical, but not logical. 
      • Anne's desire to do well made her nervous, but anxiety isn't a rational, or logical, outcome of desiring to do well.

The reasons people give for voluntary actions typically form a combination of belief and desire.  In the folk-conceptualization of causality, mental states of belief and desire are necessary precursors to intentional action.

  • In the example above, Anne studied for the test all night because she desired to do well and (apparently) believed that it was necessary to study all night in order to do well.
  • But in the other example, there's just the desire, which makes Anne nervous.

 

Mental State Markers

The actor's mental state is often marked linguistically. 

  • In the statement She went to the cafe because she wanted an authentic cappuccino, the actor's desire is explicitly marked by the verb to want.
  • In the statement She went to the cafe because she thinks they have authentic cappuccino, the actor's belief is explicitly marked by the verb to think.
In other instances, however, the actor's mental state is linguistically unmarked.
  • In the statement She didn't speak up because the teacher was there, the causal analysis -- that she didn't want to be overheard, or some such -- is only implicit in the causal reasoning.
  • Note that in the statement She went to the cafe because she wanted an authentic cappuccino, the actor's desire is marked, but her belief -- that the cafe actually serves the authentic cappuccino she wants -- is unmarked.
  • And in the statement She went to the cafe because she thinks they have authentic cappuccino, the actor's belief is marked, but her desire -- that she actually wants the authentic cappuccino she thinks they serve -- is unmarked.

The distinction between marked and unmarked mental states is important, because the absence of explicit markers for the actor's beliefs and desires can confuse internal with external causes.

  • In the statement She didn't speak up because the teacher was there, it is possible to construe the teacher's presence as an external cause of her failure to speak up.  But the real reasons are clearly internal: she believed that the teacher would overhear her, and she didn't want to be overheard.
  • Similarly, in the statement She went to the cafe because they have authentic cappuccino, the behavior might be mistakenly attributed to an external cause -- the fact that the cafe serves an authentic cappuccino.  But the real reason is internal -- that she knows (or believes) that they serve an authentic cappuccino, and she wants one.
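A toy version of this coding can be written in a few lines (a sketch; the marker verbs come from the examples above, and actual coding schemes are far richer):

```python
DESIRE_MARKERS = {"want", "wants", "wanted"}
BELIEF_MARKERS = {"think", "thinks", "thought", "believe", "believes", "knows"}

def code_markers(explanation):
    """Flag explicit linguistic markers of the actor's desires and beliefs."""
    words = {w.strip(".,'\"").lower() for w in explanation.split()}
    return {"desire_marked": bool(words & DESIRE_MARKERS),
            "belief_marked": bool(words & BELIEF_MARKERS)}

print(code_markers("She went to the cafe because she wanted an authentic cappuccino"))
# {'desire_marked': True, 'belief_marked': False} -- the belief stays unmarked
print(code_markers("She didn't speak up because the teacher was there"))
# {'desire_marked': False, 'belief_marked': False} -- reasons left wholly implicit
```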


Types of Causes

While reasons apply only to intentional acts, causes can apply to any physical event, whether it is behavioral or not.  Malle argues that causes can be arrayed on several dimensions corresponding to the dimensions familiar from the discussion above:

  • Internal vs. External
    • Compare She cried because she was unhappy vs. She cried because she was slicing onions.
    • Compare The tree fell because the wind was strong vs. The tree fell because its roots were shallow.
  • Stable vs. Variable
    • Compare She cried because she's a depressive vs. She cried because he broke up with her.
    • Compare The tree fell because the winds are strong here vs. The tree fell because of the tornado.
  • Global vs. Local
    • Compare She cried because she cries at everything vs. She cried because the teacher criticized her.
    • Compare The tree fell because the soil is bad here vs. The tree fell because it was planted poorly.


Enabling Factors

Whether the behavior is explained by a cause or a reason, the outcome is often controlled by the presence of factors that mediate between the cause and the effect:

  • Skill, as in She got an A because she's very smart.
  • Opportunity, as in She got an A because her date was cancelled.
  • Removed obstacles, as in She got an A because she found her notes.

 

Causal History of Reasons

Beliefs and desires, in turn, may be explained in terms of causes that make no assumption of subjectivity or rationality.  These causal antecedents of reasons may come in many forms, for example:

  • Unconscious processes, favored by Freudian psychoanalysts, as in He planted the garden because he loved his mother.
    •  In Freudian theory, a behavior like planting a garden might be attributable to his desire to grow flowers, but his desire to grow flowers, in turn, might be caused by the displacement or sublimation of his sexual desire for his mother.
  • Personality factors, as in He planted the garden because he's cheap.
    • He planted the garden because he wanted inexpensive flowers, but he wanted inexpensive flowers because he's a cheapskate and doesn't like to spend money.
  • Socialization and culture, as in He planted the garden because he's a farm kid.
    • He planted the garden because he wanted his own fruits and vegetables, but he wanted his own fruits and vegetables because that was what his family always did.
  • Immediate context, as in He planted the garden because he was living in a hippie commune.
    • He planted the garden because he wanted to get along with his housemates, and hippies like to plant community gardens.

 

Flowcharts for Causal Explanation

Malle doesn't offer anything like Kelley's covariation calculus for causal attribution, but he does provide flowcharts that trace the logic of folk explanation.

Here's what the flowchart looks like as a whole:

  • Intentional actions are explained in terms of reasons -- beliefs and desires. 
    • The reasons themselves may be explained in terms of a causal history of reasons.
    • Reasons give rise to intentions.
    • Enabling factors determine whether the intentions will translate into actions.
  • Unintentional behaviors are explained in terms of mere causes.
    • Enabling factors may determine whether the causes translate into behaviors.
    • But reasons do not figure in the explanation.
    • And therefore, any question of causal history of reasons is moot.





[Two flowcharts: the explanation of intentional action (via reasons, causal histories of reasons, and enabling factors), and the explanation of unintentional behavior (via mere causes).]
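The two flowcharts can be compressed into a single routing function (a minimal sketch in my own notation, paraphrasing Malle's scheme rather than reproducing it):

```python
def explain(intentional, reasons=None, causes=None,
            causal_history=None, enabling_factors=None):
    """Route a behavior through the folk-explanation flow described above."""
    if intentional:
        explanation = {"reasons": reasons or []}              # beliefs and desires
        if causal_history:                                    # what lies behind the reasons
            explanation["causal history of reasons"] = causal_history
        if enabling_factors:                                  # whether intention becomes action
            explanation["enabling factors"] = enabling_factors
        return explanation
    # Unintentional behavior: mere causes; reasons (and their histories) are moot.
    return {"causes": causes or [], "enabling factors": enabling_factors or []}

print(explain(True,
              reasons=["wanted to do well", "believed all-night study was necessary"],
              causal_history=["achievement-oriented upbringing"]))
print(explain(False, causes=["her desire to do well made her nervous"]))
```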

 

The Actor-Observer Asymmetry Revisited

As persuasive as Malle's analysis of the folk conception of causation might be, it is an empirical question whether it alters our understanding of causal attribution.

Malle and his colleagues (2007) performed a number of studies in which they tested the implications of the folk-conception of causality against the traditional P-E framework.

Recall that the traditional framework simply holds that actors tend to attribute their behavior to external causes (situational factors), while observers tend to attribute actors' behavior to internal causes (like personality dispositions).  By contrast, Malle's revisionist analysis makes an entirely different, and more subtle, set of predictions:



  • Reason asymmetry:
    • Actors will be more likely than observers to offer reasons for behavior, because actors have privileged access to their reasons, by virtue of consciousness (subjectivity). 
    •  Actors will be less likely than observers to offer a causal history of reasons, because these causes often are inaccessible to conscious awareness.
  • Belief asymmetry
    • Actors will use more belief reasons.
    • Observers will use more desire reasons, because they often explain the behavior of others by simulating what they themselves would want in the circumstances.
  • Marker asymmetry
    • Actors will often leave their reasons unmarked, because they're so obvious to them, by virtue of their direct representation in consciousness.

Across the nine studies, Malle et al. (2007) found little evidence for the traditional Actor-Observer asymmetries, but considerable evidence for the asymmetries predicted by the folk-conceptual theory.



  • The traditional prediction that observers would be more likely than actors to attribute behavior to the actor's traits was only very weakly supported.
  • The traditional prediction that actors would attribute their behavior to situational factors, while observers would attribute it to the actor's personal factors, was not supported at all.
  • The revisionist prediction that actors would leave their reasons unmarked more often than observers do, received considerably stronger support.
  • The revisionist prediction that actors would offer more belief reasons, and observers more desire reasons, was also more strongly supported.
  • The revisionist prediction that actors would be more likely than observers to offer reasons was very strongly supported.

 

The Bottom Line on Causal Attribution

Malle's work suggests that the folk-conceptual theory offers a powerful framework for understanding how real people actually explain social behavior in the real world.

The traditional P-E framework is inappropriate, for a number of reasons:

  • It treats all causes as "mere causes", attributing behavior to traits and situational factors that have nothing to do with intentionality, reasons, beliefs, or desires.
  • It ignores the reasons and desires that real people invoke when they actually explain their behavior.
Recall that, from a psychological point of view, voluntary behavior is always to be explained in terms of the organism's mental states -- beliefs, feelings, and desires.  

In this respect, it might be said that, because it ignores the Doctrine of Mentalism, the traditional P-E framework is a distinctly unpsychological psychological theory of causal attribution.

And it also ought to be said that, precisely because it invokes the Doctrine of Mentalism, the folk-conceptual theory is better psychology.

Note, too, that beliefs, feelings, and desires are internal states and dispositions of the actor.

Therefore, 

The Fundamental Attribution Error is not an error!  But it is fundamental!


For an excellent overview of traditional attribution theory, see Causal Attribution: From Cognitive Processes to Collective Beliefs (1989) by Miles Hewstone, a British psychologist.

 

Moral Judgment

Social interaction involves at least three cognitive tasks.

If the causal attribution involves factors that are internal to the person, as opposed to external in the situation, the perceiver often faces a fourth task: making a moral judgment about the actor and his or her behavior.

How we make these moral judgments has been a concern of philosophers and psychologists for a long time. In the West, this history begins with the Greeks and their debates over "the good life": the Sophists and Plato, the Stoics and the Epicureans. There is the Judeo-Christian tradition, with the Ten Commandments, Jesus' summary of The Law, the Sermon on the Mount, and his "new commandment" that we love one another. The medieval period gave us Aquinas's marriage of Platonic and Christian thought. The Enlightenment brought us Hobbes's ethical naturalism, Hume's utilitarianism, and Kant's categorical imperative. The 20th century saw the rise of meta-ethics, concerned with the nature of moral judgment, rather than with questions of right and wrong per se, and that is where the psychology of moral judgment begins as well. For example, Lawrence Kohlberg (Kohlberg, 1969) offered his neo-Piagetian stage theory of moral development, describing the transitions from preconventional to conventional to postconventional reasoning, and Carol Gilligan (Gilligan, 1982) distinguished between rational moral judgments based on justice and relational judgments based on compassion.

Kohlberg's view dominated the textbooks for a long time -- not just because it was virtually the only game in town, but also because it was consistent with the cognitive revolution in psychology of the 1960s and 1970s. But in retrospect, we can see in the Kohlberg-Gilligan debate a foreshadowing of what might be called the affective counterrevolution which rose in the 1980s, and which affected all of psychology, including the psychology of moral judgment.

 

Cognition and Emotion in Psychology

To understand what happened, let us quickly review the relations between cognition and emotion in psychology. At its beginnings, psychology was almost exclusively focused on cognition. The primacy of cognition was implicit in psychology's philosophical roots: Descartes' proposition that reason is the mental faculty which is distinctively human, and the corresponding shift from metaphysics to epistemology as the focus of philosophy; and the emphasis of the British empiricists on the experiential origins of knowledge. Accordingly, the 19th-century psychophysicists and physiological psychologists focused their experiments on sensation and perception, and -- with the exception of Wundt -- had very little to say about emotion. With the behaviorist revolution, "psychology lost its mind" (to paraphrase R.S. Woodworth, 1929): stimulus-response theory threw cognition out the window, and emotion went with it -- with the salient exception of W.B. Cannon's construal of emotion in terms of the "fight or flight" reflex. The cognitive revolution underscored the role of thought and knowledge as mediators between environmental stimulus and organismal response, gave a cognitive interpretation of learning in terms of the formation of expectancies, and returned a host of topics to psychology, especially attention, "short-term" memory, and language.

The cognitive viewpoint asserts, first and foremost, that people respond to their mental representation of the stimulus. Behavior is mediated by cognitive states of knowledge, belief, and expectation, acquired through perception, stored in and retrieved from memory, manipulated and transformed through processes of reasoning and problem-solving, and translated into behavior through processes of judgment and decision-making. The cognitive point of view was exemplified by a theory of rational choice in which consciousness was the default option. Some cognitive psychologists (and other cognitive scientists) construed the word cognitive as referring to any internal mental state, including emotional and motivational states. But strictly speaking, cognitive psychology construes emotion as a cognitive construction -- a belief about one's emotion that is a product of a more or less rational analysis of the situation in which one finds oneself. Thus, Schachter and Singer (Schachter & Singer, 1962) argued that emotions were shaped by the person's perception of the situation in which he experienced undifferentiated physiological arousal. Lazarus (Lazarus, 1968) argued that we could control our emotions by changing the way we thought about our situation. Smith and Ellsworth (Smith & Ellsworth, 1985) listed the various types of appraisals that gave rise to particular emotions. And Clore and Ortony (e.g., Ortony, Clore, & Collins, 1988) argued that emotions depended on the cognitive value of an event, with respect to the person's goals ("goals" themselves reflect a cognitive construal of motivation, but that is a topic for another paper). In each theory, cognition comes first, and emotion is determined by the cognition.

The success of the cognitive revolution can be seen in the fact that every university department of psychology has a graduate program devoted to cognitive psychology, but hardly any of them have similar graduate programs devoted to emotion (or motivation, for that matter). Similarly, we have a large number of textbooks devoted to aspects of cognition, but hardly any providing similar coverage of emotion (or motivation, but I digress). Almost half of Henry Gleitman's Psychology, perhaps the best introductory text since James' Principles, is devoted to cognition (8 of 17 chapters and 328 out of 715 pages in the 8th edition of 2011, not counting the material on cognitive development), while motivation and emotion share only a single chapter.

Beginning in the 1980s, the hegemony of cognition was challenged by what I have come to think of as an affective counterrevolution, exemplified by the Zajonc-Lazarus debate (Lazarus, 1981, 1984; Zajonc, 1980, 1984) in the pages of the American Psychologist. The general thrust of this affective counterrevolution was that emotion was at least independent of cognition, if not actually primary. Thus, Zajonc himself argued that "preferences need no inferences" because they could be shaped by "subliminal" stimuli processed outside of conscious awareness. Paul Ekman proposed a set of reflex-like basic emotions that were part of our phylogenetic heritage. Building on the earlier work of Cannon, Bard, and Papez, Paul MacLean (MacLean, 1990) and Joseph LeDoux (LeDoux, 1996) proposed that emotional reactions are controlled by brain structures that are different from those involved in cognitive processing. Accordingly, Jaak Panksepp (Panksepp, 1996) argued for a new interdisciplinary affective neuroscience modeled on, but independent of, cognitive neuroscience. Emotion now has its own textbooks (e.g., Niedenthal, Krauth-Gruber, & Ric, 2006), and we can only assume that free-standing graduate groups are not far behind.

 

Threats to Reason in Moral Psychology

What does all this have to do with moral judgment? My point is that the affective counter-revolution, with its insistence on the independence of affect from cognition, and on the dominance of affect over cognition, constitutes a threat to the role of reason in moral psychology. Actually, it is not the only threat. Moral reason is also threatened by the rise of what I have called a "People Are Stupid" school of psychology (Kihlstrom, 2004), which argues that people are fundamentally irrational, and that our thoughts and actions are overwhelmingly subject to unconscious, automatic influences that operate outside phenomenal awareness and voluntary control. And it is threatened by the biologization of psychology in general, everything from behavior genetics and evolutionary psychology to the neuroscientific doctrine of modularity, the general thrust of which is, once again, to limit the role of mind in behavior -- and, indeed, to dispense with psychology itself. (Mike Gazzaniga, the founder of cognitive neuroscience, has written that "Psychology is dead, and the only people who don't know it are psychologists"; M.S. Gazzaniga, 1998, p. xi.)

If you don't believe this to be the case, check out David Brooks' book, The Social Animal (2011; reviewed in Kihlstrom, 2012). Brooks is probably the foremost interpreter of psychological research and theory to the general public -- by virtue of his New York Times Op-Ed pieces, even more visible than Malcolm Gladwell or Jonah Lehrer -- and in this book he is constantly referring to Hume's dictum that reason is a slave to the passions. For Brooks, and the psychologists on whom he relies, thought and action are dominated by unconscious processes of emotion, intuition, and automaticity.


Even more to the point is a series of essays commissioned by the John Templeton Foundation in the spring of 2010, as part of its "Big Questions" series. These essays got almost no attention in the professional media -- neither the APA Monitor nor the APS Observer covered them -- despite the fact that this was the first time a Big Question targeted psychology. The Big Question was: "Does moral action depend on reasoning?" Among other authorities, five psychologists were asked to respond, and four of them said, essentially, "No".

Mike Gazzaniga led off by asserting that "all decision processes... are carried out before one becomes consciously aware of them".
Joshua Greene wrote that "moral judgment depends critically on both automatic settings and manual mode".
Jonah Lehrer (not a psychologist, strictly speaking, but one of the foremost interpreters of psychology to the general public, and the immediate source for much of Brooks' book) asserted that "moral decisions often depend on... moral emotions" that "are beyond the reach of reason".
Antonio Damasio (a cognitive neurologist, if not exactly a psychologist) wrote that "morality is based on social emotions that have their origins in 'prerational' emotional brain systems, neuromodulator molecules..., and genes which have 'early evolutionary vintage'".
And then there was me -- but we're not there yet.

 

Critique of Moral Intuitionism

Each in his own way, these authors reflect a point of view proposed by Greene and Jonathan Haidt known as moral intuitionism (Greene & Haidt, 2002; Haidt, 2007; Greene's position has changed somewhat since then). Greene and Haidt note that morality serves two important functions: at the micro level, it guides our social interaction, while at the macro level it binds groups together. But where does morality come from? Greene and Haidt argue for the primacy of intuition over reason in moral judgments. Far from reflecting the operation of human reasoning, these judgments are the product of evolved brain modules that generate what might be called the yuck factor -- an intuitive, emotional "gut feeling" that certain things are, well, just plain wrong. When we are asked to justify them, the reasons we give for our moral judgments are neither necessary nor sufficient; rather, they are more like post-hoc rationalizations.

In fact, they note, much of the time we can't give any reason at all -- the activity in question (for example, brother-sister incest or cannibalism) just seems, or feels, bad.  In an unpublished manuscript from early in the development of moral intuitionism, they called this condition moral dumbfounding: "the stubborn and puzzled maintenance of a moral judgment without supporting reasons" (Haidt et al., 2000, p. 1; see also Haidt, 2001).

Although moral intuitionism is relatively new as a psychological theory, the general idea is old enough to have been critiqued by John Stuart Mill, in his 1843 treatise A System of Logic (see Ryan, 2011). When we rely on intuitions, Mill wrote, there is no need to question prevalent moral judgments, nor any need to explain how our intuitions came to be what they are; nor do we have any means of resolving conflicts between individuals' competing intuitions. They just are what they are. Mill agreed that intuition played an important role in some fields, such as mathematics, but he thought that a reliance on intuition should not extend to ethics and politics, because it "sanctifies" traditional opinions and provides an intellectual buttress to conservatism.


Indeed, moral intuitionism can be seen as a threat to democracy. How do you debate, how do you compromise, with someone whose moral judgments rely on intuitions? In this respect, I was put in mind of a quote from Heinrich Himmler, head of the SS in Nazi Germany, who wrote that "In my work for the Fuhrer and the nation I do what my conscience tells me is right and what is common sense" (quoted in the biography by Peter Longerich, 2011).

Of course, it doesn't matter if moral intuitionism is a threat to democracy, if in fact it is true -- that is, if it's a valid scientific theory about how moral judgments are made. Accordingly, it's important to examine the evidentiary base for moral intuitionism, to determine the extent to which it is actually supported by empirical evidence.


Moral Stupefaction

It should also be noted that Haidt and Greene, who initially proposed moral intuitionism, diverge somewhat in their takes on its implications.

Despite these differences in view, both Haidt and Greene argue that moral dumbfounding supports their claim that reasoning plays little or no role in moral judgments, and that, when it comes to moral behavior, we act on feelings rather than for reasons.

At the same time, it should be noted that dumbfounding doesn't necessarily imply that moral intuitionism is correct.  The consequentialism of Joshua Greene noted above, based on the views of Peter Singer, makes that clear.  But in a critique of moral intuitionism, Daniel Jacobson (2012) offers other arguments against intuitionism.  He argues that, in the face of moral dumbfounding, moral psychologists and philosophers like Haidt and Greene have become morally stupefied by their own theories.  As a result of this stupefaction by a narrow-minded theory, they cannot see the good reasons for the judgments that people make.

To make things worse, Jacobson invokes (without using the term) the demand characteristics of the incest-cannibalism experiment.  The situation (incest or cannibalism) is explained to the subjects, they say it's wrong, and then the experimenter, acting as a devil's advocate, insists that since "no harm was done", it couldn't be wrong.  At this point, the subject may feel browbeaten by the experimenter, and simply retreat into solipsism.  But, Jacobson argues, harm was done in both instances.

The point here is not so much that Jacobson's arguments against Haidt's theory are correct (they strike me as quite compelling, though because I'm not a trained moral philosopher, I may have missed something about the argument).  The point here is that Haidt's subjects may have had perfectly good reasons to reject incest and cannibalism, despite the fact that "no harm" came to pass in either instance.  But we never got to find this out, because Haidt's experimenters simply rejected any argument that wasn't based on a narrow definition of "harm".

Tamsin Shaw's Critique of Moral Psychology

The philosopher Tamsin Shaw has offered a critique of contemporary moral psychology ("The Psychologists Take Power", New York Review of Books, 02/25/2016).  Actually, she offers two critiques, one of moral psychology, the other of immoral psychologists.  The two are related, in her mind.  She asserts that experimental psychologists and neuroscientists claim to have the key to superior, "scientifically correct" moral reasoning; and then impeaches this claim by asserting that psychologists engage in, or at least support, torture.

Her target, in the latter instance, was a pair of psychologists, Bruce Jessen and James Mitchell, who drew on Martin E.P. Seligman's notions of learned helplessness to devise methods of torture used in the "enhanced interrogation" of al-Qaeda and other "Islamic" terrorists, earning millions of dollars -- $81 million, to be exact, though their actual contract, terminated prematurely, was for much more than that -- in the process; and the officials at the American Psychological Association who enabled the program by either turning a blind eye to it or reinterpreting the APA ethics code to permit their activity.
Shaw also implicates Seligman himself in this sordid business, noting that he had his own contract with the military -- $31 million to support his Positive Psychology Center at the University of Pennsylvania, which in turn participates in enhancing soldiers' resilience in the face of capture and torture through the Pentagon's Survival, Evasion, Resistance, and Escape (SERE) school.  Seligman, in an exchange with Shaw in the NYR ("Learned Helplessness and Torture: An Exchange", 04/21/2016), calls her article "defamatory" and denies having anything to do with the Mitchell-Jessen torture program.  [Full disclosure: my PhD is from the University of Pennsylvania, and Seligman was one of my teachers, and his work on learned helplessness was the inspiration for one of my favorite papers, co-authored with Susan Mineka (a Penn classmate), on "Unpredictable and Unavoidable Aversive Events: A New Perspective on Experimental Neurosis" (1978).]
Turning to the former instance, which is more relevant in the current context, Shaw's primary target is the moral intuitionism promoted by Haidt, Greene, and others.  To recapitulate, Haidt and Greene, and others in the moral intuitionist camp, argue that moral judgments are mediated by two different processes: one fast, intuitive, and emotional; the other slow, deliberate, and rational.  However, like other dual-process theorists (such as Kahneman, with his distinction between System 1 and System 2 thinking), they argue that the intuitive process, being faster than the deliberative one, nearly always wins the race.  The fast process evolved in an environment in which threats were up-close and personal, and is unsuited to contemporary life, where the threats are more distant and impersonal.  For those kinds of threats, intuition may be misleading and needs to be corrected by rational deliberation.  These rational deliberations should be premised on a norm of social cooperation, and moral judgments should be validated against the consequences of their ensuing actions -- in particular, whether they avoid conflict.  Shaw's argument against this position isn't completely clear, but she seems to object to (1) the intuitionists' criticism of the role of intuition and emotion; (2) their emphasis on conflict-avoidance; and (3) the implication that psychology, neuroscience, and evolutionary biology can and should supplant philosophy as the means for arriving at judgments about morality.  She also points to the irony of psychologists asserting the superiority of science while aiding and abetting torture.

Haidt and Steven Pinker, who affiliates with the moral intuitionist camp, disputed this picture in another exchange with Shaw ("Moral Psychology: An Exchange", NYR, 04/07/2016).  They asserted, correctly, that Shaw's linking of moral psychology and torture amounted to little more than guilt by association (some prominent moral intuitionists teach in Seligman's "Positive Psychology" program; Mitchell and Jessen were inspired by Seligman's theory of learned helplessness; therefore the moral intuitionists, not to mention Seligman, are associated with torture; and because of their association with torture, science does not provide us with a "reliable moral compass".  Whew!).  More important, H&P denied that science had any special authority when it comes to moral judgments.  They, quite rightly, distinguished between descriptive theories of how people make moral judgments, which are properly the subject of science, and prescriptive theories of what constitutes morality, which are properly the province of philosophy.  In their dual-process theories, they claim merely to be describing how people make moral judgments, not prescribing what a proper judgment should be.

But, frankly, this isn't quite true.  The moral intuitionists do suggest that intuition is often insufficient, and has to be justified, and if necessary corrected, by reasoning; and that this reasoning should be based on norms like social cooperation, leading to something like the biblical Golden Rule or Kant's categorical imperative.  As they write, "Psychology, neuroscience, and evolutionary biology, though they cannot by themselves debunk moral intuitions, are highly relevant to evaluating them".  And also: "while primitive physical revulsion may serve as an early warning signal indicating that some practice calls for moral scrutiny, it is the 'more sophisticated reasoning' that should guide us through times of crisis" -- reasoning based on a norm of social cooperation.  But if science is relevant to the evaluation of moral intuitions, isn't that essentially saying that rationality trumps intuition?  So Shaw has a point, even though she spoils her argument by conflating moral psychology (and moral psychologists) and torture.
Shaw isn't just after the moral intuitionists.  In another article in the NYR, she goes after Daniel Kahneman and Amos Tversky, and their followers, like Cass Sunstein and Richard Thaler, who have argued that public policy should take advantage of the frailties in human reasoning discovered by the "heuristics and biases" approach to judgment and decision-making ("Invisible Manipulators of Your Mind", 04/20/2017).  I discuss her argument in the Lecture Supplements on "Thinking: Reasoning, Judgment, Problem-Solving, and Decision-Making" in my General Psychology course.


The Trolley Problem

As far as I can tell, the reference experiment for moral intuitionism is a philosophical conundrum known as the Trolley Problem, originally devised by Philippa Foot (1967) and popularized by Judith Jarvis Thomson (Yale Law Journal, 1985), among others (cartoon by Trevor Spaulding, New Yorker, 06/26/2017).



Here's the problem with the Trolley Problem: It turns out that many more people think that it's morally justifiable to switch a trolley from one track to another, sacrificing one life to save five, than think it's morally justifiable to push a fat man off a footbridge onto the trolley tracks, killing him but saving those same five lives. The trick, of course, is that both versions of the Trolley Problem involve the same expected outcome -- one life lost, five lives saved. From a rational-choice point of view, it's a no-brainer. So why do people resist the choice?  And why is there such variance in choice, depending on how equivalent outcomes are framed? The conclusion, therefore, is that rational choice can't account for people's moral judgments. Something else must be involved, and that something else consists of emotional intuitions -- the "yuck factor" generated by a specialized brain module that became part of our phylogenetic equipment over the course of evolutionary time.
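To see just how equivalent the two versions are on paper, here is a minimal sketch -- my own illustration, not drawn from any study discussed here -- of the rational-choice accounting, in Python, with the outcome counts taken from the standard statement of the problem:

def expected_deaths(acts: bool) -> int:
    """Standard problem: acting kills 1 person, refraining kills 5."""
    return 1 if acts else 5

# The accounting is identical for both variants of the problem.
for variant in ("switch the trolley", "push the man off the footbridge"):
    print(f"{variant}: act -> {expected_deaths(True)} death(s), "
          f"refrain -> {expected_deaths(False)} deaths")

On this accounting the two variants are indistinguishable, which is precisely why the divergence in people's judgments cannot be explained by expected outcomes alone.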

But it turns out that there are problems with the Trolley Problem. In the first place, it strikes me that the Trolley Problem lacks ecological validity in Orne's sense (Orne, 1962). It's not at all clear that the Trolley Problem is representative of the kinds of moral dilemmas that confront us in the ordinary course of everyday living. When was the last time you were on a bridge, next to a fat man, with a trolley racing along the tracks below you toward five people tied to the track by some Snidely Whiplash? But of course, that just may be my intuition, and there's no arguing with intuitions.

Moreover, posing the Trolley Problem the way it is posed necessarily inflicts moral dumbfounding on subjects, because of the experimenter's insistence that the harm posed by the two alternatives is equal.  It is equal, in the sense that one person dies under either scenario.  But the harm may differ in other respects.  And if a morally stupefied experimenter imposes on subjects a narrow definition of harm, it should not be surprising that the subjects act as if they are morally dumbfounded.

More important, note that, in the Trolley Problem, reason is ruled out by experimental fiat. That is, the Trolley Problem has been constructed such that all outcomes are rationally equivalent, and subjects cannot make a choice based on expected outcomes or utilities. They have to do something else. Perhaps, under such circumstances, people do rely on their moral intuitions, or on some other basis for judgment. But it hardly seems correct to conclude, from their responses in this highly constrained situation, that emotion supplants reason in moral judgment.

Nor is there any comparison of effect size. What we'd really like to see, in an experiment such as this, is an experimental manipulation of both emotional and rational factors, so we can determine when emotion dominates reason, under what circumstances, and by how much.

Finally, there is no consideration of a "cognitive" alternative. In this respect, it's interesting to note that a cognitive alternative to moral intuitionism is available. Inspired by Noam Chomsky's notion of a Universal Grammar underlying human language, John Mikhail (Mikhail, 2007) has offered a universal moral grammar that provides a good explanation of responses to various versions of the Trolley Problem in purely cognitive terms -- it's a grammar, after all -- without invoking emotions or intuitions. Mikhail begins, like any good cognitive psychologist, by invoking what he calls "the poverty of the moral stimulus" -- that the situations that demand moral judgment usually do not contain enough information to enable us to make that judgment. People form a mental representation of the situation, and then apply a moral grammar to render a moral judgment. It's all very cognitive -- all very rational.

The bottom line is that there is no good empirical reason to think that emotion and intuition rule moral judgment. Maybe, as in the Trolley Problem, affect and intuition act as a sort of tie-breaker, in those circumstances when rational choice does not suffice. Maybe reason serves to challenge and correct our moral intuitions. Or maybe affect serves as information for cognition. In any case, neither cognition nor emotion dominates the other. Rather, it seems that in moral judgment as in other aspects of mental life, cognitive, emotional, and motivational processes work together, and the balance between them varies depending on the situation.

Notice that, in this formulation, emotion is more than a cognitive construction. I'm a cognitive psychologist, but I have always distrusted the idea that emotions are merely cognitive constructions -- that we don't really feel anything, we just think we do. I've always preferred the formulation by Immanuel Kant, who asserted (in The Critique of Judgment, 1790, as paraphrased by Watson) that "there are three absolutely irreducible faculties of mind: knowledge, feeling, and desire". What Kant meant was that none of these faculties could be reduced to the others, as in the cognitive-constructivist account of emotion. Emotion, then, has an existence that is independent of cognition. But just because emotion is not reducible to cognition, that does not mean that cognition and emotion cannot interact. We know that emotion can color perception, memory, and thought; and we know that thinking can generate, and regulate, emotion. We can dispense with arguments about the primacy of either cognition or affect, and get on with the business of discovering how they work, separately and together, and how they each play a role in matters such as moral judgment.


Morality in Robots

One approach to moral judgment is to think about how we might build a moral sense into a robot -- not a pie-in-the-sky proposition, given recent advances, if that's what they are, in robotic warfare, in which drones pick out their own targets. 

One approach was proposed by Isaac Asimov, the science-fiction author, in his "Three Laws of Robotics" (I, Robot, 1950).  These are: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Now that we actually have robots, with increasing degrees of autonomy, philosophers and other cognitive scientists have taken seriously the problem of how to get robots to make moral judgments, and thus behave morally. 
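From a computational standpoint, what is interesting about the Three Laws is their strict priority ordering: each law yields to the one above it.  As a purely illustrative sketch -- the Action fields and the candidate scenario are hypothetical, invented for this example -- the ordering can be captured as a lexicographic comparison in Python:

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # direct harm (First Law)
    allows_human_harm: bool  # harm through inaction (also First Law)
    disobeys_order: bool     # Second Law
    endangers_self: bool     # Third Law

def law_violations(a: Action) -> tuple:
    # Lexicographic cost: a First-Law violation outranks any combination
    # of lower-law violations.  False sorts before True in Python.
    return (a.harms_human or a.allows_human_harm,
            a.disobeys_order,
            a.endangers_self)

# Hypothetical scenario: an order to attack versus a costly refusal.
candidates = [
    Action("obey the order to attack", harms_human=True,
           allows_human_harm=False, disobeys_order=False, endangers_self=False),
    Action("refuse and shield the victim", harms_human=False,
           allows_human_harm=False, disobeys_order=True, endangers_self=True),
]

print(min(candidates, key=law_violations).name)  # refuse and shield the victim

Because tuples are compared element by element, disobedience and self-sacrifice together still cost less than a single First-Law violation -- which is exactly the "except where such orders would conflict" structure of Asimov's formulation.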



Does Moral Judgment Require a Judgment At All?

Morality is usually construed as the province of religion, or philosophy, which is where the discussion and debate over "what's moral" gets heated.  But some theorists have argued that religion, and philosophy, are irrelevant, and that science -- particularly neuroscience --  can provide an objective answer to the question of what is moral and what is not.  Not all scientists have made this strong claim.  Stephen Jay Gould, for one, argued that science and religion constitute non-overlapping magisteria (NOMA): science being concerned with matters of truth, and religion being concerned with matters of value (Gould introduced his argument in an essay, "Nonoverlapping Magisteria" published in Natural History (1997), and developed it further in his book, Rocks of Ages, 1999). 

Gould's arguments were opposed by some of The New Atheists, who don't think that religion has a place in any discussion.  For example, the evolutionary biologist Richard Dawkins argued that religion should just shut up when it comes to matters of scientific truth (here he's talking mostly about evolution versus creationism).  And the neuroscientist Sam Harris has argued that science has the final word about questions of value, too.  In The Moral Landscape: How Science Can Determine Human Values (2010), Harris argues that neuroscience can give objective answers to questions of moral value.  He argues that questions of morality come down to "facts about the well-being of conscious creatures", and that science, and particularly neuroscience, is well-placed to determine what those facts are.  All -- all! -- we have to do is to determine the basic components of well-being, and then identify those choices and behaviors that maximize well-being, so measured.  No more cultural relativism.  No more disputes between Jews and Christians, Sunni and Shia, fundamentalists and progressives, etc.  And that is something that is well within the capability of neuroscience (which seems to include scientific psychology).

If this sounds like utilitarianism, it is (see "Science Knows Best" by Kwame Anthony Appiah, New York Times Book Review, 10/03/2010).  The difference is in how we determine "the greatest good for the greatest number".  It's no longer a matter of reason and debate, or even of intuition.  It's an objective truth, to be determined by something like brain-imaging (I suppose).  But in any event, if Harris is right, there's no need for a psychology of moral judgment -- or, for that matter, for any kind of judgment at all.  It's really just a matter of physics.
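To make the bookkeeping concrete, here is a toy version of the calculus Harris seems to envision, assuming -- purely for illustration -- that the hypothetical "well-being meter" discussed below could report a number for each person under each policy:

# Fabricated placeholder scores standing in for whatever a hypothetical
# "well-being meter" would actually report.
wellbeing = {
    "policy A": [7, 7, 6, 6],  # one score per conscious creature
    "policy B": [8, 3, 5, 6],
    "policy C": [7, 7, 6, 7],
}

# "The greatest good for the greatest number": choose the policy with
# the highest aggregate (here, summed) well-being.
best = max(wellbeing, key=lambda p: sum(wellbeing[p]))
print(best, sum(wellbeing[best]))  # policy C 27

Note that even this toy version smuggles in a moral commitment: aggregating by sum, rather than by minimum or median, is itself a value judgment that no meter reading can settle.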

But -- and this is a big "but" -- even if Harris is right, in order for science to objectively determine moral values, there's going to have to be a "well-being" meter in the head, and we're going to have to have a technology that can read it.  And that reading has to be the same for everyone.  And there has to be only one such meter, so as to avoid conflicting "truths" from multiple meters.  Good luck with this.



Automaticity in Social Judgment

The work on heuristics and biases laid the foundation for a broader view that most social cognition, and therefore most social behavior as well, is automatic in nature -- closer to reflexes, taxes, instincts, and conditioned responses than to deliberate, conscious, thoughtful action (Bargh, 1984).

In cognitive psychology, automatic processes are distinguished from controlled processes by a number of features:

  • Inevitable Evocation
  • Incorrigible Execution 
  • Effortlessness
  • No Interference 
These features combine to render automatic processes unconscious in the strict sense of the term.
Some automatic processes are innate; others are acquired by a process sometimes known as proceduralization.

In cognitive psychology, automaticity is classically illustrated by the Stroop color-word effect.  You can't help reading a color word, and that interferes with the task of naming the color in which the word is printed.  The general consensus in the field is that two quite different types of processes contribute to performance on any task: automatic processes and controlled processes.
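To illustrate the dual-process logic -- this is a toy "horse race" simulation of my own, not any particular published model, and all parameters are invented -- suppose that word reading is fast and obligatory while color naming is slower and intentional, and that an override cost is paid whenever the wrong (automatic) response arrives first:

import random

def stroop_trial(congruent: bool, rng: random.Random) -> float:
    read_time = rng.gauss(300, 30)   # automatic process: fast, obligatory (ms)
    name_time = rng.gauss(450, 50)   # controlled process: slower, intentional (ms)
    if congruent or name_time < read_time:
        return name_time             # no conflict to resolve
    return name_time + 100           # override cost when the word "wins"

rng = random.Random(1)
for congruent in (True, False):
    rts = [stroop_trial(congruent, rng) for _ in range(10_000)]
    label = "congruent" if congruent else "incongruent"
    print(f"{label}: mean RT = {sum(rts) / len(rts):.0f} ms")

In this little simulation, incongruent trials come out roughly 100 ms slower than congruent ones, mirroring the classic interference effect: the automatic process runs whether you want it to or not, and control is exercised only at a cost.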

Social psychology, for its part, has seen the introduction of a large variety of dual-process theories covering various topics, especially in social cognition, all of which are based on the automatic-controlled distinction (or something very much like it).
At the same time, these ostensibly "dual-process" theories in social psychology have emphasized the role of automaticity, almost to the exclusion of conscious control -- a trend that I have labeled the automaticity juggernaut.



As John Bargh (1984), one of the earliest and most vigorous proponents of automaticity, has put it:

"As Skinner argued so pointedly, the more we know about the situational causes of psychological phenomena, the less need we have for postulating internal conscious mediating processes to explain these phenomena."

As the quotation from Bargh suggests, the automaticity juggernaut is closely related to Skinnerian functional behaviorism, and to the doctrine of situationism that pervaded classic experimental social psychology.  Strictly speaking, it is not really behaviorism -- because the proponents of automaticity argue that internal cognitive and other mental processes do mediate between stimulus and response after all; it's just that these mental processes operate unconsciously and automatically.

With apologies to Alexander Dubcek and Susan Sontag, I call this stance:

behaviorism with a cognitive face.

As someone who has devoted his entire career to trying to get psychologists to take a non-Freudian view of unconscious mental life seriously, I find that Bargh's work reminds me of the warning in one or two of Aesop's fables: "Be careful what you pray for: you might get it."

There is little doubt that automaticity plays a role in social cognition and behavior, but there is every reason to doubt that automatic processes overshadow controlled processes at every turn.

  • In the first place, most studies of automaticity in social cognition and behavior are demonstration experiments, which reveal automatic processes in operation but do not compare them to controlled processes.
  • Many demonstrations of automaticity employ loose and controversial operational definitions of the concept.
  • There are very few direct comparisons of the strength of automatic and controlled processes, and most of those that are available show that controlled processes are more influential than automatic ones. 

There's a fuller discussion of automaticity in social cognition in the lecture supplements on Automaticity.

This page last modified 06/27/2017.