

Automaticity

Lecture Supplement


Note: Some material in this supplement is taken from my essay on "The Automaticity Juggernaut" (2008).

For more details on automaticity, see the lecture supplement on "Attention and Automaticity" prepared for my course on "Scientific Approaches to Consciousness".

See also the lecture supplement on "Free Will".


In the beginning, theories of social cognition often had a flavor of cold, rational calculation to them.  You can see this clearly in Anderson's cognitive algebra approach to impression-formation, and Kelley's covariation calculus for causal attribution -- and, indeed, the whole idea of social exchange, with its picture of individuals interacting with each other in such a way as to maximize gains and minimize losses. 

One reaction to this "cold, rational" view of social cognition was to emphasize the importance of emotion.
Another trend was to underscore the automaticity of social cognition and behavior.  These theorists argued that, rather than being the products of deliberate, rational thought, both social cognition and social behavior are triggered automatically by features of the environment.
The concept of automaticity has its roots in cognitive psychology, and in particular in debates over the nature of attention. 


The Evolution of Theories of Attention


William James, in the Principles (1890), gave attention a natural interpretation in terms of consciousness:

Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German.

In a very real sense, modern scientific research on consciousness began with studies of attention and short-term memory inspired by James' introspective analyses.  Consciousness has a natural interpretation in terms of attention, because attention is the means by which we bring objects into  conscious awareness.  Similarly, maintaining an object in primary (short-term, working) memory is the means by which we maintain an object in conscious awareness after it has disappeared from the stimulus field.  Attention is the pathway to primary memory, and primary memory is the product of attention.

Research on attention faded into the background during the heyday of the behaviorist revolution, but was revived in the context of applied psychological research conducted around World War II, well prior to the outset of the cognitive revolution.  Research on such problems as air traffic control and efficient processing of telephone numbers made two facts clear:

  • attention was limited, in that people could pay attention to only a few things at a time (this had been apparent as well from Woodworth's work on the span of apprehension); and  
  • attention could be controlled, so that consciousness was not necessarily dominated by the most salient stimulus in the sensory field.
There was also a general intuition that attention was the path to primary (short-term) memory, and that both attention and primary memory were closely bound up with consciousness.

Filter Models of Attention

The  earliest psychological theories of attention were based on the idea that attention represents a "bottleneck" in human information processing: by virtue of the bottleneck, some sensory information gets into short-term memory, and the rest is essentially cast aside.  

 

The first formal theory of attention was proposed by Broadbent (1958): aside from its substantive importance, Broadbent's theory is historically significant because it was the first cognitive theory to be presented in the form of a flowchart, with boxes representing information-storage structures, and arrows representing information-manipulation processes.

 

 

Broadbent's theory, in turn, was based on Cherry's (1953) pioneering experiments on the cocktail party phenomenon.  At a cocktail party, there are lots of conversations going on, and individual guests attend to one and ignore the others.  Cherry argued that attentional selection in such a situation was based on physical attributes of the stimulus, such as location, voice quality, etc.  He then simulated the cocktail-party situation with the dichotic listening procedure, in which two different auditory messages are presented to different ears over earphones; subjects are instructed to repeat one message as it is played, but to ignore the other.  The general finding of these experiments was that people had poor memory for the message presented over the unattended channel: they did not notice certain features, such as a switch in language, or a switch from forwards to backwards speech.  However, they did notice other features, such as whether the unattended channel switched between a male and a female voice.

From experiments like this Broadbent concluded that attention serves as a bottleneck, or perhaps more accurately as a filter.  The stimulus environment is exceptionally rich, with lots of events, occurring in lots of different modalities, each with lots of different features and qualities.  All of this sensory information is held in a short-term store, but people can attend to only one "communication channel" at a time: Broadbent's is a model of serial information processing.  Channels are selected for attention on the basis of their physical features, and semantic analysis occurs only after information has passed through the filter.  Attentional processing is serial, but people can shift attention flexibly from one channel to another.
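To make the architecture concrete, here is a minimal sketch, in Python, of an early-selection filter of the kind Broadbent proposed.  It illustrates the logic rather than reconstructing Broadbent's actual model: the message contents and the "attend to the female voice" selection policy are invented for the example.

```python
# A toy early-selection ("filter") architecture: everything enters a
# parallel sensory buffer, one channel is selected on purely physical
# attributes (ear, voice quality), and only that channel receives
# semantic analysis.  Channel contents and the selection policy are
# illustrative assumptions, not data from any experiment.

SENSORY_BUFFER = [
    {"ear": "left",  "voice": "male",   "message": "dogs chase cats"},
    {"ear": "right", "voice": "female", "message": "stocks fell today"},
]

def physical_filter(channels, attribute, value):
    """Early selection: choose a channel using only physical features."""
    return next(ch for ch in channels if ch[attribute] == value)

def semantic_analysis(channel):
    """Post-filter and serial: only the attended message is analyzed for meaning."""
    return f"understood: '{channel['message']}'"

attended = physical_filter(SENSORY_BUFFER, "voice", "female")
print(semantic_analysis(attended))  # the unattended message is never analyzed
```

On this scheme, a switch in the unattended channel's voice quality would be registered (it is a physical feature, available pre-filter), but a switch in its language would not -- exactly the pattern Cherry observed.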

Broadbent's model has two important implications for consciousness:

  • Perceptual analysis can be unconscious: if we pay attention to channels on the basis of their physical features, then we must have processed information about these physical features before we pay attention to them.
  • Semantic analysis must be conscious: if semantic analysis is performed after information has passed through the attentional bottleneck, then it cannot be performed unconsciously.
Basically, filter theories of the type proposed by Broadbent entail two important sets of equivalencies:
  • Preattentive = Preconscious = Perceptual;
  • Attentive = Conscious = Semantic.
In other words, preattentive processing is limited to perceptual analysis of the physical features of the stimulus.

Broadbent's filter model of attention laid the foundation for the earliest multi-store models of memory, such as those proposed by Waugh & Norman (1965) and Atkinson & Shiffrin (1968).  In these models, stimulus information is briefly held in modality-specific sensory registers.  A limited amount of sensory information is transferred to short-term memory by means of attention, and is maintained in short-term memory by means of rehearsal.  Under certain circumstances, information in short-term memory can be copied into long-term memory, and information in long-term memory can, in turn, be copied into short-term memory.

  

Broadbent's filter model of attention was a good start, but it was not quite right.  For one thing, at cocktail parties our attention may be diverted by the sound of our own name -- a phenomenon confirmed experimentally by Moray (1959) in the dichotic listening situation.  In addition, further dichotic listening experiments by Treisman (1960) showed that subjects could shift their shadowing from ear to ear to follow the meaning of the message; when they caught on to the fact that they were now shadowing the "wrong" ear, they shifted their attention back to the original ear.  But the fact that they shifted their attention at all suggested that they had processed the meaning of the unattended message to at least some extent.  These findings from Moray's and Treisman's experiments meant that there had to be some possibility for preattentive semantic analysis, permitting people to shift their attention in response to the meaning and implications of a stimulus, and not just its physical structure.  

For these reasons, Treisman (1964) modified Broadbent's theory.  In her view, attention is not an absolute filter on information, but rather something more like an attenuator or volume control.  Thus, attention attenuates, rather than prohibits, processing of the unattended channel; but this attenuator can also be tuned to contextual demands.  

At a cocktail party, you pay most attention to the person you're talking to -- attentional selection that is determined by physical attributes such as the person's spatial location.  At the same time, you remain alert for people talking about you -- thus, attentional selection is open to semantic information.

The bottom line is that attention is not determined solely by physical attributes, but rather can be deployed depending on the perceiver's goals.  This situation is analogous to signal detection theory, where detection is not merely a function of the intensity of the stimulus, and the physical acuity of the sensory receptors, but is also a function of the observer's expectations and motives.

Like Broadbent's model, Treisman's model is historically important: arguably the first truly cognitive theory of attention, it departs from the image of bottom-up, stimulus-driven information processing, and offers a clear role for top-down influences based on the meaning of the stimulus.  Broadbent (1971) subsequently adopted Treisman's modification.

Treisman's revised filter model has important implications for consciousness.  In her model, preattentive processing is not limited to information about physical structure and other perceptual attributes.  Semantic processing can also occur preattentively, at least to some extent.  The question is: 

How much semantic processing can take place preattentively?

 

Late-Selection Theories of Attention

Treisman's theory altered the properties of the filter/attenuator, but retained its location early in the sequence of human information processing.  The next stage in the evolution of theories of attention was to move the filter to a later stage, and then to abandon the notion of a filter entirely.  

Late-selection theories of attention (Deutsch & Deutsch, 1963; Norman, 1968) held that all sensory input channels are analyzed fully and simultaneously, in parallel.  Once sensory information has been analyzed, attention is deployed based on the pertinence of the analyzed information to ongoing tasks.



  • In early-selection theories, attention is required for the selection of input.
  • In late-selection theories, attention is required for the selection of a response. For this reason, late-selection theories are also sometimes called response-selection theories.
The implications of late-selection theories for consciousness are clear: late-selection theories permit complex semantic analyses to occur preattentively or preconsciously.

The debate between early- and late-selection theories formed the background for the controversy, which we will discuss later, concerning "subliminal" processing -- i.e., processing of stimuli presented under conditions where they are not detected.  The conventional view, consistent with early-selection theories, is that subliminal processing either is not possible, or else is limited to "low level" perceptual analyses.  The radical view, consistent with late-selection theories, is that subliminal processing can extend to semantic analyses as well -- because semantic processing, as well as perceptual processing, occurs preattentively.

The debate between early-selection and late-selection theories was very vigorous, and you can still see vestiges of it today.  But, like so many such debates, it seemed to get nowhere (similar endless, fruitless debates killed structuralism; in modern cognitive psychology, similar debates have been conducted over "dual-code" theories of propositional and analog/imagistic representations).

 

Capacity Theories of Attention

Some theorists cut through the seemingly endless debate between early- and late-selection theories by altering the definition of attention from some kind of filter (with the debate over where the filter is placed) to some kind of mental capacity.  In particular, Kahneman (1973) defined attention as mental effort.  In Kahneman's view, the individual's cognitive resources are limited, and vary according to his or her level of arousal.  These resources are allocated to information-processing according to a "policy", which in turn is determined by other influences:


  • Enduring dispositions, innate or over-learned tendencies, which are applied automatically;
  • Momentary intentions, conscious, goal-directed decisions, applied deliberately.
In Kahneman's theory, a person's ability to carry out a particular process depends on the resources required by it.  If the tasks are undemanding, several can be carried out in parallel; if the tasks are demanding, resources are focused on one task at a time, serially. 

Another version of capacity theory likened attention to a spotlight (Posner et al., 1980; Broadbent, 1982).  In the spotlight metaphor, attention illuminates a portion of the visual field.  This illumination can be spread out broadly, or it can be narrowly focused.  If spread out broadly, attentional "light" can be thrown on a large number of objects, but not too much on any of them.  If focused tightly, attentional "light" can provide a detailed image of a single object, at the expense of all the others.  As the information-processing load increases, the scope of the attentional beam contracts correspondingly.


Similarly, attention can be likened to the zoom lens on a camera (Jonides, 1983; Eriksen & St. James, 1986).  As load increases, the lens narrows, so that only a small portion of the field falls on the "film".  But at low loads, a great deal of information can be processed, though there will be some loss of detail.

 


The spotlight metaphor also raised the question of whether the attentional beam can be split to illuminate two (or more) noncontiguous portions of space.

 



Posner's spotlight metaphor has been particularly valuable in illuminating (sorry) various aspects of attentional processing.  For example, Posner has described three different aspects of attention, each associated with its own brain system:

  • Alerting, or achieving and maintaining a state of alertness, controlled by right frontal and parietal areas of the brain, and closely tied to the norepinephrine system.
  • Orienting, or the selection of information from sensory input, associated with areas of the parietal and frontal lobes.  Orienting, in turn, involves three quite different processes:
    • Engaging
    • Disengaging
    • Shifting.
  • Executive Control, or resolving conflict among competing demands and responses, associated with midline frontal and lateral prefrontal areas.

Kahneman's theory laid the foundation for current interest in automaticity: in his view, enduring dispositions influenced allocation policy automatically, outside awareness and voluntary control, while momentary intentions were applied consciously.  

The traditional view, associated with early-selection "filter" theories, was that elementary information-processing functions are preattentive, are performed unconsciously (or, perhaps better put, preconsciously), and require no attention.  By the same token, complex information-processing functions, including (most) semantic analyses, must be performed post-attentively, or consciously.

The revisionist view, associated with late-selection theories and capacity theories, agreed that elementary information-processing functions were preattentive, performed unconsciously or preconsciously.  But it asserted that complex processes, including semantic analyses, could be performed unconsciously too, so long as they were performed automatically. 


Early Approaches to Automaticity

Early in the history of cognitive psychology, there was a tacit identification of cognition with consciousness.  Elementary processes might be unconscious, in the sense of preattentive, but complex processes must be conscious, in the sense of post-attentive.  But the evolution of attention theories implied that a lot of cognitive processing was, or at least might be, unconscious.

LaBerge & Samuels (1974; LaBerge, 1975) argued that complex cognitive and motoric skills cannot be executed consciously, because their constituent steps exceed the capacity of attention.  Thus, at least some components of skilled performance must be performed automatically and unconsciously.  LaBerge & Samuels defined automatic processes as those which permit an event to be immediately processed into long-term memory, even if attention is deployed elsewhere.

 

LaBerge and Samuels illustrated their concept of automaticity with a model of the hierarchical coding of stimulus input in reading. 

  • In reading, the stimulus is simply an array of graphemic information. 
  • The graphemic stimulus must be analyzed to detect lines, angles, curves, and intersections.
  • Particular combinations of these elementary features have to be assembled into letter codes.
  • Particular combinations of letter codes have to be assembled into spelling-pattern codes.
  • Particular combinations of spelling-pattern codes form word codes.
  • Particular combinations of word codes form word-group codes, and so on.
Feature detection is intrinsically automatic, and occurs regardless of the deployment of attention.  All the other levels of reading require conscious effort at early stages, but can become automatized through practice.  After over-learning, the recognition of letters, spelling patterns, words, and word-groups occurs automatically.

The automatization of word-reading is illustrated by the Stroop effect.  In the Stroop experiment, subjects are presented with an array of letter strings printed in different colors, and their task is to name the color of the ink in which the string is printed.



  • Sometimes the letters comprise a meaningful word.
  • Sometimes the word is a color name.
  • And sometimes the word names a different color than that of the ink in which the word is printed.

In the Stroop effect, the presence of a word interferes with color-naming -- especially if the word is itself a contradictory color name (but, interestingly, even if the word and the color are congruent).  The "Stroop interference effect" occurs regardless of intention: clear instructions to ignore the word, and focus attention exclusively on the ink color, do not eliminate the interference effect.  The explanation is that we can't help but read the words: reading occurs automatically, despite our intentions to the contrary.

The Stroop effect comes in many forms, not all of which involve colors and color words.  For example, Stroop interference can be observed when subjects are asked to report the number of elements in a string, and the elements themselves consist of digits rather than symbols that have nothing to do with numerosity.
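The logic of the paradigm is easy to see in the construction of the stimuli themselves.  Here is a minimal sketch of a stimulus generator for the classic color-word version; the particular color set, the "XXXX" neutral control, and the trial counts are illustrative assumptions rather than the parameters of any published experiment.

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def make_trial(condition):
    """Build one Stroop trial: a letter string plus the ink color to be named."""
    ink = random.choice(COLORS)
    if condition == "congruent":      # word matches the ink: RED printed in red
        word = ink
    elif condition == "incongruent":  # word names a different color: RED in blue
        word = random.choice([c for c in COLORS if c != ink])
    else:                             # neutral control: a string unrelated to color
        word = "XXXX"
    return {"display": word.upper(), "ink": ink, "condition": condition}

# A small randomized block with equal numbers of each trial type.
trials = [make_trial(c) for c in ["congruent", "incongruent", "neutral"] * 8]
random.shuffle(trials)

for t in trials[:3]:
    print(f"Name the ink color of {t['display']} (printed in {t['ink']})")
```

The prediction, on the automaticity account, is that naming times will be slowest on incongruent trials, because the word is read automatically whether the subject intends to read it or not.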

 

 

Posner & Snyder (1975a, 1975b) contrasted automatic and strategic processing. Automatic processes:

  • occur without intention, 
  • do not give rise to conscious awareness, and 
  • do not interfere with other ongoing cognitive activity.
Schneider & Shiffrin (1977; Shiffrin & Schneider, 1977; Shiffrin & Schneider, 1984) distinguished between automatic and controlled information processing.  Their experiments compared performance on "easy", highly practiced memory-scanning tasks with performance on "harder", novel ones.  Automatic processes entail perception through routines stored in long-term memory, which require no attentional effort or cognitive capacity.  Controlled processing entails novel sequences of processing steps that have not previously been stored in memory, and which require attentional effort or cognitive capacity to be carried out deliberately.

Hasher & Zacks (1979, 1984) distinguished between automatic and effortful processing.   Like Posner & Snyder and Schneider & Shiffrin, they postulated that

  • automatic processes are executed independently of intentions,
  • create no interference with other ongoing processes, and
  • are unaffected by stress or arousal.
But Hasher & Zacks also postulated other, novel features of automaticity:
  • information encoded automatically is no different from the same information encoded effortfully;
  • automatic processes do not improve with training or feedback; 
  • there are no individual differences in automatic processing, so long as the subject is neurologically intact; 
  • once acquired, automatic processes are invariant across age.
Hasher & Zacks listed frequency and spatial location as attributes of events which are encoded automatically.

Schneider & Shiffrin argued that while processing specific pieces of information could be automatized, generalized skills could not.  In contrast, Spelke, Hirst, & Neisser (1976) demonstrated that a generalized skill (taking dictation while reading at a high level of comprehension) could be automatized, given sufficient practice.  In their experiment, subjects (actually paid as work-study students!) performed a divided-attention task in which they read prose passages and simultaneously wrote down dictated words.  The subjects practiced this task for 17 weeks, 5 sessions per week -- with every session employing different stories and lists.  The result was that reading speed progressively improved, with no decline in comprehension.  Apparently, the subjects automatized the dictation process, so that it no longer interfered with reading for comprehension.

But what about the dictated lists?  Initially, the subjects had very poor memory for the words presented on the dictation task.  On later trials, they showed better memory for the dictation list, and were able to report when the list items contained rhymes or sentences.  They also showed integration errors, such that they remembered lists like "The rope broke / Spot got free / Father chased him" as "Spot's rope broke".  Integration errors indicate that the subjects processed the meaning of the individual sentences, and the relations among the sentences, automatically -- ordinarily this would be considered a very complex task.

Although Spelke et al. wished to cast doubt on the whole notion of attention as a fixed capacity, they also expanded the boundaries of automaticity by showing that even highly complex skilled performance could be automatized, given enough practice.

The general notion of automaticity has attracted a great deal of interest, based on the widely shared belief that there are cognitive processes which generally share the following properties:

  • inevitable evocation
  • incorrigible execution
  • effortlessness
  • lack of interference
Automatic processes have two sources: some are innately automatic, and others become automatized through extensive practice.  Whatever their origins, automatic processes are unconscious in the strict sense of the term, in that they operate outside phenomenal awareness and voluntary control.  They are not subject to introspection, and can be known only by inference from human performance.


The Process-Dissociation Procedure

Most work on automaticity assumes that there are two categories of tasks (and, correspondingly, two categories of underlying cognitive processes), automatic and controlled. An alternative view, proposed by Larry Jacoby, is that every task has both automatic and controlled components to it, in varying degree.  Jacoby has developed a process-dissociation procedure, based in turn on the method of opposition, to determine the extent to which performance reflects the operation of controlled and automatic processes.

In a typical example of the method of opposition, subjects might first study a list of words, such as density.  Then they might receive a stem-completion test, in which they are presented with three-letter stems, and asked to complete these stems with the first word that comes to mind.  Some of the stems are "targets" drawn from items of the "old" study list, such as den____.  Other items are new, unstudied "lures" such as nec____.  A typical finding is that subjects are more likely to complete old stems with items from the study list, such as density (as opposed to dentist).  This is known as a priming effect.  Priming is often held to be an automatic consequence of list presentation. 

In order to study the nature of this priming effect, Jacoby contrasts stem-completion performance under two different experimental conditions.

  • In the Inclusion condition, subjects are asked to complete each stem with an item from the previously studied wordlist -- or, failing that, with the first word that comes to mind.  One way to do this is to deliberately retrieve old, studied targets from memory: density.  Alternatively, old, studied targets might appear as stem completions by virtue of automatic priming effects.
  • In the Exclusion condition, by contrast, the subjects are instructed to complete each stem with any legal word except one from the study list.  In order to perform this task properly, subjects have to consciously recognize a stem as "old".  But because of automatic priming, some old stems might be completed with targets anyway, because they slip through unnoticed.  If subjects are doing their best to exclude consciously remembered targets, the only way that targets can appear in the exclusion condition is by virtue of automatic priming.
The process-dissociation procedure is a set of mathematical formulas that allow task performance to be decomposed into its automatic and controlled components.

To begin with, Jacoby assumes that targets can be generated on the Inclusion task either deliberately or automatically.  Thus, 

Inc      = C + A(1 - C), where

Inc      = the proportion of targets produced on the inclusion task;

C        = the proportion of targets produced through conscious retrieval;

A        = the proportion of targets produced through automatic priming; and

(1 - C) = the proportion of targets not consciously retrieved.

Notice that Jacoby assumes that any single item is either consciously or automatically generated.  Thus, the pool of items available for automatic generation is 1 - C, or the proportion of items not generated through conscious retrieval.  

Because not all items available for automatic generation are actually generated automatically, the value (1 - C) has to be multiplied by the strength of the automatic process, A.

Similarly, Jacoby assumes that targets can appear on the Exclusion task only by virtue of automatic priming.  Thus,

Exc    = A(1 - C).

By simple algebra, then,

C       = Inc - Exc; this is the estimate of the conscious component of performance.

And

A      = Exc / (1 - C); this is the estimate of the automatic component of performance.
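To see the arithmetic at work, here is a minimal sketch in Python of these two estimating equations; the function name is mine, and the sample numbers are the Old-group percentages from the aging study described below, converted to proportions.

```python
def process_dissociation(inclusion, exclusion):
    """Estimate the controlled (C) and automatic (A) components of performance
    from Inclusion and Exclusion rates, assuming the two are independent."""
    C = inclusion - exclusion   # C = Inc - Exc
    A = exclusion / (1 - C)     # A = Exc / (1 - C)
    return C, A

# Old subjects: 55% targets under Inclusion, 39% under Exclusion.
C, A = process_dissociation(0.55, 0.39)
print(f"Controlled = {C:.2f}, Automatic = {A:.2f}")
# Controlled = 0.16, Automatic = 0.46
```

Note that the independence assumption does real work here: if automatic and controlled processes were not independent, the simple subtraction and division would not recover the two components.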

 

To see how the process-dissociation procedure works in practice, consider the matter of age differences in memory.  It is widely known that old people have poorer memories than the young.  Moreover, it turns out that age differences in memory are greatest on tests of free recall; recognition testing often abolishes the age difference entirely.

 


The question is why this is so, and there are lots of theories.  One, based on Mandler's (1980) two-process theory of recognition, is that recall requires active retrieval of trace information from memory.  By contrast, recognition can be mediated by two quite different processes:

  • retrieval, in which an item is consciously remembered, as in recall;
  • familiarity, in which a test item simply "rings a bell", or seems familiar.
In Mandler's theory, retrieval requires deliberate, conscious recollection.  But familiarity is the product of an automatic process closely related to priming.  

One explanation of the age difference in memory, then, is that aging impairs retrieval but spares familiarity.  This will produce a decrement in recall among the elderly, but not in recognition -- not so long as the elderly rely on the automatic familiarity process, anyway.  

To determine whether age differences in memory reflected age differences in the controlled or automatic components of memory processing, Jacoby and his colleagues gave young and old subjects the stem-completion task described above, under both Inclusion and Exclusion conditions.  In the Inclusion condition, the subjects were instructed to complete each stem with an item from the studied wordlist -- or, failing that, with the first word that came to mind.  In the Exclusion condition, they were instructed to complete each stem with the first word that came to mind, provided that it was not an item from the studied wordlist.


  • Young subjects produced 76% of target items on the Inclusion task, but only 26% of targets on the Exclusion task.  
  • Old subjects, by contrast, produced fewer targets (55%) on the Inclusion task, and more targets (39%) on the Exclusion task.
In other words, the elderly performed more poorly than the young in two different ways:
  • they failed to consciously recall items in the Inclusion condition;
  • they failed to consciously exclude items in the Exclusion condition.
Transforming the percentages into proportions, and applying the formulas above, we get the following estimates of controlled and automatic processing for young and old subjects, respectively:

 

 

Group Controlled Automatic
Young .44 .46
Old .16 .46

 

In other words, the age difference in memory performance is due entirely to the conscious, strategic component (where the old have lower values than the young).  There are no age differences in the automatic component (as Hasher & Zacks would predict).



The process-dissociation procedure has been challenged in terms of its underlying assumptions -- for example, if automatic processes are components of controlled processes, then automatic and controlled processes are not strictly independent of each other, as Jacoby's formulas assume.  However, the process-dissociation procedure has been widely embraced as a means for isolating, and measuring, the automatic and controlled components of performance on any task, whether nonsocial or social in nature.


Automaticity in Social Cognition and Behavior

Traditionally, automaticity has been studied in the relatively sterile confines of the cognitive laboratory, but automaticity can also be observed in the "real world".  Most such observations fall into the province of social psychology, which from a cognitive point of view is the study of mind in action: how percepts, memories, and beliefs translate into actual interpersonal behavior. 

In its early days, cognitive social psychology tacitly focused on deliberate, conscious thought, as represented by such topics as impression formation (person perception), attribution theory, person memory, impression management (strategic self-presentation), and social exchange.  Partly as a reaction to what seemed (to some) to be a cold, ultra-rational view of social interaction, some social psychologists have begun to offer theoretical alternatives. 

  • For example, some, like Robert Zajonc, have argued that emotional behavior is essentially independent of cognition -- thus, not incidentally, laying the foundation for the emergence of affective neuroscience as a field of inquiry separate from cognitive neuroscience.
  • Others, many others, have argued that automatic processes play a major, perhaps dominant role in social behavior (e.g., Richard Nisbett & Timothy Wilson; Leonard Berkowitz; Patricia Devine; John Bargh; and Daniel Wegner). Devine's research on automatic priming of racial stereotypes was discussed in the lecture supplements on Social Categorization.
  • The notion of automaticity also underlies the Implicit Association Test, a procedure intended to elicit unconscious, automatic attitudes and other associations -- also discussed in the lectures on Social Categorization.

An early illustration of automaticity concerned behavior at the photocopy machine (Langer et al., 1978).  Subjects making photocopies tended to allow an experimental confederate to interrupt their work, even if the confederate failed to give a good reason.  They just automatically complied with a social request.



The current emphasis on automaticity in social behavior is, actually, a revival of a traditional, precognitive position in social psychology which held that social thought and action is constrained by environmental influences -- a situationism exemplified by Stanley Milgram's studies of obedience to authority, and Stanley Schachter's experiments on emotion.  (I think that it is not a coincidence that Langer was a Milgram student).

Within cognitive psychology, it has now become commonplace to assume that many aspects of human performance are mediated by some combination of automatic and controlled processes.  The same assumption took hold in social psychology as well, giving rise to a number of "dual-process" theories of various aspects of personality and social interaction.



An excellent example of a dual-process theory is the "two systems" framework in judgment and decision-making proposed by Daniel Kahneman, recipient of the Nobel Prize in economics (and former UCB psychology professor).  "System 1" is automatic, fast, and unconscious, and is involved in "heuristic" reasoning and aspects of "hot" cognition involving emotion, stereotypes, prejudice, and the like.  "System 2" is controlled, slow, and conscious; it is the "algorithmic" reasoning of "cold" cognition, logical and systematic.  Many tasks set the two systems on a race to task-completion.  But because System 1 is faster than System 2, it tends to win out.


Similarly, some social psychologists have begun to assert that social cognition and behavior are dominated by automatic, unconscious processes, such that controlled, conscious processes play little role in behavior.


The upshot of all this has been the emergence of what I have called the automaticity juggernaut (Kihlstrom, 2008) -- the wholesale embrace, by a large number of social psychologists, of the following propositions.

  • Human social behavior is largely automatized. 
    • Our conscious percepts, thoughts, goals, and feelings are mostly irrelevant to what we do.
    • Rather, our experience, thought, and action is automatically triggered by preconscious analyses of the situation.
  • Consciousness is an afterthought. 
    • The role of consciousness is to give plausible or acceptable reasons for behavior that is actually triggered automatically and unconsciously.
  • Not to put too fine a point on it, we are zombies after all.   
    • Not because zombies are conscious too, as some philosophers have argued.
    • But because consciousness is epiphenomenal.
While it may be disconcerting to learn that our social cognition and behavior is largely reflexive, closer to a conditioned response than anything else, Bargh has reassured us that it is in the nature of science to dethrone human nature.  At the turn of the 20th century, Freud pointed out that just as Copernicus informed us that the Earth is not the center of the universe, and Darwin informed us that Man is just another animal, he (Freud) informed us that Man is fundamentally irrational.  Bargh seeks to correct Freud: Man is not necessarily irrational, but our behavior is largely reflexive.  We are automatons after all.



Carl Sagan's "Great Demotions"

Freud wasn't the first to deflate the conceit that humans have a special place in nature, and he wasn't the last.  Darwin did it before him, and Carl Sagan -- he of the original Cosmos television series, and of its "billions and billions" of galaxies -- did it afterward.  Here's Sagan's take, from the sequel to Cosmos, entitled Pale Blue Dot: A Vision of the Human Future in Space (1994).
  • We do not live in a privileged reference frame.
    • Our planet is not the center of the Universe. 
    • Nor is our Sun.
    • Nor is our Milky Way galaxy.
    • The Sun is not the only star with planets.
    • Earth has not existed since the beginning of the Universe. 
      • If modern cosmologists are to be believed, our Universe isn't even the only Universe!
  • We are not the product of a special creation. 
    • We are not different in kind from other animals.
  • We are probably not the only intelligent beings in the Universe.
  • "The principle of mediocrity seems to apply to all our circumstances."



Getting the Juggernaut Started

The concept of automaticity was an important advance in cognitive theory, as it offered a resolution of the dispute between early- and late-selection theories of attention (Pashler, 1998). According to the early-selection view, pre-attentive, preconscious processing was limited to analyses of the physical features of a stimulus; in theory, analysis of meaning required the conscious deployment of attention. According to the late-selection view, even meaning analyses were conducted preattentively. Automaticity theory permitted complex, semantic analyses to be carried out preattentively, and thus preconsciously, so long as they were automatized -- for example, through extensive practice. In later developments, automaticity became detached from attention theory, and was re-interpreted in terms of memory (J. R. Anderson, 1992; G. D. Logan, 1988). In addition, cognitive psychologists began to develop experimental paradigms, such as the process-dissociation procedure (L. L. Jacoby, 1991), by which they could estimate the contributions of automatic and controlled processes to task performance.

Following its embrace by cognitive psychology, the concept of automaticity quickly spread to other domains, particularly personality and social psychology. For example, Nisbett and Wilson (1977) clearly had automaticity in mind when they argued that we are consciously aware of the contents of our minds, such as beliefs and attitudes, but unaware of the processes that generated those contents: "We have no direct access to higher-order mental processes such as those involved in evaluation, judgment, problem solving, and the initiation of behavior."

Similarly, Langer asserted that most social interactions are unreflective and mindless, following highly learned, habitual scripts that require very little conscious attention and deliberation:

"[M]indlessness may indeed be the most common mode of social interaction" (E. Langer, Blank, & Chanowitz, 1978).

"Unless forced to engage in conscious thought, one prefers the mode of interacting with one's environment in a state of relative mindlessness.... This may be the case, because thinking is effortful and often just not necessary" (E. J. Langer, 1978).

Along these lines, Taylor and Fiske (1978) argued that people are "cognitive misers" laboring under limited cognitive capacity, and preferring "top of the head" judgments to reasoned, thoughtful appraisals. Smith and Miller (1978) were perhaps the first to explicitly invoke the concept of automaticity, as it was then emerging in cognitive psychology, in a commentary on the Nisbett/Wilson paper. From their point of view, limitations on introspective access occurred because salient social stimuli are processed, and responded to, automatically.

Thereafter, a number of social psychologists explicitly referred to the concept of automaticity in designing and interpreting experiments on attitudes and social judgments.  For example, Higgins (1981) distinguished between two sources of automatic priming effects on social judgments, chronic and temporary.  Bargh (1982) showed that presentation of self-relevant adjectives over the unattended channel in a dichotic listening task could disrupt shadowing performance, after the manner of the "cocktail-party phenomenon"; and that parafoveal presentation of hostile trait adjectives could bias interpretation of the "Donald story" used in studies of impression formation and person memory (Bargh & Pietromonaco, 1982).  By the end of the 1980s, the concept of automaticity had been applied across a large number of domains in personality and social psychology, including prejudice, the self-concept, emotion, trait ascriptions, and ruminative thought.  A landmark volume edited by Uleman and Bargh (1989) contained chapters detailing the role of automatic, unintended thoughts in a variety of domains, including the activation of self-beliefs and ruminations in anxiety and depression; the influence of feelings on thought and behavior; the ascription of personality traits and the formation of characterological impressions; heuristic information processing in persuasion; and ironic rebound effects.


The Automaticity of Everyday Life

For example, John Bargh (1997a) famously argued that:

"As Skinner argued so pointedly, the more we know about the situational causes of psychological phenomena, the less need we have for postulating internal conscious mediating processes to explain these phenomena."

To illustrate Bargh's position, consider an experiment by Bargh et al. (1996).  In the experiment, subjects performed a "scrambled sentences" task in which they were given a jumble of words and asked to arrange them into complete sentences.

  • In the Rude condition, many of the words were semantically related to "rudeness", such as aggressively, rude, bother, disturb, and intrude.
  • In the Polite condition, many of the words were semantically related to "politeness", such as respect, honor, considerate, appreciate, and patiently.
  • In the Neutral condition, none of the words were related to either rudeness or politeness: examples include exercising, flawlessly, occasionally, rapidly, and gleefully.

The purpose of this cover task was to prime the concepts of rudeness and politeness.
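As a concrete illustration of the manipulation, here is a minimal sketch of how such scrambled-word items might be generated.  The three prime-word lists are the ones named above; the filler words and the item format are invented for the example, not taken from Bargh et al.'s materials.

```python
import random

# Condition-relevant prime words, as described above.
PRIME_WORDS = {
    "rude":    ["aggressively", "rude", "bother", "disturb", "intrude"],
    "polite":  ["respect", "honor", "considerate", "appreciate", "patiently"],
    "neutral": ["exercising", "flawlessly", "occasionally", "rapidly", "gleefully"],
}

# Hypothetical filler words; in the real task, each item's words can be
# rearranged into a grammatical sentence.
FILLERS = ["he", "them", "sees", "usually"]

def make_item(condition):
    """Build one scrambled-word item containing a condition-relevant prime."""
    words = FILLERS + [random.choice(PRIME_WORDS[condition])]
    random.shuffle(words)
    return " ".join(words)

for cond in PRIME_WORDS:
    print(f"{cond:>8}: {make_item(cond)}")
```

The point of the construction is that the prime words are incidental to the subject's task (forming sentences), so any downstream effect on behavior cannot be attributed to conscious deliberation about rudeness or politeness.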

At the end of the experiment, the subjects emerged from the lab room to find the experimenter engaged in a conversation with another person, who was actually an experimental confederate.  The conversation continued for up to 10 minutes, during which the experimenter assiduously ignored the waiting subject.  The main result was that subjects in the Rude condition were more likely to interrupt the experimenter than were subjects in the Polite condition, with subjects in the Neutral condition falling somewhere in between.

 

Bargh's interpretation of this experiment is that reading "rude" words automatically primed the subjects to interpret the experimenter's behavior as rude, making them more likely to behave rudely in turn and interrupt his conversation.


In Bargh's view, most social behavior is preattentive and automatic in nature.  It occurs in response to an environmental trigger, in a manner analogous to priming, independent of the person's conscious intentions, beliefs, attitudes, and choices, and also independent of the person's deployment of attention. 

For Bargh, automaticity in social behavior begins with a preconscious analysis of the situation. 

  • Environmental or social-influence effects on perception and evaluation are independent of conscious processing.
  • In social perception, the inference of personality traits from behavioral observation, stereotyping, and the processing of self-referential information all occur preconsciously.
  • At the perception-behavior interface, a principle of ideomotor action means that thoughts about a particular behavior will automatically lead to the behavior itself.  Thus, stereotyped social perceptions will lead to hostility and prejudice.
  • Evaluation effects such as the mere exposure effect, and particularly the "subliminal" mere exposure effect, occur automatically.
  • Attitudes are automatically activated by the mere presence of the corresponding object -- at least if the attitude in question is a strong one.  And evaluations can be primed by prior encounters with positively or negatively valenced stimuli.
  • At the evaluation-behavior interface, again, positive or negative evaluations automatically trigger approach or avoidance behavior.
Bargh's auto-motive model of social behavior contradicts the traditional model of automatic skill execution, in which we consciously select our intended behaviors, and then those behaviors are automatically executed.  This is because Bargh also automates the selection phase, so that environmental stimuli can activate choices preconsciously.  In the auto-motive model, goals and motives can be automatically invoked by environmental events -- and, once activated, operate automatically outside conscious awareness and conscious control.

In his analysis of social behavior, Bargh advocates social ignition over social cognition.  In his view, automaticity pervades everyday life, and conscious awareness is largely an after-the-fact rationalization of what we have thought and done.  He regards the earlier (if tacit) emphasis on consciousness in social behavior as a holdover from an earlier embrace of serial processing in models of attention and cognition.  But now, he asserts, cognitive psychology emphasizes parallel processing -- as in McClelland and Rumelhart's connectionist "parallel distributed processing" models of cognition.  Bargh is a cognitive social psychologist, but he no longer equates cognition with consciousness:

  • Conscious cognition doesn't necessarily mediate emotion, motivation, or behavior.
  • Emotional and motivational processes operate in parallel with, and independently of, cognitive processes.
  • Most cognitive processing is preconscious anyway. 
So does any role remain for consciousness?  Clearly Bargh thinks there's not much.  In one paper he evokes the image of "consciousness riding into the sunset".  Of course, when cowboys rode off into the sunset, it was after they killed the outlaw and kissed the girl.  But that's not Bargh's scenario.  In his view, consciousness is not necessary for evaluation, judgment, or behavior.
  • Consciousness may be necessary for the development of preconscious, automatic processes -- which, as in the memory-based models of automaticity, start out as conscious and deliberate and end up automatized through practice.
  • Consciousness may be necessary for behavior in non-routine situations -- but that just begs the question of how much behavior is actually non-routine. 
  • Consciousness is necessary for the correction of "bad habits".  Because conscious awareness logically precedes conscious control, we must become aware of these bad habits before we can correct them; but because bad habits run off automatically, such awareness is not easy to achieve.
  • And consciousness is necessary for the inhibition of automatic behavior.  There is lots of mental processing, all going on automatically, and in parallel, all the time; but people must deal with objects in the real world one at a time.  So, as Bargh puts it, consciousness may "connect a parallel mind with a serial world". 
But for Bargh, the important determinants of behavior operate automatically and unconsciously, independent of phenomenal awareness and free of conscious control.


The Automaticity Juggernaut Gains Momentum

After 1989, the concept of automaticity proliferated rapidly through personality and social psychology (Bargh, 1994).  A PsycInfo search reveals that prior to 1975, the terms automatic or automaticity had appeared in the abstracts of only 29 articles published in personality and social psychology journals -- and most of these had to do with automatic writing and other aspects of spiritualism.  Another six were added by 1980; in the 1980s, there were 40 such articles; and in the 1990s, 115 (for comprehensive coverage of these studies, see D.M. Wegner & Bargh, 1998).  By 2006, the new millennium had added more than 181 new papers -- a geometric increase in interest in automaticity, as opposed to the almost perfectly linear increase in the total number of articles published over the same span of time.

Of course, the concept of automaticity gained popularity in its home territory of cognitive psychology, as well -- but with a difference.  Cognitive psychologists have maintained a distinction between automatic and controlled processes, and have spent a great deal of effort in assessing their differential contributions to task performance -- as in the process-dissociation paradigm (e.g., L. L. Jacoby, 1991).  At first, social psychologists followed suit, resulting in a number of "dual-process" theories of attitudes, persuasion, and the like, which described the interplay between automatic and controlled processes (e.g., Chaiken & Trope, 1999).  Fairly quickly, however, this balanced perspective began to be replaced by a more single-minded focus on automaticity.  For example, Gilbert (1989) argued for the benefits of "thinking lightly about others".  And Bargh (2000, p. 938) argued that even intentionally controlled behavior was ultimately automatic in nature, "controlled and determined" by "automatically operating processes".  Thus, rather than taking a balanced view of the differential roles of automatic and controlled processing in social interaction, some social psychologists seem to have embraced a view of social thought and action as almost exclusively automatic in nature.

This evolutionary development can be clearly seen in the work of John Bargh, who has been one of the foremost proponents of the concept of automaticity within social psychology. In 1984, writing on "The Limits of Automaticity", Bargh was critical of Langer's position that social interaction proceeded mindlessly:

"A better summary of the mindlessness studies would be that... when people exert little conscious effort in examining their environment they are at the mercy of automatically-produced (sic) interpretations..... Automatic effects are... typically limited to the perceptual stage of processing. There is no evidence... that social behavior is often, or even sometimes, automatically determined (Bargh, 1984, pp. 35-36).

But only five years later, his position had shifted considerably, as in the editorial introduction to Unintended Thought:

"As most social psychological models implicitly assumed the role of deliberate, calculated, conscious, and intentional thought, the degree to which unintended [automatic] thought did occur in naturalistic social settings became of critical importance.... Langer (1978) emphatically rejected the assumption of deliberate, conscious, thought as typically underlying social behavior.... Our own research programs have followed in this tradition..." (Bargh & Uleman, 1989, pp. xiv-xv).

And in his own contribution to that volume:

Is this to say that one is usually not in control of one's own judgments and behavior? If by "control" over responses is meant the ability to override preconsciously suggested choices, then the answer is that one can exert such control in most cases.... But if by "control" is meant the actual exercise of that ability, then the question remains open.... My own hunch is that control over automatic processes is not usually exercised.... [I]t would appear that only the illusion of full control is possible, as the actual formation of a judgment or decision.... A fitting metaphor for the influence of automatic input on judgment, decisions, and behavior is that of the ambitious royal advisor upon whom a relatively weak king relies heavily for wisdom and guidance (pp. 39-40).

Only one year later, Bargh took a further step, asserting that automaticity pervades the information processing system, such that automatically evoked mental representations automatically generate corresponding motives, which in turn automatically generate corresponding behaviors (Bargh, 1990; Bargh & Gollwitzer, 1994). Thus, merely reading words related to rudeness or politeness can affect whether a subject will interrupt the experimenter's conversation, while reading words related to the elderly stereotype will lead subjects to walk more slowly down the hall (Bargh, Chen, & Burrows, 1996) (see also Ferguson & Bargh, 2004).

In a chapter describing "The Automaticity of Everyday Life" (1997), Bargh continued to expand the role of automatic processes:

"[T]he more we know about the situational causes of psychological phenomena, the less need we have for postulating internal conscious mediating processes to explain these phenomena.... [I]t is hard to escape the forecast that as knowledge progresses regarding psychological phenomena, there will be less of a role played by free will or conscious choice in accounting for them.... That trend has already begun..., and it can do nothing but continue (Bargh, 1997a, p. 1).

Later in the same chapter, Bargh asked, "Is Consciousness Riding into the Sunset?":

"Automaticity pervades everyday life, playing an important role in creating the psychological situation from which subjective experience and subsequent conscious and intentional processes originate... (p. 50).

Actually, in the typical Western, the hero rides into the sunset only after rescuing the sheriff, vanquishing the villain, and kissing the girl -- a pretty good situation. The image Bargh really seems to have in mind is of the sun setting on consciousness -- or, perhaps, consciousness on an ice floe, like the elderly Eskimo, floating out to sea. But just in case the reader missed the message, Bargh quickly repeats it:

"I emphatically push the point that automatic, nonconscious processes pervade all aspects of mental and social life, in order to overcome what I consider dominant, even implicit, assumptions to the contrary (p. 52).

In response to criticism that he might have overestimated the role of automatic processes in social interaction, Bargh (1997b) initially conceded that his "insinuation" that "conscious involvement is... entirely absent" from social interaction might have been "more tactical than sincere" (p. 231). Nevertheless, at the end of that same paper, he reasserted the overwhelming dominance of unconscious automaticity over conscious control:

Bloodied but unbowed, I gamely concede that the commentators did push me back from a position of 100% automaticity -- but only to an Ivory soap bar degree of purity in my beliefs about the degree of automaticity in our psychological reactions from moment to moment (p. 246).

For those who are too young to get the reference, the implication is that social cognition and behavior is 99.44% automatic.

Thus, it is no surprise that Bargh has continued to assert "The Unbearable Automaticity of Being":




[M]ost of a person's everyday life is determined not by their conscious intentions and deliberate choices but by mental processes that are put into motion by features of the environment and that operate outside of conscious awareness and guidance (Bargh & Chartrand, 1999, p. 462).

A 2006 summary of Bargh's view was simply entitled "The Automaticity of Social Life" (Bargh & Williams, 2006) -- not the more modest "Automaticity in Social Life", which might be appropriate if automaticity were just one aspect of what goes on, but rather the sweeping implication that social life is automatic.  Our impressions to the contrary are apparently illusions of control based on the high memorability of those occasions, roughly 0.56% of the total, when we actually have it and exercise it.

In a 2012 paper, Bargh reasserted that two forms of automaticity pervade social cognition.





At the same time, Bargh seemed to admit to a softening of his views, allowing that social-psychological processes involve "a complex interplay between both controlled (conscious) and automatic processes" (p. 601).



But a 2013 paper by Huang and Bargh reasserted the automaticity principle, beginning with doubts about conscious control, and asserting the power of situational influences and the limits of introspective access.  While acknowledging that "dual-process" models allow for conscious control as well as automaticity, they argued that unconscious processes are the predominant influence on how a person perceives the world and how that person behaves in response.

And a 2014 paper seemed to contain a full-throated reassertion of the power of automatic processes.





Jumping on the Juggernaut

Bargh is not alone in believing that automatic processes dominate experience, thought, and action, and in relegating deliberate, conscious activity to the margins.

For example, Wegner and Schneider (1989) described a "war of the ghosts in the mind's machine" between automatic and controlled processes; they also suggested that the former tends to win out over the latter:

"When we want to brush our teeth or hop on one foot, we can usually do so; when we want to control our minds, we may find that nothing works as it should.... Even William James, that champion of all things mental, warned that consciousness has the potential to make psychology no more than a tumbling-ground for whimsies" (p. 288).

So great was their enthusiasm for unconscious, automatic processes that these authors actually misquoted James. Here he is in full, criticizing von Hartmann (1868/1931) precisely for taking the position advocated by Wegner and Schneider -- that unconscious processes rule the universe:

"[T]he distinction between the unconscious and the conscious being of the mental state is the sovereign means for believing what one likes in psychology, and of turning what might become a science into a tumbling-ground for whimsies (James, 1890/1980, p. 163, emphasis original).

Given that this passage occurs in the context of James' 10-point critique of the notion of unconscious thought, it is clear that James considered unconscious processes, not conscious ones, to be the "tumbling-ground for whimsies".

Nevertheless, Wegner published a book entitled The Illusion of Conscious Will, whose argument he summarized as follows:

"[T]he real causal mechanisms underlying behavior are never present in consciousness. Rather, the engines of causation operate without revealing themselves to us and so may be unconscious mechanisms of mind. Much of the recent research suggesting a fundamental role for automatic processes in everyday behavior (Bargh, 1997) can be understood in this light. The real causes of human action are unconscious, so it is not surprising that behavior could often arise -- as in automaticity experiments -- without the person's having conscious insight into its causation" (D.M. Wegner, 2002, p. 97) (see also D.M. Wegner & Wheatley, 1999).

Wegner's book included a diagram depicting an "actual causal path" between the "unconscious cause of thought" and "thought", and another between the "unconscious cause of action" and "action", but only an "apparent causal path" between thought and action.

Similarly, Wilson has suggested that conscious processing may be maladaptive because it interferes with unconscious processes that are more closely tuned to the actual state of affairs in the outside world:

"...Freud's view of the unconscious was far too limited. When he said... that consciousness is the tip of the mental iceberg, he was short of the mark by quite a bit -- it may be more the size of a snowball on top of that iceberg. The mind operates most efficiently by relegating a good deal of high-level, sophisticated thinking to the unconscious.... The adaptive unconscious does an excellent job of sizing up the world, warning people of danger, setting goals, and initiating action in a sophisticated and efficient manner. It is a necessary and extensive part of a highly efficient mind (2002, pp. 6-7) (for a critique, see Kihlstrom, 2004b).

The automaticity juggernaut has ranged well beyond the confines of academic psychology. Summarizing much of this research and theory, Sandra Blakeslee, a science correspondent for the New York Times, informed her readers that "in navigating the world and deciding what is rewarding, humans are closer to zombies than sentient beings much of the time" (February 19, 2002). More recently, and drawing largely on Gilbert's and Wilson's work, Malcolm Gladwell, a staff writer for the New Yorker, has written a trade book, Blink, touting the virtues of "thinking without thinking" (Gladwell, 2005).

"The part of our brain that leaps to conclusions... is called the adaptive unconscious, and the study of this kind of decision making is one of the most important new fields in psychology. The adaptive unconscious is not to be confused with the unconscious described by Sigmund Freud, which was a dark and murky place filled with desires and memories and fantasies that were too disturbing for us to think about consciously. This new notion of the adaptive unconscious is thought of, instead, as a kind of giant computer that quickly and quietly processes a lot of the data we need in order to keep functioning as human beings (p. 11).

As this chapter was being finished, Gladwell's book had been on the New York Times non-fiction best-seller list for almost 18 months, attesting to the popularity of the concept of automaticity. It has also drawn a stern retort from Michael LeGault, entitled Think!: Why Crucial Decisions Can't Be Made in the Blink of an Eye:

"Predictably, as if filling a growing market niche, a new-age, feel-good pop psychology/philosophy has sprung up to bolster the view that understanding gleaned from logic and critical analysis is not all that it's cracked up to be.... In Blink, Mr. Gladwell argues that our minds possess a subconscious power to take in large amounts of information and sensory data and correctly size up a situation, solve a problem, and so on, without the heavy, imposing hand of formal thought (p. 8).

Gladwell's book has also inspired a parody from the pseudonymous Noah Tall, entitled Blank: The Power of Not Actually Thinking At All:

The part of our brain that leaps to conclusions that are reached without any thinking involved is called the leapative concluder or, in some circles, the concussive unconscious, because the unexpected hunches that suddenly slam into the brain of those who are receptive to unexpected hunches often feel exactly like being hit on the head by a heavy iron frying pan with a nonstick cooking surface.... The only reason humans have survived as long as we have despite our forgetfulness, laziness, and downright stupidity is because that tiny frying pan in our head hits us upside the unconscious when our conscious is goofing off (Tall, 2006, pp. 7-8).

 

The Third-and-a-Half Discontinuity?

Experimental evidence indicates that automatic processes play some role, under some conditions, in social cognition and behavior. On this much we can agree. But what might be called the Doctrine of Automaticity goes way beyond such restricted conclusions to assert that automatic processes pervade human experience, thought, and action; conscious awareness is largely an afterthought; and conscious control is an illusion. Humans are, in this view, a special class of zombies, virtual automatons who are conscious, as La Mettrie had argued, but for whom consciousness plays little or no functional role in thought and action. The purpose of consciousness is to erect personal theories about why things happen as they do, and why we do what we do. But, on this view, consciousness is largely irrelevant to what actually goes on. Bargh puts the point concisely:

"As Skinner argued so pointedly, the more we know about the situational causes of psychological phenomena, the less need we have for postulating internal conscious mediating processes to explain these phenomena (Bargh, 1997a p. 1).

Of course, the progress of science will by its very nature correct popular misunderstandings of how the world works, and occasionally reveal surprising, even unpleasant, truths about ourselves. Sigmund Freud famously situated himself in line with Copernicus, who taught us that Earth is not at the center of the universe, and Darwin, who taught us that humans are creatures of nature just like any other. For Freud, the third blow against "human megalomania" was his discovery (as he claimed it was) that conscious experience, thought, and action were determined by unconscious, primitive drives:

[H]uman megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind (Freud, 1915-1917/1961-1963, p. 285) (see also Bruner, 1958).

Bargh has explicitly situated himself in this line of scientific progress, substituting for Freud's irrational "monsters from the Id" a view of humans as operating -- whether rationally or not -- mostly on automatic pilot, uninfluenced by conscious deliberation: "[W]e are not as conscious, or as free, as we thought we were" (Bargh, 1997a, p. 52). Henceforth, we must live with "the unbearable automaticity of being" (Bargh & Chartrand, 1999).



Like Bargh, Wegner and Smart (1997) also replaced Freud's third discontinuity, substituting automaticity for irrationality. For the record, there also seems to be a fourth discontinuity, between humans and machines, which some visionaries, like Mazlish (1993) and Kurzweil (1999), see as being erased by advances in artificial intelligence. Of course, the idea that humans are simply machines -- if machines made of meat -- is entirely consonant with the idea that human experience, thought, and action are the product of unconscious processes operating automatically.

It would be one thing if the Doctrine of Automaticity were backed by sound scientific evidence. Then we would have no choice but to shrug our shoulders, cast off our sentimental beliefs in conscious control and free will, and find some way to bear "the unbearable automaticity of being", just as we have learned to live with the knowledge that the Earth is not the center of the universe, and that humans are not the products of Special Creation. But in fact, the Doctrine of Automaticity is not true -- or, at least, it is not backed by sound scientific evidence. There are at least three reasons for thinking that the Third Discontinuity, at least the one erased by Bargh and Wegner (never mind Freud), is not quite ready to be expunged.

The first reason, paradoxically, is that the theoretical underpinnings of the concept of automaticity have begun to unravel (G.D. Logan, 1997; Moors & De Houwer, 2006; Pashler, 1998). In particular, the resource theories of attention on which the concept was originally based have come into question. For example, there does not seem to be a single pool of attentional resources. Nor does even extensive practice with a task render its performance effortless. There is even some data suggesting that attentional capacity is not limited -- or, at least, that its limits are very wide indeed. As noted earlier, alternative theories of automaticity have been proposed, based on memory rather than attention. These revisionist theories preserve the legitimacy of the concept of automaticity, but tend to undercut the various features by which automatic processes are recognized. So, for example, in Anderson's (1992) proceduralization view, automatic processes are engaged only when an appropriate cue is presented in the context of a particular goal state; and in Logan's (2002) instance-based theory, automatic processes are evoked only if the subject has the appropriate mental set. Nor, once evoked, do these processes proceed to conclusion in a ballistic fashion.

One response to this state of affairs is to abandon the assumption that the distinction between automatic and controlled processes is a qualitative, all-or-none matter; rather, it is argued, automaticity varies by degrees (Bargh, 1989, 1994). This response is fine, and almost certainly correct, but it has the unfortunate consequence of making it difficult to know precisely when a process is automatic, and when it is not. What happens, for example, if a process seems to run off unintentionally, but nevertheless consumes attentional capacity? And, of course, the concession that some tasks are performed more or less automatically undercuts the fundamental message of "the automaticity of social life" (Bargh & Williams, 2006).

Moreover, it should be noted that the shift to a continuous view of automaticity has been accompanied by a certain slippage in the operationalization of the concept in psychological experiments. For example, in his earliest research Bargh employed a dichotic listening task (Bargh, 1982) or parafoveal presentation (Bargh & Pietromonaco, 1982) in an effort to conform to a relatively strict operational definition of automaticity. Similarly, Fazio et al. (1986) and Devine (1989) employed extremely short prime-target intervals, in an attempt to prevent their subjects from employing controlled processes. But in more recent work, such strictures are often abandoned. For example, Bargh and his colleagues have presented words in subjects' clear view, and asked them to pronounce them (Bargh, Chaiken, Raymond, & Hymes, 1996), or to assemble them into sentences (J. A. Bargh et al., 1996) -- tasks that would seem to involve conscious processing. Granted, in these cases the subjects were not specifically instructed to process the relevance of the words to certain attitudes and stereotypes, thus approximating the unintentional nature of automatic processing. But this reliance on only a single feature is a considerable departure from the concept of automaticity as it was originally set out in cognitive psychology.

In fact, within social psychology the concept of automaticity seems to be invoked whenever subjects engage in processing that is incidental to the manifest task set for them by the experimenter -- whether this is shadowing text, detecting visual stimuli, pronouncing words, or assembling sentences. But just because something is done incidentally does not necessarily mean that it has been performed unintentionally, much less automatically. In many situations, subjects may have plenty of processing capacity left over, after the manifest task has been performed; and they may use some of it, quite deliberately, to perform other tasks that interest them -- such as critically analyzing the experiment's cover story, or speculating about the experimenter's true hypotheses (Orne, 1962, 1973).

Most critically, the social-psychological literature on automaticity rarely contains any actual comparison of the strength of automatic and controlled processes. These were features of some of the earliest experiments on automaticity: in the studies already described, for example, Fazio et al. (1986) and Devine (1989) employed relatively long prime-target intervals as well, in an attempt to compare the effects of automatic and controlled processing. Within cognitive psychology, there has been considerable interest in developing techniques such as the process-dissociation procedure (PDP) (L. L. Jacoby, 1991) to directly compare the contributions of automatic and controlled processes to task performance. For example, Jacoby and his colleagues (1997) showed convincingly that successful recognition was mediated mostly by controlled retrieval in young subjects, but mostly by automatic familiarity in the elderly. The PDP has its critics (e.g., Curran & Hintzman, 1995), but the point is that cognitive psychologists tend to assume that both automatic and controlled processes contribute to task performance, and try to disentangle them. By contrast, an increasingly popular view within social psychology is that automatic processes dominate, and controlled processes are largely irrelevant.



The Illusion of Conscious Will

Bargh is a leader of the automaticity movement within social psychology, but he has a large number of compatriots and followers.

Chief among these was Daniel Wegner (now deceased), who provocatively proposed that conscious will is an illusion, and plays no causal role in either thought or behavior.  Just so he won't be misunderstood, Wegner presents a diagram of the relations between conscious and unconscious thought and action.



  • There is an "actual causal path" from unconscious causes of action to actions.
  • There is an actual causal path from unconscious causes of thought to conscious thoughts.
  • There may be an actual causal path linking unconscious causes of action and unconscious causes of thought, and this path may be bidirectional.
  • But the "apparent causal path" between conscious thoughts and conscious actions is only an apparent, illusory one.  
    • The actual determinants of both thought and action are unconscious.
    • Conscious thoughts have nothing to do with conscious action, except by happenstance.

Here's what I had to say about this idea in Behavioral and Brain Sciences (2004):

In his Meditations of 1641, Descartes asserted that consciousness, including free will, sharply distinguished man from beast, and thus initiated the modern philosophical and scientific study of the mind.  As time passed, however, philosophers of a more materialist bent began denying this distinction, most visibly Julien Offray de la Mettrie, whose Man a Machine (1748) claimed that humans were conscious automata, and Shadworth Holloway Hodgson, whose The Theory of Practice (1870) introduced the term epiphenomenalism.  Although materialist monism was highly attractive to those who would make a science of psychology, William James, in his Principles of Psychology (James, 1890/1980, p. 141), dismissed "the automaton-theory" as "an unwarrantable impertinence in the present state of psychology" (italics original).

James was clearly committed to a causal role for consciousness, and thus for free will, but his statement implied a willingness to alter his view, as warranted, as psychology advanced.  And, indeed, the behaviorist revolution carried with it a resurgence in the automaton theory, reflected in Watson's emphasis on conditioned reflexes and Skinner's emphasis on stimulus control (Tolman's purposivist interpretation of learning was an exception).  On the other hand, the cognitive revolution implied an acceptance of James' functionalist view: the primary reason to be interested in beliefs, expectations, and mental representations is that they have some causal impact on what we do.  In fact, modern cognitive psychology accepts a distinction between automatic and controlled mental processes (e.g., Logan, 1997; Shiffrin & Schneider, 1984): automatic processes are inevitably evoked following the presentation of some cue, incorrigibly executed, consume little or no cognitive capacity, and are strictly unconscious; controlled processes lack these properties, and are -- although many scientific psychologists do not like to use the term -- reflections of "conscious will".

To many of us, this seems to be a perfectly reasonable compromise, but Wegner's book appears to be a reassertion of the automaton-theory in pure form.  Its very first chapter argues that "It usually seems that we consciously will our voluntary actions, but this is an illusion" (Wegner, 2002, p. 1).  Just to make his point clear, Wegner offers (Figure 3.1, p. 68) a diagram showing an "actual causal path" between an unconscious cause of action and conscious action, and another "actual causal path" between an unconscious cause of thought and conscious thought, but only an "apparent causal path" (italics original) -- the experience of conscious will -- between conscious thought and conscious action.  And he concludes with Albert Einstein's image of a self-conscious but deluded moon, blithely convinced that it is moving of its own accord.  In Wegner's view, apparently, we are conscious automata after all. 

Wegner musters a great deal of evidence to support his claim that our experiences of voluntary and involuntary action are illusory, including an entire chapter devoted to hypnosis.  In fact, Wegner goes so far as to note that "hypnosis has been implicated in many of the curiosities of will we have discussed" (p. 272).  Certainly it is true that hypnotic subjects often feel that they have lost control over their percepts, memories, and behaviors.  This quasi-automatic character of hypnotic experiences, bordering on compulsion, even has a special name: the classic suggestion effect (Weitzenhoffer, 1974).  However, I think that Wegner's interpretation of this effect is off the mark.  In my experience, hypnotized subjects do not experience a "transfer of control to someone else" (p. 271) -- namely, the hypnotist.  Rather, they typically experience the phenomena of hypnosis as happening by themselves.  This experience of involuntariness is what distinguishes a hypnotic hallucination from a simple mental image, and posthypnotic amnesia from simple thought suppression.  The experience of involuntariness is not the same as the transfer of control.  Hypnotized subjects claim their involuntary behavior as their own, even as they experience it as involuntary -- which is why it can persist when the suggestion is canceled, in contrast to behavior under the control of an experimenter's verbal reinforcement (Bowers, 1966; Bowers, 1975;  see also Nace & Orne, 1970).

Of course, this nonconscious involvement (Shor, 1959, 1962) is illusory.  As Shor (Shor, 1979) noted, "A hypnotized subject is not a will-less automaton.  The hypnotist does not crawl inside a subject's body and take control of his brain and muscles".  Even posthypnotic suggestion, the classical exemplar of hypnotic automaticity, lacks the qualities associated with the technical definition of automaticity....

Although there are a few dissenters (Kirsch & Lynn, 1997, 1998a, 1998b; Woody & Bowers, 1994; Woody & Sadler, 1998), most theorists of hypnosis, whatever their other disagreements, agree that the experience of involuntariness in response to hypnotic suggestions is in some sense illusory.... 

In fact, most of the other phenomena described at length by Wegner, such as the Chevreul pendulum, automatic writing, the Ouija board, and even facilitated communication, have this quality: behavior that is experienced by the individual as involuntary is actually voluntary in nature.  Documenting this illusion would make for an interesting book, as indeed it has (Spitz, 1997).  But Wegner puts this evidence to a different rhetorical use -- he tries to convince us, by citing examples of illusory involuntary behavior, that our experience of voluntary behavior is illusory as well.  Logically, of course, this does not follow.  Of course, there exist illusions of control as well (Alloy, Albright, Abramson, & Dykman, 1989), but even these do not justify the strong conclusion that all experiences of voluntariness are illusory. 

Given that the evidence for an illusion of voluntariness is weak, the rationale for Wegner's claim must be found elsewhere -- in theory, or perhaps in ideology.  In this respect, Wegner's book can be viewed in the context of a trend in contemporary social psychology that I have come to call the automaticity juggernaut: the widespread embrace of the view that, even with respect to complex social cognition and behavior, we are conscious automatons whose experiences, thoughts, and actions are controlled by environmental stimuli -- just like Skinner said they were (Bargh, 1997; Bargh & Chartrand, 1999; Bargh & Ferguson, 2000; Wegner & Bargh, 1998).  The idea that the experience of conscious will is illusory follows naturally from this emphasis on automaticity, which has its roots in the situationism that has infected social psychology almost from its beginnings as an experimental science (Kihlstrom, 2003).  But based on the evidence mustered by Wegner, the "illusion of conscious will" seems now, as it did to James more than a century ago, to be an "unwarrantable impertinence". 

 

The Automaticity Juggernaut -- Or, Are We All Zombies After All?

Automaticity is all the rage in contemporary social psychology, as increasing numbers of social psychologists adopt some combination of the following views:

  • Human social behavior is largely automatized.
    • Our conscious percepts, goals, and emotions are largely irrelevant to what we do.
    • Our conscious percepts, goals, and emotions are automatically triggered by preconscious analysis of environmental stimuli.
  • Consciousness is an afterthought, by which we give plausible or acceptable reasons for our behavior.
  • We are all zombies after all -- not, as Daniel Dennett would have it, because zombies are conscious too, but because consciousness really is epiphenomenal, and plays no causal role in our behavior.
Proponents of automaticity like Bargh do not consider behavior to be caused by the objectively described situation, as Skinner would have had it.  Rather, like all cognitive social psychologists, they place emphasis on the person's internal mental representation of the situation, and the cognitive processes by which it is constructed.  While it is axiomatic in cognitive social psychology that behavior is a function of the perceived situation, as opposed to the situation as objectively described, Bargh and his confreres argue that the mental representation of the situation is itself constructed automatically and preconsciously. Thus, Bargh and other automaticity theorists are able to maintain a superficial allegiance to cognitivism, while at the same time harkening back to the radical situationism of Skinner, and of earlier, pre-cognitive social psychology, and embracing the conscious inessentialism implicit in the work of Dennett. 

With apologies to Alexander Dubcek (1968) and Susan Sontag (1982), I call this

 behaviorism with a cognitive face.

One has to wonder: We had a cognitive revolution for this -- to be told that Skinner had it right after all?

Reading the social-psychological literature on automaticity, one might almost wonder why we bothered to have a cognitive revolution in the first place.

To some extent, the "automaticity juggernaut" within contemporary social psychology exemplifies the ambivalence with which the idea of consciousness is held in cognitive psychology and cognitive science (as in Flanagan's conscious shyness).  In addition, it seems to reflect a sincere belief, on the part of some social psychologists (including Bargh himself), that a truly scientific explanation of behavior must be deterministic, and leaves no room for anything like conscious will.   The implication is that consciousness, including conscious will, is epiphenomenal -- that is, it does not play any causal role in the world of particles and fields of force.  This view is exemplified by the "steamwhistle analogy" offered by T.H. Huxley, Darwin's "bulldog" defender of the theory of evolution by natural selection.

But there are also other sources of the automaticity juggernaut.

Still and all, one shakes one's head at the zeal with which some proponents of automaticity have jumped on the automaticity juggernaut.  There is a kind of schadenfreude, I think, when someone like Bargh takes it as his "sad duty" to report that a fondly held view of human nature is, in fact, a myth.  And that is precisely what he does.  In one of his papers, Bargh takes up Freud's account of his contribution to the "naturalization" of human life.  According to Freud,



  • Copernicus demonstrated that Earth was not the center of the Universe.
  • Darwin demonstrated that Man was just another animal.
  • Freud, for his part, sorrowfully demonstrated that Man is irrational.
Bargh picks up this theme of naturalization, accepting the assertions of Copernicus and Darwin, but substituting his own formulation for Freud's:
  • Social behavior is reflexive, automatic, and unconscious. 


Applying the Process-Dissociation Procedure

Which is where Jacoby's process-dissociation procedure (PDP) comes in.  Although still somewhat controversial, the PDP remains the most widely used means of estimating the relative contributions of automatic and controlled processes to task performance.  Common as it is in cognitive psychology, the PDP has rarely been used in social psychology.  But when it has been used, the results have been revealing -- and reassuring.
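The logic of the procedure is worth a sketch.  Under Inclusion instructions, automatic and controlled processes work together: a subject can respond correctly on the basis of either conscious recollection or automatic familiarity.  Under Exclusion instructions, the two are set in opposition: the target response is an error, one that occurs only when automatic familiarity operates unchecked by conscious recollection.  Two equations, two unknowns.  The following Python sketch shows the standard arithmetic (Jacoby, 1991); the proportions are hypothetical, chosen only to mimic the pattern of results described below, not taken from any actual experiment.

# Standard process-dissociation arithmetic (Jacoby, 1991):
#   P(inclusion) = C + A*(1 - C)   -- control or, failing that, automaticity
#   P(exclusion) = A*(1 - C)       -- automaticity unchecked by control
# Solving the two equations: C = I - E, and A = E / (1 - C).

def pdp_estimates(p_inclusion: float, p_exclusion: float) -> tuple[float, float]:
    """Return (C, A), the estimated controlled and automatic contributions."""
    c = p_inclusion - p_exclusion
    a = p_exclusion / (1.0 - c)
    return c, a

# Hypothetical proportions, for illustration only:
print(pdp_estimates(0.60, 0.15))   # full attention:    C = 0.45, A ~ 0.27
print(pdp_estimates(0.50, 0.30))   # divided attention: C = 0.20, A ~ 0.38

The point of the exercise is that both parameters are estimated from the same data, so the automatic and controlled contributions can be directly compared.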



Among the earliest applications of the PDP concerned the false fame effect.  Jacoby and his colleagues (1989) asked subjects to study a list of non-famous names, such as Sebastian Weisdorf, followed by a memory test.  One day later, they were given a long list of names, some clearly famous and others not, and asked simply to judge which of them was the name of a famous person.  Among the non-famous names on the list were names from the list that had been memorized 24 hours earlier.  The chief finding of the experiment was that the previously studied non-famous names were now judged to be famous.  Jacoby argued, reasonably, that the initial study session primed the non-famous names, so that when they appeared on the later fame-judgment task they "rang a bell", and that this increased familiarity was interpreted as evidence of fame.  And, of course, the priming was interpreted as automatic in nature.

But was it?  In a later study, Jennings and Jacoby (1993) applied the PDP to the false fame effect.  The experiment was run as before, except that this time there was both an Inclusion and an Exclusion task (we won't get bogged down in the details of how the Method of Opposition was actually implemented).  Moreover, the entire study was run under three experimental conditions: first, they compared full attention (no distraction) to divided attention (where the subjects performed a distracting task while they made fame judgments, in order to restrict cognitive resources); second, they repeated the full-attention condition with a group of elderly subjects, to compare with the college students.  

The results were revealing.  In the full-attention condition, which was comparable to the original demonstration of the false fame effect, controlled processing was more influential than unconscious, automatic processing.  The role of conscious, controlled processing was diminished in the divided-attention condition (naturally, because of the more limited cognitive resources available), and among the elderly (because of age-related declines in cognitive capacity).  But even under these conditions, automatic processing did not dominate conscious processing.  It would be more accurate to say simply that the false fame effect was produced by a mix of conscious and unconscious processing.


Similar findings were obtained in an experiment on spontaneous trait inferences.  Uleman and his colleagues have performed a number of experiments in which subjects studied photographs of strangers, each paired with a simple behavioral description, such as "Jane gave a dollar to the beggar".  Two days later, they were presented with a larger set of photos, including some of the photos studied previously, and asked to make judgments about the personalities of the people depicted.  The finding was that targets depicted in the old photos tended to receive trait attributions in line with the behavioral descriptions that had accompanied the photos studied on the first day.  For example, subjects tended to describe "Jane" as kind.  The interpretation was that, in line with what is called the fundamental attribution error, subjects automatically attribute behavior to a person's personality traits and other internal characteristics, rather than to the situation or some other external factor.  So, presentation of the behaviors automatically primed traits, which were then attached to the people depicted in the photos.  When the same photo appeared the next day, the subject automatically retrieved the corresponding trait information from memory.

But did it?  To his credit, Uleman and his colleagues (2005) applied the PDP to their experimental paradigm, running subjects under both Inclusion and Exclusion instructions.  In one condition, the subjects made trait judgments immediately after studying the photographs; in other conditions, the trait judgments were delayed by 20 minutes or 2 days (as in the initial experiment).  In the "immediate" condition, in fact, conscious processing proved to be more influential than automatic processing.  The role of conscious processing was reduced after a 20-minute or 2-day delay, reflecting the time-related decline of conscious recollection.  But even in these conditions, task performance was mediated by a pretty even mix of conscious and unconscious processing.


Perhaps the most revealing of these PDP studies is a pair of experiments by Payne and his colleagues concerning the weapon bias.  In these experiments, subjects are shown a brief video clip, and they have to judge whether the person in the clip is holding a weapon (a knife or a gun, for example) or a tool (like a wrench or a TV remote).  Before performing this task, however, the (white) subjects are shown the face of a black or white person.  The finding is that the white subjects are more likely to misidentify tools as guns when they have been primed by a black face.  Moreover, they are faster to correctly identify the object as a gun after being primed with a black face, and faster to correctly identify the object as a tool after being primed with a white face.  The interpretation is that the presentation of the faces automatically primed racial stereotypes in these white college students, including the stereotypical association of black people with crime and violence, and that this stereotype automatically biased their weapon judgments.  But did it?  It depends.

In his initial experiment, Payne (2001) applied the Method of Opposition in his standard experimental paradigm, in which there was no deadline, and subjects could take their time making their judgments.  In this case, controlled, conscious processing proved to be more important than automatic processing.




In a later study, Payne and his colleagues imposed a deadline on the subjects, forcing them to make their decision, weapon or tool, within 500 msec of the video.  This was done to more closely simulate the situation of a police officer who must make a split-second decision about whether an object in a suspect's hand is a weapon.  Under these circumstances, where subjects had to make very rapid decisions, automatic processes dominated; but there was still a healthy amount of controlled processing in the mix.
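The same two-equation logic sketched earlier applies here, treating "gun" responses on gun trials as hits and "gun" responses on tool trials as false alarms, as in Payne's (2001) analysis.  Again a minimal, self-contained sketch -- the rates are hypothetical, not Payne's data:

# Process-dissociation logic applied to the weapon task (after Payne, 2001):
#   P("gun" | gun trial)  = C + A*(1 - C)   -- control, or automatic bias
#   P("gun" | tool trial) = A*(1 - C)       -- automatic bias unchecked
def weapon_bias_estimates(hit_rate: float, false_alarm_rate: float):
    c = hit_rate - false_alarm_rate      # controlled discrimination
    a = false_alarm_rate / (1.0 - c)     # automatic (stereotype) bias
    return c, a

# Hypothetical rates following a black prime:
print(weapon_bias_estimates(0.90, 0.10))   # no deadline:       C = 0.80, A = 0.50
print(weapon_bias_estimates(0.75, 0.35))   # 500-msec deadline: C = 0.40, A ~ 0.58

On this kind of analysis, imposing the deadline reduces the controlled estimate while leaving the automatic estimate largely intact -- consistent with the pattern described above.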



Applying the QUAD Model to the IAT

A technique similar to the process-dissociation procedure has been used to separate automatic and controlled components of performance on the Implicit Association Test (IAT), devised by Greenwald, Banaji, and their colleagues.  As described in the lectures on Social Categorization, the IAT makes use of stimulus-response (in)compatibility to assess people's unconscious, or at least unacknowledged, attitudes toward things both mundane (like insects vs. flowers) and monumental (like Blacks vs. Whites or Koreans vs. Japanese).  The general idea is that subjects' performance on the IAT is influenced by automatic associations between concepts, such as flowers-pleasant or insects-unpleasant.  Seeing an insect automatically activates the association bad, while seeing a flower automatically activates the association good, and these automatically activated associations then lead automatically to prejudicial behavior toward the attitude objects.

At least, that's the theory.  In fact, Greenwald and Banaji offer little direct evidence that automatic processes underlie performance on the IAT.  Mostly, they show only that scores on the IAT are (in their view) relatively poorly correlated with conscious attitudes, as measured by techniques such as an attitude thermometer.  Recently, Jeff Sherman and his colleagues at UC Davis have proposed a QUAD model for the analysis of automatic and controlled components of task performance.

Here's an example of how the QUAD model can be applied to analyze performance on the Black-White version of the IAT.  It's complicated -- a lot more complicated than the process-dissociation procedure.
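The published model is a multinomial processing tree, fit simultaneously to error rates across all the trial types of the IAT, and its details are beyond the scope of this supplement.  But a stripped-down sketch can convey how the four parameters -- association activation (AC), detection (D), overcoming bias (OB), and guessing (G) -- might combine.  The tree below, and the assumption that an unguided guess is correct half the time, are simplifications of my own for illustration, not the published specification.

# A simplified QUAD-style accuracy prediction for a single IAT trial.
#   ac : probability that an automatic association is activated
#   d  : probability that the correct response can be detected
#   ob : probability that an activated association is overcome in favor
#        of the detected correct response
#   g  : probability that an unguided guess happens to be correct
def p_correct(ac: float, d: float, ob: float, g: float, compatible: bool) -> float:
    if compatible:
        # The activated association points at the correct response,
        # so no conflict arises and detection can only help.
        return ac + (1 - ac) * d + (1 - ac) * (1 - d) * g
    # Incompatible trial: the association points at the wrong response,
    # and must be overcome even when the correct response is detected.
    return ac * d * ob + (1 - ac) * d + (1 - ac) * (1 - d) * g

params = dict(ac=0.2, d=0.9, ob=0.8, g=0.5)   # hypothetical parameter values
print(p_correct(**params, compatible=True))    # 0.96
print(p_correct(**params, compatible=False))   # 0.904 -- the "IAT effect"

Fitting parameters like these to observed error rates is what allows the automatic and controlled contributions to performance to be compared, as in the studies described next.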




And when Conrey et al. performed the QUAD analysis, employing white students at UC Davis, they found that controlled processing dominated performance under standard conditions, with no time limitations on response.  The parameters reflecting controlled processing (detection and overcoming bias) were both much larger than the AC (association activation) and G (guessing) parameters.



Beer et al. performed a similar study, with White students at the University of Texas.  Here the parameter reflecting controlled processing was considerably reduced, but it was still greater than the AC parameter -- indeed, it remained very strong. 



These figures reveal a little secret about the IAT.  Although bias is supposed to be revealed by errors in IAT performance, in fact, performance is by and large error-free.  That is, most subjects correctly identify flowers as flowers and Blacks as Blacks and unpleasant words as unpleasant words, despite whatever unconscious associations they may harbor.

So, the bottom line is that automaticity plays some role in social cognition and behavior, as we would expect -- because automaticity plays some role in everything we do.  The relative impact of automaticity is increased under conditions that militate against conscious processing, such as distraction (which consumes attentional resources), long retention intervals (which induce forgetting) or short response windows (which preclude conscious processes from coming into play).   And even under ordinary circumstances, there is enough "mindlessness" in the ordinary course of everyday living to convince us that there's something to this automaticity business. 

But the assertion that "humans are closer to zombies than sentient beings much of the time" (Blakeslee, 2002) is wide of the mark.  There's just no evidence to support any such belief.


Critique of the Critique of Conscious Will

The social-psychological emphasis on automaticity underlies yet another threat to reason in moral psychology -- namely, a critique of the concept of conscious will itself. You don't have to think about things too hard to understand that the very concept of moral judgment depends on the freedom of the will. Neither concept applies in the natural world of planets and continents and lions and gazelles, where events are completely determined by events that went before. Moral judgment applies only when the target of the judgment has a real choice -- the freedom to choose among alternatives -- and when its choices make a difference to its behavior. The problem of free will, of course, is that we understand that we are physical entities: specifically, the brain is the physical basis of mind; and the brain, as a physical system, is not exempt from the physical laws that determine everything else that goes on in the universe; and so neither are our thoughts and actions. So the problem of free will is simply this: how do we reconcile our conscious experience of freedom of the will with the sheer and simple fact that we are physical entities existing in a universe that consists of particles acting in fields of force?

Philosophers have debated this problem for a long time -- at least since materialism began to challenge Cartesian dualism. Those who are compatibilists argue that the experience of free will is compatible with physical determinism, while incompatibilists argue that it is not, and that we must reconcile ourselves to the fact that we are not, in fact, free to choose what to do and what to think. Those incompatibilists who have read a little physics may make a further distinction between the clockwork determinism of classical Newtonian physics and the pinball determinism of quantum theory, maybe invoking Heisenberg's observer effect and uncertainty principle (they're apparently not the same thing) as well; but injecting randomness and uncertainty into a physical system is not the same as giving it free will, so the problem remains where it was.

Psychologists too have entered the fray: those of a certain age, as I am, will remember the debate between Carl Rogers and B.F. Skinner over the control of human behavior (Rogers & Skinner, 1956; Wann, 1964). These days, many psychologists, for their part, appear to come down on the side of incompatibilism, arguing essentially that free will is an illusion -- a necessary illusion, if we are to live in a society governed by laws, but an illusion nonetheless. 

As a case in point, consider The Illusion of Conscious Will, in which Dan Wegner (2002) invokes the concept of automaticity and asserts that "the real causal mechanisms underlying behavior are never present in consciousness" (p. 97). Just to make his meaning clear, he presents the reader with a diagram contrasting the "apparent causal path" between thought and action with the "actual causal path" connecting action to an "unconscious cause of action".


More recently, Mike Gazzaniga (2011, pp. 105-106) has picked up on the theme, writing that the "illusion" of free will is so powerful that "we all believe we are agents... acting willfully and with purpose", when in fact "we are evolved entities that work like a Swiss clock" (no pinball determinism for him!). To illustrate his point, he recounted an instance in which, while walking in the desert, he jumped in fright at a rattlesnake: he "did not make a conscious decision to jump and then consciously execute it" -- that was a confabulation, "a fictitious account of a past event"; rather, "the real reason I jumped was an automatic nonconscious reaction to the fear response set into play by the amygdala" (pp. 76ff).

Similarly, Sam Harris (2012), a neuroscientist who burst on the scene with a vigorous critique of religion, has weighed in with a critique of free will, arguing, like Wegner, that free will is simply an illusion. "Our wills are simply not of our own making. Thoughts and intentions emerge from background causes of which we are unaware and over which we exert no conscious control."

This argument isn't just inside baseball. In its March 23, 2012 issue, the Chronicle of Higher Education published a forum entitled "Free Will is an Illusion", with a contribution by Mike Gazzaniga; the May 13, 2012 issue of the New York Times carried an Op-Ed piece by James Atlas entitled "The Amygdala Made Me Do It"; and the May-June 2012 issue of Scientific American Mind featured a cover story by Christof Koch detailing "How Physics and Biology Dictate Your 'Free' Will". These aren't the only examples, so something's happening here. What we might call psychological incompatibilism is beginning to creep into popular culture. Which, like moral intuitionism, is OK if it's true. The question is: Is it true?

Wegner, Gazzaniga, and Harris are inspired, in large part, by a famous experiment performed by the late Benjamin Libet, a neurophysiologist, involving a signal known as the readiness potential (Libet, Gleason, Wright, & Pearl, 1983). When someone makes a voluntary movement, an event-related potential appears in the EEG about 600 milliseconds beforehand. 

 


Libet added to this experimental setup what he called the "clock-time" method. Subjects viewed a light which revolved around a circle at a rate of approximately one revolution every 2.5 seconds; they were instructed to move their fingers anytime they wanted, but to use the clock to note the time of their first awareness of the wish to act. 

 


Libet discovered that the awareness of the wish preceded the act by about 200 msec -- not much of a surprise there. But he also discovered that the readiness potential preceded the awareness of the wish by about 350 msec (200 + 350 = 550, roughly the 600 msec figure cited above). So there is a second type of readiness potential, which Libet characterized as a predecisional negative shift. Libet concluded that the brain decides to move before the person is aware of the decision, which manifests itself as a conscious wish to move. Put another way, behavior is instigated unconsciously (Wegner's "unconscious cause of action"), conscious awareness occurs later, as a sort of afterthought, and conscious control serves only as a veto over something that is already happening. In other words, conscious will really is an illusion, and we are nothing more than particles acting in fields of force after all.
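The arithmetic of the timeline is easy to keep straight in a brief sketch (the round numbers are approximations to the values reported by Libet et al., 1983):

# The approximate Libet timeline, in msec before the movement itself (t = 0).
AWARENESS_BEFORE_ACT = 200   # conscious wish (W) precedes the act
RP_BEFORE_AWARENESS = 350    # readiness potential onset precedes W

rp_before_act = RP_BEFORE_AWARENESS + AWARENESS_BEFORE_ACT
print(rp_before_act)         # 550 -- roughly the "600 msec" figure above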

Libet's observation of a predecisional negative shift has been replicated in other laboratories, but that does not mean that his experiment is immune to criticism, or that his conclusions are correct (for extended discussions of Libet's work, including replies and rejoinders, see Banks & Pockett, 2007; Libet, 1985, 2002, 2006). In the first place, there's a lot of variability around those means, and the time intervals are such that the gap between the predecisional negative shift and the awareness of the wish could be closer to zero. And there are a lot of sources of error, including error in determining the onset of the readiness potential, and error in determining the onset of the conscious wish (as for the latter, think about keeping track of a light that is rotating around a clockface once every 2.5 seconds). Still, that difference is unlikely to be exactly zero, and so the problem doesn't go away.

At a different level, Libet's experiment has been criticized on the grounds of ecological validity. The action involved, moving one's finger, is completely inconsequential, and shouldn't be glibly equated with choosing where to go to college, or whom to marry, or even whether to buy Cheerios or Product 19 -- much less whether to throw a fat man off a bridge to stop a runaway trolley. The way the experiment is set up, the important decision has already been made -- that is, to participate in an experiment in which one is to raise one's finger while watching a clock. And it's made out of view of the EEG apparatus. I find this argument fairly persuasive. But still, there's that nagging possibility that, if we recorded the EEG all the time, in vivo, we'd observe the same predecisional negative shift before that decision was made, too.

More recently, though, Jeff Miller and his colleagues found a way to address this critique. They noted that the subjects' movements are not truly spontaneous, for the simple reason that they must also watch the clock while making them. They compared the readiness potential under two conditions. In one, the standard Libet paradigm, subjects were instructed to watch the clock while moving their fingers, and to report their decision time. In the other, they were instructed to ignore the clock, and were not asked for any reports. Subjects in both conditions still made the "spontaneous" decision whether, and when, to move their fingers. But Miller et al. observed the predecisional negative shift only when subjects also had to watch the clock and report their decision time. They concluded that Libet's predecisional negative shift was wholly an artifact of the attention paid to the clock. It does not indicate the unconscious initiation of ostensibly "voluntary" behavior, nor does it show that "conscious will" is illusory. Maybe it is, but the Libet experiment doesn't show it.

The Miller experiment is important enough that we'd like to see it replicated in another laboratory, though I want to stress that there's no reason to think that there's anything wrong with it. When they did what Libet did, they got what Libet got. When they altered the instructions, but retained voluntary movements, Libet's effect disappeared completely -- not just a little, but completely. The ramifications are pretty clear. This doesn't mean that the problem of free will has been resolved in favor of compatibilism, though it does suggest that compatibilism deserves serious consideration. Personally, I like the implication of a paper by John Searle, titled "Free Will as a Problem in Neurobiology" (Searle, 2001). We all experience free will, and there's no reason, in the Libet experiment or any other study, to think that this is an illusion. It may well be a problem for neurobiology, but it's a problem for neurobiologists to solve. I don't lose any sleep over it. But if free will is not an illusion, and we really do have a meaningful degree of voluntary control over our experience, thought, and action, then moral judgment is secure from this threat as well. We should be willing to make moral judgments, using all the information -- rational and intuitive -- that we have available to us.

 

Free Will, Within Limits

And that's where I came down in my response to the Templeton Foundation's Big Question: "Does Moral Action Depend on Reasoning?". We are currently in the midst of a retreat from, or perhaps even a revolt against or an assault on, reason. Some of this is politically motivated, but some is aided and abetted by psychologists who, for whatever motive, seek to emphasize emotion over cognition, the unconscious over the conscious, the automatic over the controlled, brain modules over general intelligence, and the situation over the person (not to mention the person-situation interaction).

Moral intuitionism represents a fusion of automaticity and emotion, and like the literature that comprises the "automaticity juggernaut" (Kihlstrom, 2008), it relies mostly on demonstration experiments showing that gut feelings can play a role in moral judgments. There is no reason to generalize these findings to what people do in the ordinary course of everyday living. As I wrote in the Templeton essay (pp. 37-38):

Freedom of the will is real, but that does not mean that we are totally free. Human experience, thought, and action are constrained by a variety of factors, including our evolutionary heritage, law and custom, overt social influences, and a range of more subtle social cues [like demand characteristics]. But within those limits we are free to do what we want, and especially to think what we want, and we are able to reason our way to moral judgments and action.

*****

It is easy to contrive thought experiments in which moral reasoning seems to fail us.... When, in (thankfully) rare circumstances, moral reasoning fails us, we must rely on our intuitions, emotional responses, or some other basis for action. But that does not mean that we do not reason about the moral dilemmas that we face in the ordinary course of everyday living -- or that we reason poorly, or that we rely excessively on heuristic shortcuts, or that reasoning is infected by a host of biases and errors. It only means that moral reasoning is more complex and nuanced than a simple calculation of comparative utilities. Moral reasoning typically occurs under conditions of uncertainty... where there are no easy algorithms to follow. If a judgment takes place under conditions of certainty, where the application of a straightforward algorithm will do the job, it is probably not a moral judgment to begin with.

If you believe in God, then human rationality is a gift from God, and it would be a sin not to use it as the basis for moral judgment and behavior. If you do not believe in God, then human rationality is a gift of evolution, and not to use it would be a crime against nature.


The Allure of Automaticity

It is one thing to assert that automatic processes play a role in social interactions, along with controlled processes, with the proviso that some automatic processes are more automatic than others. It is another thing entirely to embrace and promote the idea that automatic processes dominate human experience, thought, and action to the virtual exclusion of everything else. Although there is plenty of evidence that automatic processes play some role in social cognition and behavior, as they probably do in almost every aspect of human performance, nothing in any experimental demonstration of automaticity demands such a sweeping inference.

So why are some social psychologists inclined to take this further, empirically unjustified, and logically unnecessary, step? Perhaps, if the step is not motivated by empirical data, then it is motivated by something closer to the a priori and quasi-metaphysical reasons criticized by James more than a century ago.

Partly, the enthusiasm for automaticity seems to reflect a reaction against the "cognitive revolution" in social psychology, with its (tacit) view of social interaction as mediated by conscious, deliberate, rational thought -- as reflected, for example, in balance theory (Heider, 1946, 1958), cognitive consistency theory (Festinger, 1957) (see also Abelson et al., 1968), cognitive algebra (N. H. Anderson, 1974), and early formulations of attribution theory (Kelley, 1967). It is also probably not an accident that social psychologists' interest in automaticity began to develop at roughly the same time as the "affective counterrevolution" emerged in social psychology, with its view of affective states as automatically generated by environmental stimuli, independent of cognitive analysis (Zajonc, 1980, 1984). In fact, Zajonc (1999) has explicitly connected the two themes of automaticity and emotion.

Then, too, the biologization of social psychology may contribute to a reduced role for conscious control in theories of social interaction. To the extent that the reasons for particular patterns of social interaction are to be found in "selfish" genes whose only goal is their own reproduction (Dawkins, 1976), there seems to be little room for the kind of conscious, deliberate, thought that we commonly associate with human intelligence. So too, if social interaction is driven by mental and behavioral instincts that we share with our nonhuman ancestors (Barkow, Cosmides, & Tooby, 1992; Buss, 1999). Finally, social neuroscience (Cacioppo, Berntson, & McClintock, 2000) can, unless we are careful, veer into a reductionism that leaves conscious thought and other aspects of commonsense "folk psychology" entirely out of the explanation of behavior (Churchland, 1986).

Although each trend entails risks, both the emergence of an affective psychology paralleling cognitive psychology, and an interest in the neural and other biological underpinnings of social interaction should be seen as positive developments within social psychology. But there also seems to be a darker side to the current interest in automaticity. Currently, mainstream social psychology is characterized by a focus on judgment error, normative violations, and other aspects of social misbehavior (Krueger & Funder, 2003). While it may be true (or at least arguable) that science learns more from counter-intuitive findings that undercut common-sense "folk psychology", it is also true that this emphasis on the negative can degenerate into what might be called a "People Are Stupid" school of psychology (Kihlstrom, 2004a). That is, as we go about the ordinary course of everyday living, we do not think very hard about anything, and rely on biases, heuristics, and other processes that lead us into judgmental error (e.g., Nisbett & Ross, 1980; Ross, 1977) (see also Gilovich, 1991). In this view, the evidence for irrationality consists not just in demonstrations of various heuristics and biases in judgment, because some of these might merely be evidence of bounded rationality (Simon, 1957), but also evidence of unconscious, automatic processes. It is not just that we do not think too hard about things; we also do not pay too much attention to what is going on around us, or to what we are doing (Gilbert & Gill, 2000). Nor do we know too much about why we do what we do (Nisbett & Wilson, 1977; T. D. Wilson & Stone, 1985; W. R. Wilson, 1979). Thought and behavior just happens, automatically, in response to environmental stimuli, and our belief that we control what we think and do amounts to little more than an illusion, an after-the-fact rationale. In fact, our attempts to consciously control our experience, thought, and action typically backfire (D.M. Wegner, 1989), and we would be better off if we relied on automatic processes (T. D. Wilson, 2002).

Also on the dark side is a long-standing, but again largely unspoken, alliance between social psychology and behaviorism (Zimbardo, 1999). Just as Watson (1913, 1919) and Skinner (1938, 1953, 1977, 1990) viewed behavior as under the control of environmental stimuli, so social psychology has historically been defined as concerned with the influence of the social situation on the individual's experience, thought, and action. Floyd Allport (1924), in his pioneering text on social psychology, adopted an expressly behavioristic stance, interpreting social behavior either as the response to the stimulus of another person's behavior or as a stimulus to another person's response. The behaviorist emphasis on the situation was codified by Gordon Allport (1954) thirty years later, when he defined social psychology as the study of "how the thought, feeling, and behavior of individuals are influenced by the actual, imagined, or implied presence of other human beings" (p. 1).

We can see the behaviorist emphasis on social behavior as response to environmental stimuli in the "Four As" of social psychology -- aggression, altruism, attitude change, and attraction; in the classic studies of social facilitation and other aspects of social impact, conformity, and persuasion; and elsewhere on almost any randomly selected page of a typical social psychology textbook. The doctrine of situationism is so firmly entrenched in social psychology that Ross and Nisbett (1991) identified "the principle of situationism" as the first leg of "the tripod on which social psychology rests" (p. 8). While the cognitive perspective that emerged in social psychology in the 1960s often stressed the importance of the perceived situation, many of the classic studies in the field in fact made little or no reference to the internal cognitive processes by which individuals constructed the mental representations of the situation that actually governed their behavior.

As Berkowitz and Devine (1995) have noted, all of this classic literature can be reinterpreted in terms of the automatic elicitation of feelings, thoughts, and actions by environmental stimuli. Wegner and Bargh (1998) agree:

Classic social psychology... makes people appear to be automatons. The situational influences on behavior investigated in these [classic] studies were (a) unintended on the part of the individual, (b) not something of which the person was aware, (c) a response to the situation occurring before the individual had a chance to reflect on what to do (i.e., efficient) or (d) difficult to control or inhibit even when the person is cognizant of the influence. As it happens, these are characteristics of automatic psychological processes, not of conscious control, and comprise a handy working definition of automaticity (p. 447).

Of course, it should be noted that these classic experiments were all conducted before the concept of automaticity emerged in cognitive psychology. Therefore, we do not really know whether the effects they yielded were in fact unintended, outside of awareness, efficient, or difficult to inhibit.

A recent overview of social psychology intended for neuroscientists made the connection between situationism and automaticity even clearer:

"If a social psychologist was going to be marooned on a deserted island and could only take one principle of social psychology with him it would undoubtedly be 'the power of the situation'. All of the most classic studies in the early days of social psychology demonstrated that situations can exert a powerful force over the actions of individuals....

"If the power of the situation is the first principle of social psychology, a second principle is that people are largely unaware of the influence of situations on behavior, whether it is their own or someone else's behavior (Lieberman, 2005, p. 746).

The reason that people are blind to situational influences, on this view, is that those influences operate automatically and unconsciously.

Bargh himself has clearly connected behaviorism, situationism, and automaticity with the problem of free will (Bargh, 1997a, p. 1):

Now, as the purview of social psychology is precisely to discover those situational causes of thinking, feeling, and acting in the real or implied presence of other people..., it is hard to escape the forecast that as knowledge progresses regarding psychological phenomena, there will be less of a role played by free will or conscious choice in accounting for them. In other words, because of social psychology's natural focus on the situational determinants of thinking, feeling, and doing, it is inevitable that social psychological phenomena will be found to be automatic in nature.

The automaticity juggernaut is not strictly a return to stimulus-response behaviorism, because it agrees that cognitive processes mediate between stimulus and response. It is thus able to maintain a superficial allegiance to cognitivism while harkening back to a radical situationism. If the cognitive processes underlying interpersonal behavior are automatically triggered by environmental cues, then behavior is determined by the environment; and if social behavior is not absolutely automatic, at least not too much thought has gone into it. Inspired by the late Susan Sontag, we can think of this as behaviorism with a cognitive face.

 

Are We Automatons After All?

While the cognitive revolution made the study of consciousness respectable again (E. R. Hilgard, 1980), the topic of consciousness has always made some psychologists (and other cognitive scientists) nervous, resulting in what Flanagan (1992) has dubbed conscious shyness. In part, conscious shyness reflects a kind of positivist reserve, itself a holdover from behaviorism, which prefers behavior over self-reports as the data for psychology; in part, it reflects a strategic preference for approaching consciousness obliquely, through studies of perception, memory, and the like that do not expressly evoke the concept of consciousness. But there is more to it than that. In Flanagan's view, conscious inessentialism, or the idea that conscious awareness and control are not necessary for many aspects of cognition, feeds the epiphenomenalist suspicion that consciousness plays no causal role in behavior after all. In this view, we may be conscious zombies, but we are zombies nonetheless.

By embracing the concept of automaticity, we can admit that we have consciousness, and even search for its neural correlates, without also admitting that consciousness has anything to do with causing our behavior. As noted earlier, Wegner has vigorously argued that conscious control is an illusion, and that our conscious intentions are previews of action, not the causes of it (D.M. Wegner, 2002). As he puts it:

"This is the way it needs to be for progress in the explanation of human psychology. The agent self cannot be a real entity that causes actions, but only a virtual entity, an apparent mental causer" (D.M. Wegner, 2005, p. 23).

This quote makes it clear that the automaticity juggernaut is fueled by pre-theoretical ideological commitments rather than by empirical findings -- commitments not just to the doctrine of situationism, or to the behaviorist viewpoint, but to a particular view of what science is, and of what kinds of explanations a scientific theory allows.

Epiphenomenalism, in turn, links to a perennial problem for psychology, and indeed for all the social sciences: the question of free will and determinism (Rogers & Skinner, 1956). To some theorists, the idea that consciousness actually plays a causal role in behavior seems to violate the fundamental assumption of the scientific enterprise -- that every event has a physical cause, and that human -- or, for that matter, superhuman -- agency has no place in scientific explanation. Given the choice between adhering to the assumption of determinism and taking consciousness seriously, some scientists choose the former, construing thought and action as automatic and consciousness as epiphenomenal, without causal efficacy. Thus Bargh and Ferguson (2000) write that automaticity succeeded where behaviorism failed, solving the problem of free will by showing how behavior could be determined by the stimulus environment after all:

[T]he same higher mental processes that have traditionally served as quintessential examples of choice and free will -- such as goal pursuit, judgment, and interpersonal behavior -- have been shown recently to occur in the absence of conscious choice or guidance. It would seem, therefore, that the mid-century failure of behaviorism to demonstrate the determinism of complex higher order human behavior and mental processes occurred not because those processes were not determined but rather because behaviorists denied the existence of the necessary intraindividual, psychological explanatory mechanisms... mediating between the environment and those higher processes....

[T]he failure of behaviorism in no way constituted the failure of determinism. We... present the case for the determinism of higher mental processes by reviewing the evidence showing that these processes, as well as complex forms of social behavior over time, can occur automatically, triggered by environmental events and without an intervening act of conscious will or subsequent conscious guidance (p. 926).

One is tempted to ask whether we really had a cognitive revolution in psychology for this -- to learn that Skinner had it right after all; that we really are all under the control of environmental events, and that all he missed was the wiring diagram that connects stimulus with response.

For Wegner (2002), as for Bargh and Ferguson (2000), it seems that automaticity is the key to the scientific status of psychology itself. Automaticity does more than demystify unconscious mental life: it permits us to bypass the will (Bargh, 2005) and allows psychology to adopt the pinball determinism of classical physics. Bargh, Wegner, and others, faced with an apparent conflict between free will and determinism, choose determinism, and automaticity provides the means for doing so. But this may be a false choice: certainly there is nothing in the scientific evidence concerning the role of automatic processes in social behavior that would compel us to choose automaticity over control.

As Searle (1992, 1999, 2000, 2001a, 2001b) has argued, whenever we are confronted by a choice between two equally compelling beliefs, such as our experience of free will and our scientific commitment to determinism, it is likely that the choice has been poorly framed to begin with. Perhaps we need to jettison the notion of free will as a sentimental piece of folk psychology that cannot survive the progress of science. Or perhaps the proper stance is to accept the experience of conscious will as valid, and to try to explain how free will can enter into the causal scheme of things in a material world of neurons, synapses, and neurotransmitters. The choice is ours to make, and it will determine whether we have a science of the mind worth having.


This page last revised 10/23/2015.