We have argued that attempts to shore up the doctrine of traits with more sophisticated methodologies generally failed, and that the psychometric approach did not produce a genuinely scientific account of personality. What to do now? One possibility is simply to give up, concluding that human thought and action is random, haphazard, and inherently unpredictable. While this point of view has its proponents among certain philosophers and other writers of a romantic or mystical persuasion, it is rejected outright by scientific psychology. The fundamental assumption of any scientist is that events in the material universe obey laws that can be discovered by disciplined, rational thought -- and this is no less true for the world of the mind than it is for the world of atomic particles.
[Add something here on those who take Heisenberg's Uncertainty Principle and Godel's Proof as reasons to suppose that the laws governing human thought and action are unknowable, if not nonexistent.] The question is: where do we look for lawfulness and predictability?
One argument, which became popular long before the critique reached full force, was that traditional psychologists made a fundamental error in trying to understand human thought and action in terms of "invisible entities". Recall that traits, like motives and defenses, are hypothetical constructs which cannot (yet) be directly observed and whose existence must therefore be inferred. Adopting classical physics and chemistry as their models, some psychologists argued that the science should no longer attempt to study these objects -- indeed, they were likened to unicorns, the ether, and phlogiston -- but rather should focus its efforts on predicting publicly observable behavior. Instead of relying on intrapsychic factors for this task, they turned to variables that were external to the person, in the physical and social situation in which behavior takes place.
This viewpoint can be described as the doctrine of situationism. In its various forms, the doctrine of situationism in personality entails a radical shift in how personality is conceptualized. Historically, personality psychologists have focused on variables such as traits that are internal to the person in their attempt to account for human uniqueness. By contrast, the situationist analysis focuses attention on variables that are external to the person. As some have noted (Carson), radical situationism in personality results in a paradox, in that it ignores people when dealing with individual differences in personality.
Given the situationist assumption that the important causal variables are external to the organism, the question naturally arises as to the nature of these variables. In other words, what are the attributes or dimensions along which situations can be compared? In general, three major answers to this question have arisen. One, environmental psychology, deals with the effects of the physical environment, broadly construed, on experience, thought, and action. A second, experimental social psychology, deals with the effects of the social environment -- the effects on the individual of the presence and behavior of others. A third, behaviorism, is a systematic position that emphasizes the role of stimuli, reinforcements, and learning processes in shaping and maintaining behavior.
The World Outside
Let us begin our analysis of environmental effects with some basic distinctions. First, we may define the environment as the persistent, general context in which behavior occurs. The planet Earth is one such environment, the Moon is another; the United States is one environment, Antarctica another; downtown Chicago is one environment, Jackson, Mississippi, another. Obviously, environments can be more or less broadly construed. However, they seem clearly distinct from situations, which are the specific, momentary contexts in which psychological functioning occurs. A classroom is one situation, a synagogue is another; even within the classroom, the situation is different when it is used for a lecture than when it is used to show a movie; the synagogue is a different situation when it is used for a wedding than when it is used for a funeral. Finally, stimuli are the elements that comprise the immediate situational context.
In general, situational analyses focus on the situation and its constituent stimuli. These can be approached at different levels of analysis. At the macrolevel, we can examine the effects on people of urban versus rural environments, skyscrapers versus shopping malls, and Eastern versus Western sociocultural norms. At the microlevel, we can analyze the effects of the presence versus absence of other people, high versus low ambient temperature, or the presence versus absence of specific people like parents or teachers. At each of these levels, the analysis can focus on physical variables such as climate, structure, or lighting -- these are the subject of a relatively new field, environmental psychology. Alternatively, analysis can focus on social variables such as norms, values, goals, and expectations -- these are the central topics in traditional social psychology.
Conceptualizing the Situation
Although situationism represents a radical shift in focus for personality psychology, in some respects investigators of environmental effects face the same kinds of problems that confront their more person-centered colleagues. In particular, what variables are important to a situational analysis? What are the features or attributes that distinguish one situation from another? What are the dimensions along which situations can be compared? How can situations be categorized most meaningfully? Answering these questions has led to the development of type and trait theories of situations, roughly paralleling type and trait theories of people. In both cases, the goal has been to develop a meaningful taxonomy of situations and their constituent stimuli (Frederickson, 1972; Moos, 1973; Tversky & Hemenway, 1983).
Environments in General
The problem was well stated by Sells (1963), who made a direct analogy between the analysis of persons and the analysis of situations:
The most obvious need in evaluating the manifold encounter of organism and environment is ... a taxonomic, dimensional analysis of stimulus variables comparable to the trait systems that have been developed for individual difference variables....

Sells' own system, based on an earlier analysis by Sherif and Sherif (1956), required detailed descriptions of variables ranging from the weather to the socioeconomic status of the people who inhabit the situation. MORE ON SELLS' SYSTEM HERE.
Another classification scheme was proposed by Krause (1970), who argued that social behavior occurs in one of seven broad classes of situations. (a) Joint working situations involve two or more people working toward a mutually shared goal, under a promise of compensation. (b) Trading situations involve the participants in conflict rather than cooperation. This conflict is resolved through compromise and exchange. (c) Fighting situations also involve conflict, but it is settled without compromise. (d) Sponsored teaching involves the formal roles of teacher and learner, and the goal is the modification of some aspect of the learner's behavior. (e) Serving involves people satisfying the needs of others, in return for compensation. (f) Self-disclosure involves the mutual revelation of attitudes and opinions. (g) Playing involves mimicking the other types of situations. However, this mimicry is done simply for the pleasure of the performance. EXPAND, WITH EXAMPLES.
Krause originally construed these categories of situations as types. As must be obvious, however, any such attempt leads to the problems of partial and combined expression characteristic of other typological schemes. While it is possible to find clear examples of sponsored teaching or serving situations, other situations are less clear. EXAMPLE. Moreover, some situations seem to involve combinations of types. EXAMPLE. Perhaps it is better to consider these as continuous dimensions on which situations vary, in a manner directly analogous to personality traits. Thus, any situation would be considered to have each of the seven properties, to some degree. Note, however, that Krause classified situations in purely psychosocial terms: the situations are classified in terms of the goals and expectations of the persons who reside in them. Absent from this analysis are the sorts of categories that would be considered important by environmental psychologists.
Another, more comprehensive scheme has been proposed by Moos (1973). Rather than classifying environments per se, he classifies the features and attributes that environments possess. These categories are not independent of each other -- rather, they are overlapping and interrelated in various ways. (a) Ecological dimensions have to do with geographical and meteorological aspects of the situation, as well as with architecture and physical design. Relevant variables include atmospheric pressure, ambient temperature, population density, the height of a building, and the color of its interior walls. (b) Behavior settings are categories of situations defined by the actions that characteristically occur in them. Thus, seminar rooms are places where intellectual talk occurs; food is eaten in dining rooms, parties are thrown in living rooms, people sleep and make love in bedrooms. (c) Dimensions of organizational structure include the ratio of staff to clients, salary levels, and the degree of hierarchical control. (d) Also relevant are the personal and behavioral characteristics of the inhabitants of the milieu -- their typical level of intelligence, socioeconomic status, or educational achievement. (e) Psychosocial characteristics of the situation and organizational climate include the amount of nurturance offered the inhabitants, the pressure on them to conform, and the standards imposed for achievement. (f) Finally, environments can be classified in terms of their functional reinforcements -- the consequences of behaviors performed in the situation.
Moos' scheme is hierarchical, in that each of the categories described above can be further divided into meaningful subcategories. His own research has focused on the climate of such organizations as psychiatric wards and community mental health centers, prisons and military installations, college dormitories and elementary classrooms. He proposes that there are three subcategories of climate. (a) Relationship dimensions pertain to the amount of involvement of the inhabitants with each other, personal support, and emotional expressiveness. (b) Personal development has to do with the direction in which inhabitants are changed by the milieu. (c) System maintenance and change has to do with the amount of order and organization imposed by the milieu, the clarity of its structure, the degree of control it has over its inhabitants, and the amount of pressure on them to work.
Although the taxonomic exercise is inspired by the search for personality traits, there are important differences between Moos' approach and that of, say, Cattell or Eysenck. While personality trait theorists (except for Allport) believe that each individual has some standing on every trait dimension, Moos argues that particular dimensions may simply be irrelevant in particular environments. In his view, the relationship and system dimensions appear to be quite similar across various situations. All can be rated in terms of the extent to which the inhabitants support each other, for example. By contrast, the applicable dimensions of personal development may be quite different from one situation to another.
GIVE AN EXAMPLE HERE OF MOOS' ANALYSIS
The College Environment
Given the important role of the college student in psychological research, it is not surprising that colleges and universities have been the focus of many situational analyses. For example, Pace (1968) developed a dimensional scheme for classifying colleges and universities, based on students' ratings of their schools on specially designed questionnaires. A factor analysis of data derived from a sample of 50 colleges and universities yielded five dimensions: practicality, community, awareness, propriety, and scholarship. MORE HERE.
A similar analysis was performed by Astin (1962), using information derived from college catalogs and other public sources. This research also yielded five factors, but these were quite different from those obtained by Pace: affluence, size, masculinity, homogeneity of offerings, and technical emphasis. MORE HERE.
A rather different approach was taken by Rock et al. (Rock, Baird, & Linn, 1972), based on a sample of student ratings of 95 colleges. Their analysis was directed towards types of colleges, of which they found (coincidentally) five. MORE HERE.
Environmental Influences on Social Behavior
From a situational perspective, one way to establish the regularities in psychological functioning is to relate people's thoughts, moods, and actions to features of the external environment. This idea has a long history in sociology and anthropology. Huntington (1915), for example, argued that such factors as topography, rainfall, and temperature shaped the nature of whole cultures and societies. EXAMPLE HERE.
The Behaviorist Analysis of Social Behavior
The Influence of the Social Situation
Even informal analysis suggests that there are many aspects of human social behavior that are amenable to description in behavioristic terms. The fear and trembling that most of us experience upon entering the dentist's office seems to be a clear example of classical conditioning. Our morning rush to adjust the clock radio after the station comes on, but before the alarm goes off, seems roughly parallel to avoidance learning. Wishing for money, rather than for the things it will buy, appears to be a case of secondary reinforcement. Slot machines, which permit an occasional win, seem to attract high rates of silver-dollar-inserting behavior by virtue of their variable-ratio schedule of reinforcement, while the occasional pop quiz places the studying behavior of students in calculus on a variable-interval schedule. Formal experiments, too, indicate that human behavior can be placed under the control of classical and instrumental contingencies, as indicated earlier. Such considerations suggest that a thoroughly situational account of human behavior is a plausible one.
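The schedule contrast mentioned above can be made concrete with a toy simulation. The code below is purely illustrative -- the payoff probability and interval length are invented for the example, and no claim is made about actual slot machines or classrooms. It contrasts a variable-ratio arrangement, where reward tracks the amount of responding, with a variable-interval arrangement, where reward is capped by elapsed time.

```python
import random

def vr_wins(n_responses, p=0.05, seed=0):
    """Variable-ratio payoff: each response (e.g., a coin in the slot)
    wins independently with probability p, so total reward tracks the
    total amount of responding."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n_responses))

def vi_wins(n_responses, session_min=60.0, interval_min=5.0, seed=0):
    """Variable-interval payoff: a reward is 'set up' at random times
    (about one per interval_min), and the next response collects it.
    With any regular responding, nearly every setup is collected, so
    wins are limited by elapsed time rather than by response count --
    like pop quizzes rewarding steady rather than frantic studying."""
    rng = random.Random(seed)
    setups = 0
    t = rng.expovariate(1.0 / interval_min)
    while t < session_min:
        setups += 1
        t += rng.expovariate(1.0 / interval_min)
    return min(setups, n_responses)
```

Under these assumptions, multiplying responses tenfold multiplies variable-ratio payoffs roughly tenfold but leaves variable-interval payoffs essentially unchanged, which is one way of seeing why the two schedules sustain such different patterns of responding.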
Conformity and Compliance
Any doubt concerning situational influences on human behavior should be erased by the outcomes of certain classic experiments in social psychology. Consider, for example, an experiment by Asch (1951) in which subjects were asked to perform a simple perceptual task: indicating which of three lines matched a standard in terms of length. When tested in isolation, as might be expected, the college student subjects made few errors. Somewhat surprising results, however, were obtained when the subjects were tested in groups composed of themselves and several confederates of the experimenter. The experiment was arranged so that all subjects announced their decisions out loud, in turn, with the subject going last. On critical trials, the confederates adopted an incorrect consensus, unanimously giving a wrong response. Under these circumstances, the subjects conformed to the group response on an average of 32% of the trials. When there was just a single dissenter from the group (i.e., a confederate who announced the objectively correct judgment), conformity on the part of the subjects dropped to 6% of the trials. Similar effects had been shown earlier by Sherif (1935). He made use of the autokinetic effect, a perceptual phenomenon in which a stationary point of light appears to move. Subjects who were naive to this effect were ushered into a darkened room and told that their task was to estimate how far a light moved. This is a difficult judgment to make, and subjects tested individually differed widely among themselves. In other conditions, an experimental confederate was told to announce judgments that were systematically higher or lower than those of the subject. The result was that on subsequent trials the subjects shifted their estimates in the direction of those of the confederate.
Asch made a distinction between conformity at the level of behavior, in which the conforming individual nonetheless believes that the majority is wrong, and conformity at the level of belief, where social pressure actually results in a change in perception. For present purposes, the important point is that in both ambiguous and unambiguous cases a situational factor -- whether people have an ally, or whether they are given a frame of reference -- is a powerful determinant of their behavior.
Another situational effect can be observed in comparisons between group and individual decision-making. Stoner (1961) asked his subjects to resolve a "life dilemma" in which some degree of promise was coupled with some degree of risk. For example, the choice might be between staying in a secure job as a gas-station attendant in a rural area, and trying one's luck as a novelist in the big city. Subjects were polled in isolation, and again following a group discussion. The finding was that groups were more likely to take the risk than the individuals comprising them were, a phenomenon known as the risky shift. As it happens, Stoner studied responses to a set of problems in which most individuals are risky in their private judgments. Later research, employing a wider range of problems, revealed an interesting pattern. For problems where individuals are generally conservative in private, group discussion leads them to be even more conservative; for other problems, where individuals are already inclined to take the risk, group discussion renders them even more risky. So the phenomenon is better termed group polarization (Moscovici & Zavalloni, 1969). Apparently the shift, in either direction, reflects social comparison (Myers, ref.). The shift, whether towards risk or safety, occurs only when there is already a fair amount of consensus on the decision, and the group discussion makes this consensus clear. Apparently people start out by underestimating the degree to which other people share their beliefs. When they find out to the contrary, they try to distinguish themselves from the rest by adopting an even more extreme position. Everybody else does the same thing, of course, so that the average response of the group becomes even more extreme. This influence of group norms is not the only operative factor, but it dramatically illustrates the effect that the situation has on behavior.
[Add something here on majority and minority effects in jury decision- making based on Hastie.]
Altruism and Aggression
Prosocial behavior provides another illustration of situational influence. Consider the case of Kitty Genovese, a woman who was accosted as she returned home from work after midnight, and was beaten and stabbed repeatedly for more than half an hour. Despite her pleas for help, none of the 38 neighbors who are known to have heard her cries went to assist her or even so much as notified the police. People refuse to become involved in such incidents for a variety of reasons. For example, they may be deterred by the potential risk to their own safety, or they may not be prepared to meet the demands of the emergency. Somewhat paradoxically, however, the presence of others can inhibit helping behavior. Latane and Darley (1970) contrived a situation in which an emergency arose: as subjects filled out a questionnaire in a laboratory room, smoke suddenly poured in through an air vent. When subjects were seated alone in the room, 75% reported the incident to an authority within five minutes; when seated with two others, however, the corresponding figure was 25%. In a similar experiment involving danger to others rather than to oneself, Latane and Rodin (1969) observed the reactions of subjects to a staged incident in which a confederate fell, injured her ankle, and cried for help. When subjects were waiting alone, 70% moved to help; when two strangers waited together, the figure dropped to 40%; when confederates went out of their way to ignore the "victim", the rate of helping dropped to less than 10%. In still another experiment, Darley and Batson (1973) arranged for seminary students to encounter an emergency on their way to an appointment. When the subjects were early, more than 60% stopped to help; when they were late, helping dropped to 10%. The importance of the appointment also affects helping behavior (Batson, Cochran, Biederman, Blosser, Ryan, & Vogt, 1978). So does the sex of the victim, her attractiveness, race, and similarity to the bystander in both appearance and attitudes.
While group members often inhibit helping, they can also facilitate it. People are more likely to help a stranded motorist, contribute to a charity, or donate blood if they have just seen someone else do so (Bryan & Test, 1977; Rushton & Campbell, 1977). But the presence of a group and the characteristics of the victim are not the only influences on helping behavior. For every Kitty Genovese who goes unaided, there may be a Raoul Wallenberg, the Swedish diplomat who risked -- and very likely lost -- his own life in an effort to rescue hundreds of Hungarian Jews from Nazi persecution during World War II. It matters, for example, whether the situation is unambiguous. When the victim is blind there is more helping than when he is drunk, but the number of people in the group of bystanders has no effect on helping behavior (Piliavin, Rodin, & Piliavin, 1969). Moreover, when the bystanders are able to examine each other's reactions, there is more helping than when the members of a group are isolated from each other. Of course, these are situational influences too, if of a different sort, and that is the point.
Finally, consider the case of aggression and other forms of antisocial behavior. There are, obviously, many instances in which aggression is reinforced. The robber who injures others so that he may more easily steal their valuables needs an economic rather than a psychological explanation. So, too, do those who destroy property in urban riots engendered by systematic violations of human and civil rights. At a psychological level, aggression frequently appears as a response to a variety of conditions, including pain and frustration. In the case of shock-induced aggression, animals that receive noncontingent aversive stimuli such as free shocks will often suppress any ongoing operant behavior (Estes & Skinner, 1941), but they will also attack their cage-mates, other conspecifics, and even inanimate objects (Azrin, Hutchinson, & Hake, 1967; Azrin, Hutchinson, & Sallery, 1964). This form of aggression does not diminish over repeated trials, suggesting that it might be reflexive in nature. It is easy to think of human cases where an annoyance also leads to anger and attack. In a similar vein, certain schedules of reinforcement, particularly extinction and fixed-ratio (where the organism must respond at a steady, high rate in order to get rewarded), seem to induce aggression. The instigator here appears to be frustration rather than annoyance. In fact, Dollard, Miller, and their colleagues (Dollard, Doob, Miller, Mowrer, & Sears, 1939) long ago proposed a sweeping law relating frustration to aggression: aggression is always a consequence of frustration, in the sense that every act of aggression is instigated by frustration, and every frustration produces a tendency toward aggression. This early statement of the frustration-aggression hypothesis proved to be much too strong, and has had to be modified (Berkowitz, 1962, 1969, 1978).
While very intense punishment or frustration almost invariably produces aggression, under more moderate circumstances the response is moderated by other factors. Studies of humans, for example, show that aggression is more likely following arbitrary or capricious frustration than following a denial of rewards accompanied by a plausible justification or excuse. Moreover, frustration is more likely to induce aggression in children who have previously been exposed to adults displaying aggressive behavior than in those who have not. In fact, modeling can induce aggression even in the absence of pain or frustration (Bandura, 1973). Moreover, exposure to environmental cues related to aggression -- guns in laboratory rooms, murder and mayhem on television and in the cinema -- appears to increase the likelihood of aggression and other antisocial behaviors in observers (Berkowitz, 1983). As was the case with frustration, cue-induced aggression is substantially modified by other factors such as the interpretation given to the cue, and of course the opportunity to aggress. The point, however, remains: aggression is not simply a reflexive response to certain types of stimuli, but rather is affected by a host of environmental factors.
The list could go on and on. Several decades of experimental social psychology have demonstrated the effects of situational variables on attraction, prejudice and other beliefs, cooperation and competition, dominance and subordination, and the like. Each of these actions -- conformity, risk-taking, altruism, and aggression -- is associated with trait adjectives that denote stable individual differences in social behavior. Yet the sorts of studies just reviewed -- and there are dozens of other domains that yield comparable results -- indicate that the environment also exercises a degree of control over these behaviors. Whether environmental influences are more, or less, powerful than dispositional ones is not the issue at present. The point is that the situation does affect behavior. And as it happens, few if any social psychologists would admit to favoring the kind of S-R behaviorism promulgated by Skinner. But taken in the context of those studies reviewed in Part II, which show that dispositional effects are weaker than the trait concept would lead us to expect, these experiments appear to offer powerful evidence favoring a situationist point of view.
The Behaviorist Analysis
The problems that troubled the field of personality in the 1950s and 1970s are reminiscent of those that beset the whole of psychology at the turn of the century. At that time, scientific psychology was dominated by structuralism, an approach advocated by Wilhelm Wundt at the University of Leipzig. The job of psychology, according to the structuralists, was to analyze mental processes into their component parts. The analogy was to chemistry, so that complex mental processes such as perception, memory, and thought were held to be compounds built up of sensations, images, and feelings. These elements could be studied by introspection, in which carefully trained observers reported on their experiences when presented with various stimuli or performing various tasks. These assumptions and techniques led to a morass of problems similar to those which would later confront personality psychologists. Different laboratories obtained contradictory results when studying stimuli or tasks that were ostensibly the same. And even within the same laboratory, different observers gave irreconcilable reports. The upshot was that the structuralists failed to reach any sort of consensus as to how many elementary sensations, images, and feelings there were, much less how best they might be described. When no reliable conclusions could be drawn from its methods, the infant science appeared to be stillborn.
In the midst of this crisis, a radical solution was proposed by John B. Watson in his Psychology from the Standpoint of a Behaviorist, which appeared in 1919. As we have already noted, he argued that psychology, if it is to be a true science, must be based only on what is publicly observable and verifiable: physical stimuli in the environment, and the organism's muscular and glandular responses to them. He further proposed that all references to percepts, memories, thoughts, and emotions be abandoned, and that these mentalistic constructs be reconceptualized in terms of implicit or covert stimuli and responses, joined into an associative chain linking explicit stimuli at one end with overt responses at the other. The result was a thoroughly mechanistic approach to thought and action, again following a kind of chemical model in which the atomic particles are stimuli and responses rather than mental events. Most research within the behaviorist framework was performed on animals, where it was possible to exercise tight control over the organism's history of environmental stimulation. The cognitive processes of humans were deemed to be irrelevant; and psychology, which had been defined by William James (1890, p. 1) as "the science of mental life", was now redefined as "a science of human behavior" (Skinner, 1953, p. 1). Its emphasis was on psychophysics -- the relationship between the physical properties of a stimulus and the accompanying sensory experience -- and especially on learning -- the formation and loss of associations between stimuli and responses as a result of experience. Two forms of learning were acknowledged, which have come to be known as classical and instrumental conditioning.
One fundamental form of learning was discovered accidentally by a Russian physiologist, Ivan P. Pavlov. He was studying the digestive system, work that was eventually to win him the Nobel Prize, employing dogs as subjects. His procedure was to introduce dry meat powder into the dog's mouth, and record the operation of the salivary reflex. Initially the dogs salivated only when the powder was presented to them. Soon, however, the dogs began to salivate before the powder was presented: first at the sight of the meat powder, later at the sight of the experimenter, and even later at the sound of the experimenter outside the laboratory room. In other words, the dogs were salivating to events associated with food, as well as to food itself. Apparently, events that previously did not have the power to evoke reflexes had acquired the ability to do so. Pavlov characterized these reflexes as "psychic", as opposed to physiological, because the idea of the stimulus evokes the response. He began the deliberate study of psychic reflexes in about 1899, and recognition of the priority of his discovery and research is preserved in the name we give to this phenomenon: classical conditioning.
The basic vocabulary of classical conditioning consists of four terms. An unconditioned stimulus (US) is any stimulus that reliably evokes a response in the absence of any learning experiences; an unconditioned response (UR) is the response to a US, and it is usually an innate physiological reflex. By contrast, a conditioned stimulus (CS) is one that does not initially evoke any special response by itself; a conditioned response (CR) is the response evoked by the CS after many pairings with the US, and usually resembles the UR in some way (Figure 10.1).
A description of the major phenomena of classical conditioning begins with acquisition, the process by which the CS acquires the power to evoke a CR. Pavlov thought that the mechanism for this was the repeated pairing of the CS with some US, a process known as reinforcement. The strength of the CR can be measured either by its magnitude (in Pavlov's case, for example, the number of drops of saliva) or by the conditional probability that a CR will follow presentation of the CS -- p(CR/CS). In the typical case, the CR builds strength slowly. Extinction is the process by which a CS loses its previous power to evoke a CR, after repeated presentation of the CS in the absence of the US. In contrast to acquisition, extinction typically occurs quite rapidly. If the organism is allowed a period of inactivity after extinction has occurred, spontaneous recovery may be observed: presentation of the CS again evokes the CR, and the magnitude of the CR increases with the length of the post-extinction interval. If reinforced presentations of the CS are resumed after extinction, the CS regains the power to evoke the CR. Reacquisition typically proceeds faster than the original learning, a phenomenon known as savings in relearning. In simple extinction, unreinforced presentations of the CS are discontinued immediately after the CR disappears -- that is, reaches zero magnitude. In extinction below zero, extinction trials are continued for some time after the CS has lost the power to evoke the CR. In such an instance both spontaneous recovery and savings are diminished, although they still occur. These two phenomena indicate that extinction is not merely the passive loss of the CR; apparently, the CS-CR association is retained but actively inhibited.
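The time courses described above -- gradual acquisition, faster extinction -- can be illustrated with a simple linear-operator learning model. This sketch is our own illustration, not Pavlov's analysis: the learning rates and the asymptote of 1.0 are arbitrary assumptions chosen only to mimic the qualitative pattern.

```python
def trial(v, reinforced, rate_up=0.15, rate_down=0.35):
    """One conditioning trial: associative strength v moves a fixed
    fraction of the way toward its target -- 1.0 when the CS is paired
    with the US, 0.0 when the CS is presented alone.  The larger
    downward rate is an assumption meant to mirror the observation
    that extinction proceeds faster than acquisition."""
    target, rate = (1.0, rate_up) if reinforced else (0.0, rate_down)
    return v + rate * (target - v)

v = 0.0
acquisition = []                  # 20 reinforced CS-US pairings
for _ in range(20):
    v = trial(v, reinforced=True)
    acquisition.append(v)

extinction = []                   # 10 unreinforced CS presentations
for _ in range(10):
    v = trial(v, reinforced=False)
    extinction.append(v)
```

Plotting the two lists would show the familiar negatively accelerated growth curve followed by a rapid decay. Spontaneous recovery and savings, which imply that the association is inhibited rather than erased, would require a more elaborate model with separate excitatory and inhibitory terms.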
Once a CR has been established, a number of additional phenomena may occur (Figure 10.2). For example, the CR may be evoked by stimuli that are similar but not identical to the original CS, a process known as generalization. The magnitude of the CR depends on the degree of similarity, measured along some dimension, and the function relating response magnitude to similarity forms the generalization gradient. Discrimination learning provides a check on generalization. If some stimulus along the generalization gradient is reinforced by being paired with the US, the CR will be maintained; if another stimulus (a CS-) remains unreinforced, the generalized CR to that stimulus will extinguish. In the case of higher-order conditioning, a new neutral stimulus, CS2, can be paired repeatedly with an established conditioned stimulus, CS1, with the result that CS2 also acquires the power to evoke the CR. A similar extension of learning, known as sensory preconditioning, can occur even before the establishment of the original CR. In this case the two neutral stimuli, CS1 and CS2, are paired; then, after reinforced pairings of CS1 with some US, both CS1 and CS2 come to evoke a CR. By means of generalization, discrimination, higher-order conditioning, and sensory preconditioning, stimuli come to excite and inhibit CRs even though they may not have been directly associated with a US.
By means of classical conditioning processes, then, reflexive responses come under the control of environmental events. The phenomena of classical conditioning are ubiquitous, occurring in organisms as simple as sea slugs (Kandel, ref.) and as complex as adult human beings. Pavlov felt that all learning represented classical conditioning. Even then this view was considered too extreme, though many theorists of a behavioral persuasion agreed that the laws of classical conditioning were the laws of acquired motivation, and in particular governed our emotional lives. Just a little reflection reveals how classical conditioning may be involved in many of our joys and fears, preferences and aversions. As we will see in chapter XX, affect is somewhat more complicated than classical conditioning would suggest. Even so, it is clear that something like classical conditioning is important in our everyday lives.
At about the same time as Pavlov was beginning to study classical conditioning in Russia, the American psychologist Edward L. Thorndike initiated the investigation of another form of learning. His apparatus was a cage with a door that was rigged to a latch that could be operated from inside. The initial response of animals confined to these "puzzle boxes" was agitation -- particularly so if the animal was a hungry cat, and food was placed outside the cage. Eventually the animal would accidentally trip the latch and escape through the open door. Over successive trials, Thorndike observed that the latency of the response -- the time it took the animal to escape, once confined -- decreased. He argued that this form of learning was motivated by rewards and punishments: if a response was rewarded, it would occur more reliably and more readily; if it went unrewarded or was actually punished, it would drop out of the animal's repertoire. Thorndike's research led him to formulate a set of eight laws of learning, of which two are of primary importance for our present purposes. According to the Law of Exercise, the association between stimulus and response was strengthened by practice, and weakened by disuse. According to the Law of Effect, responses to a stimulus which were rewarded were strengthened, while unrewarded responses were weakened. These laws were challenged, revised, and sometimes discarded over the succeeding years, but the fundamental principle remained intact: adaptive behavior is learned through the experience of success and failure. Because the organism actively operates on the environment (in contrast to classical conditioning, where the environment operates on the organism), and because the organism's responses are effective in achieving some desirable state of affairs, this form of learning is known as operant or instrumental conditioning.
Beginning in the 1930s, B.F. Skinner took up the close study of instrumental conditioning. He refined Thorndike's apparatus into what has become universally known (to Skinner's dismay) as the Skinner Box. This is a chamber containing a variety of means for presenting CSs and reinforcements, and for making CRs. In the simplest case, a pigeon can obtain food by pecking at a lighted key. In Phase I, the organism is placed in the Skinner box and allowed to move freely. Occasionally, it will peck at the key, which is not yet wired to the hopper. This yields a base rate of the CR in the absence of reinforcement. In Phase II, the key is connected to the apparatus for delivering food, so that after every peck a pellet drops into the hopper. This results in an increase in response rate over the baseline. Finally, in Phase III the hopper is disconnected once again, and performance of the CR reverts to baseline.
Skinner (1948) presented a powerful demonstration of the Law of Effect in his superstition experiment. Pigeons were placed in individual chambers where they immediately began to do whatever it is that pigeons do in such circumstances. Every 15 seconds, a food pellet was dropped into the hopper, regardless of what the bird did. Despite the fact that there was no actual connection between any response and reinforcement, each animal developed a different but characteristic pattern of behavior: one flapped its wings, another hopped around the chamber, and another swung its head from side to side. Skinner argued that for each animal, the behavior had been emitted just before the onset of the first reinforcement; thus strengthened, it began to occur more frequently, increasing the likelihood that it would occur again just prior to a subsequent reinforcement. Successive coincidental pairings of this sort led to the persistent performance of the new response. The point is that according to the Law of Effect, a response is strengthened anytime it is followed by reward, even if the connection between the two is wholly illusory.
The basic vocabulary of instrumental conditioning is adapted from that of classical conditioning. A reinforcement is any environmental stimulus which increases the probability of a response. Positive reinforcers are presented following the response (as in appetitive conditioning, where an organism learns a response in order to obtain food), while negative reinforcers are removed following the response (as in aversive conditioning, where an organism learns a response in order to escape foot shock). In instrumental conditioning, the CR is the behavior which is strengthened by reinforcement; strength is measured in terms of either response rates or conditional probabilities. Similarly, the CS is the stimulus which leads to the performance of the CR. Sometimes, the CS is just the experimental situation itself; in other cases, there is a special signal indicating whether a particular CR will be reinforced. However, there is no reference to the concepts of US and UR, because the behaviors in question are not physiological reflexes.
The major phenomena of instrumental conditioning also parallel those of classical conditioning: acquisition of a response via reinforcement; extinction when reinforcement is terminated; generalization of a response to new stimuli along a gradient of similarity; and discrimination learning, where the organism learns to perform the CR only in the presence of particular stimuli. In addition, there is a new concept: the schedule of reinforcement, representing the contingent relationship between the emission of a response and the delivery of reinforcement (Figure 10.). In continuous reinforcement, the reinforcer is presented after every CR; in partial reinforcement, by contrast, the probability of reinforcement, given that the CR has occurred, is less than 1 (but still greater than 0). Partial reinforcement retards acquisition, but increases resistance to extinction. There are also more complicated arrangements of intermittent reinforcement. In fixed-ratio (FR) schedules, reinforcement is delivered only after the organism has made a predetermined number of CRs: for example, in an FR 5 schedule, an animal might get food after every fifth bar press. In fixed-interval (FI) schedules, the reinforcement is delivered after the first CR which follows a predetermined interval of time since the last reinforced CR; in an FI 2 schedule, for example, an organism can make as many CRs as it wants, but no food will be given for 2 minutes. There are also variable-ratio (VR) and variable-interval (VI) schedules, in which the schedule of reinforcement varies somewhat from trial to trial, but averages the same ratio or interval. Each of these schedules produces its own characteristic pattern of responding (see Figure 9 X). FR and VR schedules are typically associated with high response rates; interestingly, in FR schedules the organism tends to pause after every reinforcement, but this is not observed in VR schedules.
FI schedules produce a scallop-shaped curve, so that response rate is low immediately following reinforcement and gradually increases as the end of the interval approaches. Like VR schedules, VI schedules produce very high, stable rates of responding. Two other common schedules involve the differential reinforcement of low rates (DRL) and the differential reinforcement of high rates (DRH). In these situations, reinforcement is delivered only if there is a long (for DRL) or a brief (for DRH) interval between successive CRs. Other schedules of reinforcement represent variations and combinations of these basic patterns.
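The contingencies defining the two fixed schedules can be made concrete in a short sketch. The class names and the clock/counter mechanics here are illustrative assumptions for exposition; real operant chambers implement these contingencies in apparatus, not code:

```python
# Sketch of the FR and FI contingencies described in the text.

class FixedRatio:
    """FR n: deliver reinforcement after every n-th response
    (e.g., FR 5 -> food after every fifth bar press)."""
    def __init__(self, n):
        self.n, self.count = n, 0

    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True       # reinforcer delivered
        return False

class FixedInterval:
    """FI t: reinforce the first response made at least t seconds
    after the last reinforced response; earlier responses earn nothing."""
    def __init__(self, t):
        self.t, self.last = t, 0.0

    def respond(self, now):
        if now - self.last >= self.t:
            self.last = now
            return True
        return False

fr5 = FixedRatio(5)
print([fr5.respond() for _ in range(10)].count(True))   # 2 reinforcers in 10 presses

fi2 = FixedInterval(120)                 # FI 2 minutes, expressed in seconds
times = [30, 60, 125, 130, 250]          # seconds at which responses occur
print([fi2.respond(t) for t in times])   # [False, False, True, False, True]
```

Note how the FI object ignores every response until the interval has elapsed, which is exactly why, as the text observes, responding right after a reinforcement is wasted effort and the scallop-shaped curve emerges.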
By means of instrumental conditioning procedures in general, and schedules of reinforcement in particular, voluntary responses come under the control of environmental events. The phenomena of instrumental conditioning, like those of the classical case, are ubiquitous. Thorndike, Skinner, and others argued that most adaptive behaviors -- even language (Skinner ref) -- were instances of instrumental conditioning. Again, this claim is too extreme. Nevertheless, the laws of instrumental conditioning do appear to account for much of the acquisition, maintenance, and loss of both adaptive and maladaptive voluntary behavior. Instrumental conditioning seems to lie at the heart of much of our habitual behavior, as well as behaviors performed under various incentive conditions.
Classical and Instrumental Conditioning Combined
Table 10.1 presents a comparison of classical and instrumental conditioning. The boundaries are admittedly not as firm as they are presented here (Hilgard). For example, in biofeedback training, in which electronic equipment is used to monitor the functioning of the internal organs, it appears possible to achieve some degree of voluntary control over reflexive responses. This point is controversial (Orne & Watson, etc.), and to discuss the matter thoroughly would take us beyond the scope of this book. For present purposes, we will be satisfied with the conclusion that classical and instrumental conditioning represent two somewhat different forms of learning. Nevertheless, most examples of learning appear to involve combinations of classical and instrumental conditioning.
Consider, for example, the case of secondary reinforcement. While so far we have confined our discussion to such reinforcers as food and water, stimuli can acquire the power to control behavior even though they do not, themselves, meet any obvious biological needs. These previously neutral stimuli become reinforcing by virtue of having been repeatedly paired with some primary reinforcer. Thus, rats trained to run a maze to reach food in a white goal box will continue to run to a white box, at least for a while, even though it is empty. In this way, a reinforcer useful in instrumental conditioning is established via a classical conditioning procedure.
Another example can be seen in avoidance learning (Rescorla & Solomon, ref.). In a typical experiment, an organism is presented with a signal such as a tone, followed by an aversive stimulus such as foot shock. In contrast to classical conditioning, however, the animal is permitted to escape and avoid the shock. Early in training, of course, the animal is quite agitated, and this releases a variety of behavior. If one of these is followed by termination of shock (a negative reinforcer), the probability of that response will be increased. Over successive trials, response latency decreases until the animal escapes virtually as soon as the shock is presented. Then, the animal begins to make the response before shock is due, thus preventing its occurrence altogether. Gradually the latency of this avoidance response diminishes, until it is reliably made immediately after presentation of the signal.
Once established, avoidance responses are notoriously difficult to extinguish. However, avoidance poses a problem for instrumental conditioning theory because it is never actually followed by a negative reinforcer, and thus should not be strengthened. Something else must be involved. The classic solution is two-process learning theory (Mowrer, 1947), which argues that through classical conditioning an anticipatory fear response comes to be associated with the tone. While escape is reinforced by termination of the shock, avoidance is reinforced by termination of the tone, and thus of the fear CR which has become attached to it. Fear reduction, then, serves as the negative reinforcer for avoidance.
Punishment presents a somewhat similar puzzle (Solomon, ref.). Recall that in the vocabulary of instrumental conditioning, a positive reinforcer is one whose delivery increases the probability of a response, while a negative reinforcer is one whose termination increases response probability. Punishment, however, involves the delivery of a stimulus that ordinarily would be negatively reinforcing, an event which decreases the probability that some behavior will occur. In a typical experiment, some instrumental response will be shaped up via positive reinforcement, such as food; once established, that response will also be followed by an aversive stimulus, such as shock. Under these conditions the instrumental response will be suppressed. Suppression is greatest when the punishment is intense, delivered immediately after the target behavior, and signaled in advance. The punished behavior will not, however, be eliminated from the organism's repertoire. Performance of the punished behavior will be determined by the relative costs and benefits, and even lower animals are quite ingenious in finding ways to reduce the impact of punishment (see Figure 8).
Again, two-process theory appears to offer an explanation of these effects. Through classical conditioning, anticipatory fear comes to be associated with the stimulus properties of the punished response. Thus, suppression of the response leads to escape from fear (a negative reinforcer). As the escape response moves further back in time, the punished behavior appears to disappear entirely.
[Get a reference to Mineka's work in here.]
Conditioning in Human Animals
Although most research within the behaviorist movement focused on animals, these investigators were not at all reluctant to generalize the principles of behavior to humans. Behavior is behavior, after all, and these principles were held to possess wide generalizability across species. Nor, upon occasion, were the behaviorists reluctant to study humans with behavioral techniques developed in laboratory studies of lower animals. The first such attempt was by Watson himself, and is now known as the case of Little Albert (Watson & Rayner, 1920; Samelson, 1980).
Albert B. was a 9-month-old infant, the child of a hospital aide, when he was brought into the laboratory. Extensive testing with a variety of objects, including a live rat, rabbit, dog, and monkeys, masks with hair, cotton wool, and wooden blocks showed that none of them elicited a fear response from the child. In fact, the child appears to have been remarkably placid, and had never been observed in a state of fear or rage. Among the stimuli tested, the only one which was effective in this regard was the loud, ringing noise created when a steel bar was struck by a hammer. At the age of 11 months, conditioning trials were initiated in which the baby was placed on a laboratory table and the bar was struck, from a place outside his field of vision, whenever he touched the rat. After only two such trials, and an interval of one week, Albert hesitated to touch the rat when before he had been eager to do so. After five more trials, "the instant the rat was shown the baby began to cry. Almost instantly he turned sharply to the left, fell over on one side, raised himself on all fours and began to crawl away so rapidly that he was caught with difficulty before reaching the edge of the table" (Watson & Rayner, 1920, p. 3). Subsequent testing revealed that Albert also showed signs of fear in the presence of a rabbit and a dog, as well as a sealskin coat, a Santa Claus mask, and cotton wadding. At one point, Watson "put his head down to see if Albert would play with his hair. Albert was completely negative" (p. 7). Interestingly, when two other observers (who presumably had not been present during the initial testing and conditioning trials) followed suit, Albert "immediately began to play with his hair" (p. 7). No such fear response was given when Albert was presented with the wooden blocks.
Clearly Albert had acquired a highly generalized, though still discriminating, fear of the rat and other furry objects on the basis of his experience (it is not known whether the rat acquired a fear of Albert). The broader implication is also clear: at least in principle, humans can acquire emotional responses through classical conditioning.
More systematic research by later generations of experimenters was consistent with these conclusions. [describe these classics] The results of these experiments, when taken to the extreme, suggested to many behaviorists that individual differences in personality were the product of individual differences in learning history -- that a person's abilities, interests, desires, preferences, and aversions were all a product of his or her idiosyncratic history of reinforcement contingencies. To the extent that all individuals were subject to the same contingencies, they were all alike; to the extent that their learning histories differed, they were unique. An early expression of this point of view may be found in Watson's (ref.) dictum:
Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in, and I'll guarantee to take any one at random and train him to become any type of specialist I might select -- doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.
And, somewhat later, by Thorndike (1931, p. XX):
If I attempt to analyze a man's mind, I find connections of various strength between (a) situations, elements of situations, and compounds of situations, and (b) responses, readiness to respond, facilitations, inhibitions, directions of response. If all these could be completely inventoried, telling what the man would think and do and what would satisfy and annoy him, in every conceivable situation, it seems to me that nothing would be left out.
Watson never had the opportunity to perform his experiment, though after leaving psychology for a lucrative career in advertising he did say that it was the only problem that would lure him to return to the field (Samelson, 1980). Nor did Thorndike ever get to test his hypothesis. However, such statements paved the way for an analysis of personality in strict behavioral terms, with an emphasis on learning.
One such analysis was discussed in Chapter 9: Dollard and Miller's (1950) attempt to combine the psychoanalytic approach of Freud with the behavioristic approach of Hull (1943). The Dollard-Miller theory was enormously influential, but its reliance on hypothetical internal drive states made it unpalatable to behaviorists of a more radical bent, such as B.F. Skinner (1944, 1950). Skinner, returning to Watson's first principles, argued strongly that a proper explanation of behavior could not rest on hypothesized mental or biological states, because these states themselves required explanation in terms of the external conditions that gave rise to them. Although many behaviorists embraced a version of S-R theory in which mental associations formed a bridge between external stimuli and the responses elicited by them, Skinner did not. If a functional analysis revealed lawful relations between behavior and environmental variables, then internal associative links were unnecessary.
The Doctrine of Situationism
Situationism did not begin with Skinner: Allport (1937) makes reference to the doctrine in his discussion of evidence such as that of Hartshorne and May (1928) on consistency. And strictly speaking, Skinner is not a personality theorist. He made his academic reputation as an investigator of animal learning, but he was also interested in the wider view, and so consistently applied behavioral tenets to the analysis of human behavior, as in his analysis of language in Verbal Behavior (1957), and of human society, as in his utopian novel Walden Two (1948), Science and Human Behavior (1953), and Beyond Freedom and Dignity (1971). His view, put concisely, is that whatever we may believe to be the case, our behavior is controlled by variables which exist in the world outside ourselves: the stimuli in the immediate environment, and our individual histories of previous encounters with similar circumstances.
Skinner (1953) makes his intentions clear by equating the relation between cause and effect with the relationship between the independent variables manipulated by experimenters and the dependent variables measured by them. How manipulations of independent variables lead to changes in dependent variables is unknown: behavioral science can track only the functional relationships between them. Such laws relating inputs to outputs are the basis of prediction and control, and are sufficient explanations of behavior. Obviously, Skinner recognizes that certain neural events precede observable behavior, and he does not assert that these are wholly uninteresting. However, he notes that these events are themselves ultimately preceded by events that take place outside the organism -- which therefore comprise the ultimate cause of behavior. Because external events can be controlled now, and neural events may never be controlled, behavioral analysis should focus on the former rather than the latter. "The objection to inner states is not that they do not exist, but that they are not relevant in a functional analysis. We cannot account for the behavior of any system while staying wholly inside it; eventually we must turn to forces operating upon the organism from without" (p. 35).
Drives, States, and Traits
Skinner (1953) applied similar considerations to his analyses of motivational drives, emotional states, and behavioral traits. In the case of drives and states, he argues that the causal chains have three links: the behavior, the preceding internal event(s), and the external event(s) that preceded them. Causal analysis is not complete until the first is functionally related to the last; and when this is accomplished, the second is unnecessary, excess baggage. Drives are induced by environmental conditions that prevent the organism from engaging in certain behaviors -- thirst, drinking; hunger, eating -- and are reduced by conditions that permit them. When such behaviors are unconstrained, Skinner notes that they show periodic increases and decreases in the rate at which they are performed. When a behavior is prevented from occurring, the probability is increased that the behavior will occur on some subsequent occasion when the constraints are lifted; and in the meantime, the organism will engage in other, related behaviors which are not prevented. Emotional behaviors do not show periodicity, but they can be elicited by particular environmental conditions. "A sudden loud noise often induces 'fear'. Continual physical restraint or other interference with behavior may generate 'rage'. Failure to receive an accustomed reinforcement is a special case of restraint which generates a kind of rage called 'frustration'... The condition which the layman calls loneliness... appears to be a mild form of frustration due to the interruption of an established sequence of responses which have been positively reinforced by the social environment" (pp. 164-165). A similar analysis is offered for traits. While not denying the fact of individual differences, Skinner denies conventional accounts of their nature: "Some differences are due to the differences in the independent variables [i.e., environmental conditions] to which people are exposed.
Although we may be struck by the effect on behavior, the original individuality lies outside the organism" (pp. 195-196). While admitting some role for hereditary and developmental factors, Skinner argues that individual differences in behavior are caused by individual differences in deprivation and reinforcement histories. Are there, then, any individual differences which may properly be called traits, as true properties of the person? Skinner lists two: differences in the ability to perceive discriminative stimuli present in the environment, and differences in the rate at which behavior responds to changing circumstances. In both cases, however, the variables ultimately controlling behavior are extrinsic rather than intrinsic to the organism. Traits, like motives and emotions, are effects rather than causes.
It is important to note that Skinner's objections to such intrapsychic determinants of behavior are pragmatic as well as conceptual. Throughout his career, Skinner has not been as interested in explaining behavior as in controlling it. A piece of behavior is understood, from his point of view, only when it has been built up from nothing by application of particular reinforcement schedules, shaping procedures, response chaining, discriminative stimuli, conditioned reinforcement, and the like; and then, once conditioned, reduced to nothing again by means of extinction. The problem with traits and similar constructs, as with brain processes, lies not so much in their status as effects rather than causes, as in the fact that they are not amenable to direct manipulation and control. Therefore, Skinner prefers to focus his attention on those variables which can be manipulated, which demonstrably enter into lawful relationships with behavior, and which thus form the basis for a technology of behavioral engineering and control.
The Locus of Control
Prediction is the test of any scientific theory, but Skinner emphasizes the possibility, and desirability, of exerting external control over behavior. But what about self-control? It is common, in ordinary discourse as in the technical language of conventional theories, to think of men and women as making deliberate choices, as determining for themselves what opinions they will hold or what course of action they will take -- in short, of people as controlling themselves. In fact, as we will note later, the assumption of self-control and self-determination lies at the very heart of the psychometric and psychodynamic views of personality discussed in Parts II and III of this book. For Skinner, the notion of self-control reflects an analysis of human behavior that is at best incomplete, at worst illusory: "When a man controls himself ... he is behaving.... His behavior in so doing is a proper object of analysis, and eventually it must be accounted for with variables lying outside the individual himself" (1953, pp. 228-229). A variety of self-control strategies are available. Some make a particular response impossible, or eliminate the occasion for it: physical restraint, moving out of the situation in which the behavior takes place, eliminating such a situation, and suicide. Other techniques involve changing the stimulus to which the behavior is a response: extinction and satiation, avoiding temptation, and removing stimuli that elicit undesirable emotional responses may serve as examples. Comparable techniques may be used to increase rather than decrease certain forms of behavior: supplying physical aids in the form of equipment, presenting stimuli that elicit particular behaviors, and depriving oneself, for instance. Skinner finds the concepts of self-reinforcement, self-extinction, and self-punishment quite difficult.
In part, this is because self-control is not really control: if a student promises to treat herself to a movie if she finishes her term paper, there is nothing to prevent her from dropping her schoolwork at any moment. Thus, Skinner questions whether self-control is as effective in changing behavior as control imposed from the outside. In the final analysis, Skinner argues, self-control succeeds, when it does, only because the environment reinforces such behavior.
The topic of "self-control" naturally raises the concept of the "self", which Skinner understands to be closely related to the concept of personality. "The self is most commonly used as a hypothetical cause of action.... The same facts are commonly expressed in terms of 'personalities'" (1953, pp. 283-284). As might be anticipated, Skinner dismisses both concepts as explanatory fictions, and ultimately misleading. "So long as external variables go unnoticed or are ignored, their function is assigned to an originating agent within the organism. If we cannot show what is responsible for a man's behavior, we say that he himself is responsible for it" (p. 283). Skinner proposes that such concepts are merely a shorthand for a unified system of responses -- that is, a set of responses that tend to occur together, in response to some set of stimuli. These response systems come and go like any single response -- with changes in discriminative stimuli, deprivation and satiation, emotional stimuli, and psychoactive drugs. In any event, the responses are elicited by environmental stimuli, and causal responsibility is properly located there. Behavior is consistent so long as the environment is consistent, and changes when stimulus conditions change. Conflict occurs when variables eliciting incompatible behaviors are simultaneously present in the environment. Self-reports of the reasons for behavior have no privileged status. When they are accurate, an outside observer could have said the same thing. And in any event, such reports are themselves under the control of environmental variables, as when a psychoanalyst reinforces his patient's discovery of a long-hidden Oedipus Complex. As with behavioral consistency, an ostensible fact supporting the concept of self-control, Skinner apparently believes that reports of self-knowledge are greatly exaggerated.
Repression occurs, in this analysis, because the failure to observe or remember certain forms of punished behavior is reinforcing. One remains aware of symbols of repressed behaviors, however, because these are responses that attain reinforcement while avoiding punishment.
What applies to the behavior of individuals applies with equal force to the behavior of individuals in groups. Just as he rejects all sorts of intrapsychic determinants of behavior, so Skinner rejects the notion of determination by "social forces". He points out that an important class of reinforcements includes those which require the presence and activity of other people. And there are social stimuli as well as social reinforcements: such stimuli become important when contingencies of social reinforcement become associated with them. A smile has no meaning except in terms of the consequences of our behavior in response to it: it signifies friendliness because approach rather than avoidance has been reinforced. For this reason, the meaning of a particular social stimulus can vary widely from culture to culture. Skinner's analysis of leadership (1953, p. 306) is revealing: the followers are controlled by the behavior of the leader, while the leader is controlled by variables in the environment. Moreover, the behavior of the leader is also controlled by the behavior of the followers: one can only lead where others will follow. Many social episodes are controlled in this reciprocal fashion: Skinner's analysis of a simple request for a cigarette (pp. 307-308) involves no fewer than four interchanges between two people and their environments.
Skinner's emphasis on control is unpalatable to many readers because it contradicts a common belief that people have free will which allows them to act autonomously, directing their own behavior regardless of situational demands and constraints. But for Skinner, "we all control, and we are all controlled" (1953, p. 438). Control by the environment is a fact of life which we deny at our peril; and to refuse to accept the responsibility for controlling others is to place that control in other hands. From Skinner's point of view, there is little to fear in such a state of affairs: abuses of control will eventually correct themselves, because they will not achieve their desired ends. Here again Skinner reveals his pragmatic approach: "...although we may object to slavery...because it is incompatible with our conception of the dignity of man, an alternative consideration in the design of culture might be that slavery reduces the effectiveness of those who are enslaved and has serious effects upon other members of the group" (1953, pp. 444-445). In addition, despotic control (as, for example, by an oppressive government) can be countered by other social institutions (such as business, the churches, and the press), and by individuals who have been taught to understand their own value. (Here, of course, Skinner slips into a quandary, because it is just such institutions and self-concepts that totalitarian regimes forbid to form.) Still, Skinner holds on to his control theme.
The hypothesis that man is not free is essential to the application of the scientific method to the study of human behavior. The free inner man who is held responsible for the behavior of the external biological organism is only a prescientific substitute for the kinds of causes which are discovered in the course of a scientific analysis. All these alternative causes lie outside the individual. The biological substratum itself is determined by prior events in a genetic process. Other important events are found in the nonsocial environment and in the culture of the individual in the broadest possible sense. These are the things which make the individual behave as he does. For them he is not responsible, and for them it is useless to praise or blame him. It does not matter that the individual may take it upon himself to control the variables of which his own behavior is a function or, in a broader sense, to engage in the design of his own culture. He does this only because he is the product of a culture which generates self-control or cultural design as a mode of behavior. The environment determines the individual even when he alters that environment (1953, pp. 447-448).
In this passage, and elsewhere in his more recent interpretive writings (1971, 1974) and his three volumes of autobiography (19xx, 19xx, 1983), Skinner has restated his position almost without alteration. In so doing, he has set forth what might be called the Doctrine of Situationism. This is the point of view that the important causal factors in behavior, including social behavior in natural environments, reside in the external environment rather than in the organism itself (Bowers, 1973; Harré & Secord, 1972). Behaviors are acquired, maintained in the repertoire, elicited or emitted on any particular occasion, and extinguished depending on the contingencies of stimulation and reinforcement. Individual differences in behavior simply represent variations in reinforcement history. The doctrine assumes that a satisfactory explanation of the causes of behavior is provided by a description of the environmental conditions associated with it -- in other words, a functional relationship between stimulus and response. Finally, it asserts that the cause of a particular response is to be identified with the stimuli and reinforcements of which it is a function. From an experimental point of view, therefore, the causes of behavior, measured by some dependent variable, are to be found in the independent variables manipulated by the experimenter. In the natural world, these independent variables have their counterparts in environmental events that are not susceptible to direct control.
The Influence of Behavior Theory
Skinner was not the first situationist, and while he is arguably the most visible and diligent proponent of the situational point of view, there have been others as well. Skinner predicated his analysis of human personality on data from the animal learning laboratory, bolstered with anecdotes and speculations on the human case. At about the same time, however, other psychologists -- inspired by Hull, Skinner, and others -- began to apply behaviorist principles to the understanding and treatment of the symptoms and syndromes of psychopathology (for reviews see Wilson & Franks, 1982; Yates, 1970, 198x). Systematic desensitization and implosion therapy, two procedures derived from classical conditioning, proved successful in the treatment of phobias, obsessions and compulsions, and other anxiety-based disorders; aversive classical conditioning apparently resulted in the reversal of unwanted patterns of alcohol and drug abuse, and undesired sexual preferences; and instrumental conditioning procedures led to dramatic improvement in the ability of schizophrenics and other chronic mental patients to take care of themselves, and to move out of the back ward and into the community.
The success of these and other behavioral interventions showed that such difficulties could be controlled by environmental events, and supported the notion that they had their origins in their victims' histories of environmental stimulation and reinforcement. Logically, of course, a theory of etiology is not proved by the success of a particular treatment: aphasics can learn to use language again, but their speech difficulties reflect brain damage rather than forgetting. Moreover, the pragmatic concerns of the behavior therapists often overwhelmed their theoretical concerns: the origins of mental illness did not matter so much as the success of behavioral treatments in coping with it. Nevertheless, what seemed to work for psychopathology was often applied to normal personality.
Such approaches to personality were largely presented in an informal manner, however, and no theorists have lent their names to the Doctrine of Situationism as strongly as Allport, Guilford, Cattell, Eysenck, and others lent theirs to the Doctrine of Traits, or as strongly as Freud, Murray, and McClelland advocated the Doctrine of Motives. In part, the reason may lie in the old army adage, "If it ain't broke, don't fix it". The behaviorists, as pragmatists, were concerned with practical application, and so they were not very interested in the adaptive behavior that is characteristic of "normal" personality and adjustment. Nothing needs changing in normal individuals, so attention was focused where behavior modification seemed necessary and desirable.
Behavior Therapy and Behavior Modification
A point of view more compatible with behaviorist tenets emerged within the behavior therapy movement which developed within clinical psychology after World War II. Prior to this time, under the influence of both psychoanalysis and organic medicine, mental illnesses were considered to be caused by factors residing in the affected individuals themselves -- an unresolved Oedipus complex or fixation at the anal stage, perhaps; alternatively, an unfortunate genetic endowment or a hormonal abnormality. While the behavior therapists acknowledged the importance of biological factors in some cases -- the organic brain syndromes, schizophrenia, and certain forms of depression are obvious examples (Davison & Neale, ref.) -- other disorders appeared to have their origins in environmental events (as in Little Albert's phobia for furry objects); and even those disorders whose origins were in biological abnormalities appeared to be substantially modifiable through application of learning theory and other behavioral principles.
Interestingly, Yates (1970) shows that many of the reasons for the emergence of this movement parallel the sources of dissatisfaction with psychometric and psychodynamic approaches to normal personality. For example, mental illness -- like traits, motives, and defenses -- was held to reside within the individual as a permanent disposition. However, diagnostic classification, as an exercise in typology, was notoriously unreliable (Cantor & Genero, 1983; Zubin, 1967), and did not lead to valid predictions of actual behavior. The term "behavior therapy" itself was coined by Skinner and his associates (Lindsley, Skinner, & Solomon, 1953), and was also picked up by others working within the tradition of Hullian S-R theory (Eysenck, 1959; Lazarus, 1958). Other influences have been noted by Kramer (1971, 1982).
Systematic Desensitization and Flooding
Although behavior therapy had antecedents reaching back to Pavlov's dogs and Watson's Little Albert, its history probably begins with Mowrer and Mowrer (1938) and Wolpe (1958). The Mowrers employed a variant of classical conditioning to treat enuresis in young children (see Figure 10.5). Before this time, bed-wetting was generally regarded as an organic disease which could be cured by drugs, diets, and a variety of extremely invasive physical procedures. The Mowrers' alternative was to have the child sleep on a pad wired so that the first drops of urine close a circuit and activate a bell which awakens the child. Eventually the child awakens to the feeling of a distended bladder, before any micturition occurs. Later, normal maturation leads to an increase in sphincter control, and the conditioning technique is no longer needed. The procedure can also be conceptualized in terms of avoidance learning (Lovibond, 1963). In any event, the technique is very effective, and illustrates the behavioral approach to abnormal psychological functioning.
Even more dramatic, and certainly more influential, was Wolpe's (1958) introduction of a behavioral treatment for phobias and other anxiety-related disorders. Since the time of Freud, such problems had been thought to be deeply rooted in defenses against sexual and aggressive impulses (Fenichel, 1945). The behaviorists, however, had other ideas. In fact, Wolpe and Rachman (1960), inspired by Watson and Rayner (1919), offered a highly plausible alternative account, based on fear conditioning, for one of Freud's own phobia cases. Wolpe's treatment involved setting up a hierarchy of feared objects and situations. Beginning with the least anxiety-provoking of these, the patient imagines the stimulus while performing relaxation exercises. After anxiety is successfully reduced at one level, the patient proceeds to the next, until the entire hierarchy has been traversed. This technique was called systematic desensitization, and seemed to involve a kind of counterconditioning in which the S-R bond between the phobic stimulus and the anxiety response is weakened, and a new one is established between the stimulus and some response that is incompatible with anxiety. As it happens, none of the components of classic systematic desensitization -- relaxation and working progressively through the hierarchy -- is necessary for fear reduction. In fact, a technique known as flooding is just as effective with specific phobias, and more so with obsessions, compulsions, and nonspecific fears such as agoraphobia. In this procedure, the patient is forcibly exposed to the most fear-evoking stimulus, and confined until the anxiety disappears. (Precautions must be taken, of course, to ensure that no harmful effects accrue to the patient during exposure, and that the patient is not released until the fear has completely subsided; otherwise the treatment will make the disorder worse.)
Although the mechanisms of these procedures are not completely understood, their common end product appears to be the extinction of fear.
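The desensitization procedure described above is essentially an algorithm: work up the fear hierarchy from least to most frightening, staying at each level until anxiety subsides. A minimal sketch follows; the stimuli, anxiety scores, and threshold are hypothetical illustrations, not figures from the text.

```python
# Illustrative sketch of the systematic-desensitization loop.
# All numbers and stimulus names below are hypothetical.

def desensitize(hierarchy, imagine, threshold=20):
    """Traverse the hierarchy, advancing only when anxiety at the
    current level has dropped to the threshold or below."""
    for stimulus in hierarchy:              # least anxiety-provoking first
        while imagine(stimulus) > threshold:
            pass                            # repeat imagining + relaxation
    return "hierarchy traversed"

# Simulated patient: each imagining (with relaxation) lowers anxiety.
levels = {"photo of spider": 60, "spider in jar": 80, "holding spider": 95}

def imagine(stimulus):
    levels[stimulus] = max(0, levels[stimulus] - 25)
    return levels[stimulus]

result = desensitize(list(levels), imagine)
print(result)
```

The loop makes the counterconditioning claim concrete: nothing in the procedure refers to the origin of the fear, only to the current anxiety response and its gradual reduction.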
Punishment and the Token Economy
Instrumental as well as classical conditioning procedures have been employed in the treatment of psychopathology. Punishment, intensely and consistently applied, has proved useful in controlling a wide variety of antisocial and undesirable activities. For example, Tyler and Brown (1967) showed that the frequency of misbehavior by institutionalized teenage boys dropped off markedly when it was followed by isolation from the group ("time out"), but not when followed by verbal reprimand. Similarly, alcohol abuse has been treated with aversive conditioning procedures including emetic drugs (which yield nausea and vomiting when they interact with alcohol) and even paralysis of the respiratory system induced by injections of curare. [Something here on smoking, emphasizing Baker's work.] In the technique known as covert sensitization, the patient vividly imagines the unpleasant consequences of engaging in the proscribed activity (Cautela, 1966); the treatment is surprisingly effective.
Following Skinner's emphasis on the use of positive reinforcement as opposed to punishment, many behavior therapists opted for instrumental conditioning techniques, as exemplified by the token economy (Ayllon & Azrin, 1968). This procedure, initially employed with chronic schizophrenics and other "back ward" psychiatric patients, involved offering tokens, contingent on the performance of certain target behaviors, that could be exchanged for desirable commodities. Previously, such patients had been deemed untreatable, and were confined to a therapeutic regimen consisting largely of medication and custodial care. Under such circumstances, it is not surprising that the behavior of these individuals deteriorated even more, in a sort of vicious cycle. The institution of contingent secondary reinforcers, however, resulted in dramatic improvements in the ability of these people to dress, groom, and feed themselves, to engage in various group endeavors, and to diminish their levels of aggression and other undesirable activities. In many cases, the patients moved out of the hospital and returned to the community. Token economies function like real economies: the pattern of earning, saving, and spending tokens is a complex function of wages and prices, and the economies that have been explored range widely from capitalism to socialism (Winkler, 1971).
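The contingency at the heart of the token economy can be sketched in a few lines. The behaviors, wages, and prices below are hypothetical illustrations, not figures from the studies cited.

```python
# Minimal sketch of a token economy: tokens delivered contingent on
# target behaviors, exchangeable for commodities. All particulars
# (behaviors, wages, prices) are hypothetical.

class TokenEconomy:
    def __init__(self, wages, prices):
        self.wages = wages        # tokens earned per target behavior
        self.prices = prices      # tokens charged per commodity
        self.balance = 0

    def perform(self, behavior):
        # Reinforcement is contingent: non-target behaviors earn nothing.
        self.balance += self.wages.get(behavior, 0)

    def exchange(self, commodity):
        # Tokens act as secondary reinforcers, traded for primary ones.
        price = self.prices[commodity]
        if self.balance >= price:
            self.balance -= price
            return commodity
        return None

ward = TokenEconomy(wages={"dressing": 2, "grooming": 1},
                    prices={"cigarettes": 3})
ward.perform("dressing")
ward.perform("grooming")
ward.perform("shouting")              # not a target behavior: no tokens
print(ward.exchange("cigarettes"))
```

Note that the "wages" and "prices" are exactly the parameters Winkler's economic analysis manipulates: changing either alters the pattern of earning, saving, and spending.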
The word "ability" in the preceding paragraph is controversial and quite possibly misleading. From a behaviorist point of view, these individuals possessed the abilities in question all along, but simply had not been reinforced for performing them. When the reinforcement contingencies were reinstated, the behaviors were performed once again. Findings such as these are apparently inconsistent with the notion that some intrapsychic factor such as mental illness prevented these patients from engaging in adaptive behaviors. Rather, control over them appeared to reside in environmental reinforcement contingencies. In the hands of some radical theorists, the success of behavior therapy led to a theory of etiology in environmental terms as well. Thus, for example, depression was sometimes characterized as caused by the loss of previously effective reinforcers (Ferster, ref.; Lazarus, 1968). It was, of course, only a small step from a behavioral analysis of psychopathology (Ullman & Krasner, 1969) to a behavioral analysis of personality (Krasner & Ullman, 1973). That is, the factors determining individual differences in behavior, within the normal range as well as outside it, were to be found in the discriminative stimuli and reinforcers present in the environment -- the Doctrine of Situationism.
Critique of Behaviorism
After this point of view was enunciated by Watson, behaviorism swept psychology like a fresh breeze. It freed psychology from the quagmire created by the assumptions of structuralism and the method of introspection. It fostered the development of empirical laws which permitted the prediction and control of behavior from knowledge of current and prior environmental conditions. The methods of functional behaviorism -- shaping, reinforcement, and the like -- were successful in getting nonhuman animals to engage in a variety of complicated behaviors seemingly requiring human intelligence (see Skinner, ref.; Kendler & Kendler, ref.). And these same methods were also successfully applied to humans, both normal and abnormal, in order to change their behaviors. Understanding how people can change also forms a basis for understanding why people do what they do in the first place. In the face of the apparent failure of the psychometric and psychodynamic perspectives, it is not particularly surprising that a behaviorist, situationist account of personality and individual differences seemed so appealing. The behaviorist account of personality, both normal and abnormal, had other positive features as well. By drawing attention to the potential for change over and above stability and consistency, behaviorism promoted a vision of individuals who were not trapped in rigid patterns of behavior determined once and for all by their genetic endowment and child-rearing. Rather, the vision was of people capable of responding flexibly to changing circumstances. These implications were largely responsible for Skinner's selection as the 19xx "Humanist of the Year" by the American Humanist Association -- ironically, given his consistent assertion that there is no essential difference between human and nonhuman animals.
Still, by placing all control in the hands of the environment, as it were, Skinner and the behaviorists may have substituted one tragic view of human nature for another -- a view of people buffeted about by environmental forces, like leaves in a strong wind. For lurking behind the Doctrine of Situationism is the concept of the empty organism: the idea that behavior can be analyzed, and understood, solely in terms of the stimuli to which it is a response. Nothing need be known about the internal workings of the organism, with respect to either biological or cognitive structures and processes. Skinner, in fact, described the organism as a "black box" intervening between stimuli and responses, and argued that we need not look inside it -- indeed, that we should not look inside, because doing so would distract our attention from the environmental variables of utmost importance. It should be noted that the behaviorists were not entirely unanimous on this point. Skinner was following Watson, but Hull, Mowrer, Spence, Dollard and Miller, Eysenck, and other neobehaviorists explicitly acknowledged the importance of considering arousal (drive) levels and other intraorganismic factors in understanding both human and nonhuman behavior (Eysenck, 1980, 1982; Spence & Kendler, 1971). But Hull-Spence learning theory has its own problems (e.g., Gleitman, Nachmias, & Neisser, 1954), and does not really serve to motivate a rejection of the radical behaviorism of Watson and Skinner. Rather, the reasons must be found elsewhere. As it happens, a wide variety of experiments indicate that the organism is full rather than empty, and that we are obligated to look inside the "black box" if we are to understand what it is that organisms do.
Biological Constraints on Learning
There is ample evidence for certain stimulus-response associations that appear to be prewired in organisms. Rather than being acquired through learning, they appear to be part of our biological endowment. For example, humans -- like most other organisms -- show reflexive behaviors in response to certain stimuli: a knee-jerk to a tap on the patellar tendon, blinking when a puff of air is blown into the eye, sneezing when an irritant is introduced into the nose, righting when turned upside down. Human infants show a rooting reflex when their cheeks are stroked lightly: they open their mouths, turn toward the object, and -- if they make contact with it -- close their mouths around it and begin to suck (Figure 10.6A). Infants show reflexes of withdrawal as well as approach: for example, when a bitter-tasting substance is placed into the mouth, the lips pucker and the face grimaces (Figure 10.6B). Reflex actions are automatic: they always occur in response to an effective stimulus, and they appear the first time that the stimulus is applied, even in newborns, without any opportunity for learning.
The existence of reflexes does not, of course, contradict the behaviorist account of action. After all, Pavlov showed how reflexive activities could be brought under environmental control; and Skinner acknowledged classical conditioning as a primitive form of learning. One reflex observed in the human infant, however, is particularly instructive. If the child is held upright and moved along the floor, it will make coordinated stepping motions with its legs; similarly, when its toes meet the risers of a stairway, it will lift its legs as if to climb (Figure 10.6C). This stepping reflex suggests that children do not learn to walk. They already have this behavior in their repertoire, but walking must wait until the development of the skeletal musculature allows the child to support itself.
Another class of actions poses a greater challenge. These are the instincts or fixed action patterns: complex, stereotyped, and rigidly organized patterns of behavior that are inborn, unlearned, and universal within a given species. For example, consider food-begging in herring-gull chicks (Tinbergen, ref.). The hungry chick pecks at an adult's bill, whereupon the adult regurgitates partially digested food which the chick can eat. The pecking is directed at a dark spot on the lower mandible of the adult's bill: when there is no contrasting spot, or the spot is placed elsewhere, pecking is diminished (Figure 10.7). The pecking requires no learning, and occurs even in the absence of reinforcement. It is like a reflex, except that it is more discriminating and involves several muscle groups rather than just one.
Instincts can be explicitly social in nature. In the case of imprinting, a duckling or gosling will follow the first moving object that it sees within a particular period of time immediately after hatching (Lorenz, ref.). Usually, of course, this is its mother; however, if the object is a wooden decoy, a human, or even a block of wood, imprinting will still occur (Figure 10.8). The young animal will follow the imprinted object even under adverse circumstances, and will emit distress calls when the object is removed from its environment. Some instincts require the careful coordination of two individuals. For example, the male stickleback, when it is ready to mate, displays a red belly (Figure 10.9). It establishes a territory by fighting off other fish -- but only males who also display a red belly and who enter the territory in a "head-down" fighting posture. After clearing his territory of rivals, the male builds a nest and entices a female into it -- but, again, only a female with a swollen abdomen, who enters the territory in a "head-up" receptive posture. The female will enter the nest, but only if a red-bellied male performs a zig-zag "dance" in the water; then she will spawn, but only if stimulated in her hind parts. After the female leaves, the male fertilizes the eggs, and cares for them and the young. Red bellies, swollen abdomens, threat and receptive postures, dances, and stimulation are the critical features: the fish will go through these mating rituals even if their opposite numbers are only wooden models.
Instinctual behavior, which is species-specific and universal within a species, appears to be governed by innate releasing mechanisms. Red bellies, swollen abdomens, threat and receptive postures, and dances are the critical stimuli for mating behavior in the stickleback; for imprinting, it is motion; for food-begging, it is a contrasting spot on the bill. No learning is involved, although the behavior can be progressively refined -- much as a child "learning" to walk becomes steadier and more coordinated. The behavior is elicited the first time that the releaser is presented, and continues even when the object does not respond -- as in the case of imprinting on decoys or blocks that cannot serve as a source of nurturance and protection. To say that the infant follows its mother because it is reinforcing for it to do so -- which is about all that a radical behaviorist can say about imprinting -- is to beg the question.
Preparedness, Autoshaping, and Species-Specific Defenses
Of course, it would take relatively little modification of behaviorist doctrine to account for instincts. For example, Skinner has explicitly stated that behavior is shaped by the organism's genetic history -- including the evolutionary history of the species of which it is a member as well as the genetic endowment the individual receives from its parents. But behavior goes beyond reflexes and instincts, the behaviorists would assert, and it is here that the laws of learning take over. One such law is that of association by contiguity: when two events occur together in time, a link is formed between them. Another important principle, discussed earlier, is the law of effect: responses are strengthened when they are followed by reinforcement. Learning is passive, in that associations are formed simply by virtue of events occurring outside the control of the organism; and it is arbitrary, in that associations can be formed between any two co-occurring events.
Consider now some experiments on bait-shyness in animals. Animals who eat poisoned food and survive will subsequently avoid it: they seem to form associations between certain attributes of the food and subsequent illness. One remarkable feature of this phenomenon is that the conditioning occurs even over very long delays between the CS and US: while the usual interstimulus interval is on the order of one second, it may take minutes or hours for the effects of the poison to be felt. More important for our present purposes, associations are formed between poisoning and some aspects of the eating situation but not others. Garcia and Koelling (1966) fed rats "bright, noisy, sweet water": the water was flavored with saccharine, and the animal drank from a tube in the presence of a bright, flashing light and a clicking noise. Afterwards, some animals received a foot shock, while others received a dose of x-rays sufficient to induce nausea. Not surprisingly, all the animals subsequently showed an aversion to the drinking tube. When the three elements of the compound stimulus were tested separately, however, the rats who had received foot shock drank sweet water, but not unflavored water presented along with lights and noise; in contrast, the rats who had received x-rays avoided the sweet water, but not the bright, noisy, unflavored water. A later experiment with quail showed the opposite results (Wilcoxon, Dragoin, & Kral, 1971). The birds drank water that had been colored blue and flavored sour, and then were irradiated. These birds drank sour colorless water, but not blue flavorless water. The rats associated nausea with the gustatory, but not the visual and auditory, properties of food; the birds, with the visual but not the gustatory properties -- even though the contiguities were the same for all stimuli. Thus, apparently, learning is non-arbitrary: some events are more readily associated than others.
Similar problems crop up in instrumental conditioning situations, and in classical-instrumental combinations. For example, pigeons can easily be trained to peck a key when key-pecking is reinforced by food. However, pigeons will also peck at a key that is illuminated just before food is presented, even though there is no connection between pecking and food. This procedure, known as autoshaping (Brown & Jenkins, 1968), resembles the superstition experiment described earlier, except that all birds end up pecking, rather than each engaging in its own idiosyncratic behavior. Autoshaping appears to be an instance of classical conditioning in which the light serves as a CS, evoking a pecking CR which resembles the pecking UR that the pigeon gives to a food US. So Skinner's choice of key-pecking as the response to be reinforced by food in instrumental conditioning experiments, while arbitrary from his point of view, was fortuitous in that key-pecking is easily associated with food. More to the point, when a key-peck results in the omission of reinforcement, pigeons do not cease pecking (Williams & Williams, 1969). Key-pecking can be controlled by some contingencies of reinforcement, but not others. A dramatic demonstration of the nonarbitrariness of instrumental conditioning was provided by Shettleworth (1975), who focused on six behaviors emitted by hamsters at roughly equivalent baseline rates. For different animals, one of these was reinforced by food. The result was that three responses increased greatly with reinforcement, while the others hardly changed at all. Interestingly, the behaviors that were successfully shaped were those which increased as the animal became hungry in the baseline-recording session.
As it happens, while pigeons easily learn to peck a key in order to obtain food, it is very difficult for them to learn to peck in order to avoid shock. Avoidance itself poses no difficulty, however, because these birds can easily learn to hop or flap their wings in such situations. Similarly, rats and other rodents quickly learn to jump or run to escape or avoid shock, but take longer to learn to press a lever. Bolles (1970, 1972) took evidence such as this as support for a notion of species-specific defense reactions (SSDRs). According to this argument, each species has a set of innate maneuvers by which it can defend itself against predation and aggression. By and large, these are variants on "fright, flight, and fight": freezing, running, flying, scratching, and gnawing. According to Bolles, each species will first run through its repertoire of SSDRs: if one proves effective, then its frequency will be increased. Only if none of these works will the frequency of non-SSDRs increase. Thus, in the case of SSDRs, reinforcement is not really shaping a new response and adding it to the animal's repertoire. Rather, the response is already in the animal's repertoire, and is elicited only so long as it is effective in escaping or avoiding aversive events.
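Bolles's argument amounts to a simple priority scheme: the innate defensive repertoire is tried before anything else. A sketch follows; the response lists and the escape test are hypothetical illustrations, not from the experiments cited.

```python
# Illustrative sketch of Bolles's SSDR argument: innate defensive
# reactions are tried first, and responses outside that repertoire
# are reached only if every SSDR fails. The response names and the
# `escapes` predicate are hypothetical.

def defend(ssdrs, other_responses, escapes):
    """Return the first response that escapes the aversive event,
    trying the innate SSDRs before any other behavior."""
    for response in ssdrs + other_responses:   # SSDRs get priority
        if escapes(response):
            return response
    return None                                # nothing worked

rat_ssdrs = ["freeze", "run", "jump"]
other = ["press lever"]

# Shock escapable only by lever-pressing is acquired slowly, on this
# account, because the response is reached only after the SSDRs fail.
print(defend(rat_ssdrs, other, lambda r: r == "press lever"))
```

The ordering captures why rats learn to run or jump quickly but press a lever slowly: reinforcement selects among responses the animal was going to emit anyway.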
Demonstrations of the non-arbitrary nature of associations in learning, whether in classical or instrumental conditioning situations, have led to the formulation of the concept of preparedness (Rozin & Kalat, 1971; Seligman, 1970). The argument is that, by virtue of its evolutionary history, each species possesses, as part of its genetic endowment, a predisposition to form associations between some stimuli but not others. The former associations are classified as prepared. Other associations are unprepared, in that they are formed and maintained only with considerable difficulty; still others are contraprepared, meaning that they cannot be formed at all -- or, if they happen to be formed, they are unreliable. Highly prepared associations are acquired even under degraded conditions, in a few trials, and even with long interstimulus intervals; and they are difficult to extinguish. In order to prevent preparedness from being defined in a circular manner, it can also be said that prepared associations have a clear evolutionary basis, while unprepared and contraprepared associations seem somehow inappropriate or unnatural.
Cognitive Factors in Learning
While the experiments on preparedness force us to be wary of the arbitrariness assumption, the principle of association by contiguity is undermined by other experiments. Consider, first, some variants on classical conditioning. In the standard Pavlovian situation, known as standard pairing, the CS is reliably followed by the US, and the two terminate at the same time. Especially when the inter-stimulus interval is rather short, conditioning proceeds quite rapidly. Conditioning is also satisfactory under delay conditions, where the interval between CS onset and US onset is long, and under trace conditioning, where CS offset occurs before onset of the US (Figure 10.x). But in simultaneous conditioning, where the CS and US are presented at precisely the same time, conditioning is poor; and in backwards conditioning, where the US precedes the CS rather than following it, no conditioning occurs at all (Figure 10.x). In delay and trace conditioning, and especially in cases of highly prepared associations such as bait-shyness, there is ample conditioning even though the events are not contiguous in time. In simultaneous and backwards conditioning, contiguity is preserved, but little or no conditioning occurs. Something else must be going on.
Predictability, Surprise, and Conditioning
A classic experiment by Rescorla (1967) showed that contingency is more important than contiguity in conditioning. Rescorla varied the predictability of the US, given the CS (Figure 10.x). In Panel A, the CS is always immediately followed by the US; stated in mathematical terms, p(US|CS) = 1.0. Moreover, the US never occurs unless it has been immediately preceded by the CS: p(US|no CS) = 0.0. This is standard pairing, and it yields good conditioning. In Panel B, the number of CSs has been doubled, and none of the added CSs is followed by a US. Mathematically, p(US|CS) = 0.5; but p(US|no CS) = 0.0 still, and conditioning remains adequate. In Panel C, the number of USs has also been doubled, and none of the added USs is preceded by a CS. Mathematically, p(US|CS) and p(US|no CS) are now equal, and no conditioning occurs even though the CSs and USs often co-occur. The point, of course, is that conditioning is not simply a matter of forming associations between contiguous events. It appears that the organism literally computes the conditional probability of the US given the CS, and shows conditioning only when the CS predicts the US. In simultaneous conditioning there is no predictability, because the two stimuli occur at precisely the same time; in backwards conditioning, the CS actually predicts that the US will not follow.
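Rescorla's contingency analysis can be made concrete with a short computational sketch. The trial tallies below are hypothetical (the source reports no trial counts), and the function is ours, not Rescorla's; it simply estimates p(US|CS) and p(US|no CS) from a record of trials, mimicking the three panels described above:

```python
def contingency(trials):
    """Each trial is a (cs_present, us_present) pair of booleans.
    Returns the pair (p_us_given_cs, p_us_given_no_cs)."""
    with_cs = [us for cs, us in trials if cs]
    without_cs = [us for cs, us in trials if not cs]
    p_given_cs = sum(with_cs) / len(with_cs) if with_cs else 0.0
    p_given_no_cs = sum(without_cs) / len(without_cs) if without_cs else 0.0
    return p_given_cs, p_given_no_cs

# Panel A: every CS is followed by the US; the US never occurs alone.
panel_a = [(True, True)] * 10
# Panel B: CSs doubled, half of them unpaired; still no US without a CS.
panel_b = [(True, True)] * 10 + [(True, False)] * 10
# Panel C: USs also doubled; the US is as likely without the CS as with it.
panel_c = ([(True, True)] * 10 + [(True, False)] * 10 +
           [(False, True)] * 10 + [(False, False)] * 10)

for name, trials in [("A", panel_a), ("B", panel_b), ("C", panel_c)]:
    p1, p0 = contingency(trials)
    print(f"Panel {name}: p(US|CS)={p1:.1f}, p(US|no CS)={p0:.1f}")
```

Panels A and B both show a positive difference between the two probabilities, and both support conditioning; in Panel C the difference vanishes, and so does conditioning, even though CS-US pairings are just as frequent as in Panel B.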
Prediction alone is not the whole story, however, as shown in a classic experiment on blocking by Kamin (1968). In the first phase of the experiment, rats were conditioned to bar-press for food. After a stable level of responding was acquired, the animals also received foot-shock USs signaled by a CS consisting of a tone or a light. The unconditioned response to foot-shock in such a situation is suppression of instrumental behavior, and this suppression comes to be associated with the CS as well -- what is known as the conditioned emotional response (CER; Estes & Skinner, 1945). After the CER had been established, Kamin added a second element, creating a compound CS consisting of both tone and light. After further trials, in which the animals continued to give the CER, he tested each element of the compound individually. When this was done, he found that while the CER was given to the original CS, no conditioning had accrued to the new element. This was in contrast to what happened when the CER was established to the compound stimulus from the outset: in that circumstance, conditioning occurred to both elements equally. Moreover, conditioning to the added element does occur when the compound is associated with a change in the US, as for example an increase in shock intensity. Kamin concluded that conditioning occurs only when the US surprises the organism. Then the organism searches for possible predictors, conditions to them, and ignores the others. In the situation where the compound CS was introduced after the CER had already been established, the new element (e.g., the light) was redundant with the old one (e.g., the tone), and provided no extra information concerning the occurrence of the US.
A somewhat similar effect is observed in avoidance learning and punishment. According to the two-process theory of Mowrer (1947), the instrumental avoidance behavior is motivated by classically conditioned fear. The problem with this view is that animals in such situations do not show clear signs of fear. Granted, fear is apparent on early trials of avoidance learning, as manifested both in overt behavior and in the activity of the autonomic nervous system; but on later trials the animals appear almost nonchalant as they go about the business of responding. Moreover, the two-process theory of avoidance confronts us with a paradox. If fear is conditioned to the signal by virtue of its association with shock, then successful avoidance, which eliminates shock, should lead to the extinction of conditioned fear, and thus to the extinction of avoidance. Yet avoidance responding is notoriously difficult to extinguish. If there is no conditioned fear, why does avoidance persist? Seligman and Johnston (1973) offered the interpretation that the organism acquires an expectancy that its behavior will prevent the aversive event from occurring. This expectancy is repeatedly confirmed by the fact that the animal is not getting shocked. Avoidance will therefore be hard to extinguish unless the animal is forced to test reality. If it is prevented from making its customary avoidance response while the shock is turned off, the animal will initially show a return of fear. Eventually the fear subsides, and the animal no longer makes the avoidance response even when given the opportunity to do so. This response prevention is, in effect, a kind of "flooding" therapy for obsessional dogs.
Predictability and Controllability
Apparently what the animal learns in avoidance situations is a set of expectancies: that shock is predicted by the tone, and that shock can be controlled by its own behavior. Now consider a variant on avoidance learning in which the animal is first exposed to classical fear conditioning. The dog is placed in a harness, or the rat in an operant chamber, and a tone CS is reliably followed by a shock US. After conditioning has occurred, the animal is placed in an avoidance situation where the tone is still followed by shock, but the animal can make an avoidance response during the CS-US interval. The prediction of two-process theory is clear: because of the extra fear attached to the CS during classical conditioning, avoidance responses should be even more vigorous, and more rapidly acquired, than usual. In fact, however, acquisition of avoidance responses is retarded rather than hastened. Dogs, for example, will sit, whimpering, and passively accept escapable and avoidable shock. Even if they should inadvertently make a response that leads to escape or avoidance, they do not appear to pick up on the contingency. Avoidance can be established only with difficulty, by forcing the animal to make the effective response. This phenomenon has been named learned helplessness (Seligman, Maier, & Solomon, 1971; Maier & Seligman, 1976). By virtue of the repeated classical conditioning trials, in which the shock is both inescapable and unavoidable, the animals apparently acquire a negative expectation -- that there is nothing to be done about aversive events. Interestingly, prior experience with control over environmental events seems to immunize the animal against the deleterious effects of later situations in which events are temporarily uncontrollable.
Analyzed in these terms, classical and instrumental conditioning appear to have to do, respectively, with the organism's attempts to predict and to control environmental events (Mineka & Kihlstrom, 1978). There are two relevant contingencies in classical conditioning, as noted earlier: p(US|CS) and p(US|no CS). The values of these two parameters can be varied independently from 0 to 1, and as the two values approach each other the environment becomes increasingly unpredictable. In the limiting case, p(US|CS) = p(US|no CS), and the CS gives the organism no information about the occurrence of the US. A large body of literature indicates that animals and humans alike prefer predictable to unpredictable aversive events, and that unsignaled aversive events are more stressful than signaled ones. Animals and humans also seem to prefer immediate over delayed shock, perhaps because a restricted CS-US interval provides less opportunity for mis-estimating just when the shock will occur. "Learned irrelevance" occurs when prior experience with random presentations of CS and US retards acquisition of a CR at a later time, when these stimuli are paired (Mackintosh, 1973; Seligman, 1969). In a similar manner, the two crucial parameters in instrumental conditioning, p(reinforcement|response) and p(reinforcement|no response), vary independently from 0 to 1. As the two values approach each other, the environment becomes increasingly uncontrollable. An extensive literature on both humans and animals shows that controllable aversive events are less aversive, less disruptive, and less stressful than uncontrollable ones. The "learned helplessness" effects of uncontrollable events have already been described.
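The formal parallel between the two parameter pairs can be stated compactly. The following sketch is illustrative only (the function names and example values are ours): predictability and controllability each reduce to the gap between two conditional probabilities, and each vanishes as its pair converges:

```python
def predictability(p_us_given_cs, p_us_given_no_cs):
    """The CS is informative only insofar as the two probabilities differ."""
    return abs(p_us_given_cs - p_us_given_no_cs)

def controllability(p_reinf_given_resp, p_reinf_given_no_resp):
    """Responding matters only insofar as the two probabilities differ."""
    return abs(p_reinf_given_resp - p_reinf_given_no_resp)

# Standard pairing: a perfectly predictable environment.
print(predictability(1.0, 0.0))
# The truly random control: a completely unpredictable environment.
print(predictability(0.5, 0.5))
# Helplessness pretraining: the event arrives regardless of behavior.
print(controllability(0.5, 0.5))
```

The symmetry underscores the point made in the text: both notions are defined by contingency, not by contiguity.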
Interestingly, there are strong indications that uncontrollable appetitive events, while not strictly aversive, also induce passivity and helplessness. Of course, predictability and controllability are logically related to each other: although predictable events are not necessarily controllable, controllable events necessarily involve some element of predictability, at least over the offset of the stimulus. While some have argued that prediction is more important than control (e.g., Averill, 1973), others have argued that control is essential to feelings of competence and mastery (e.g., Seligman, 1975).
A history of uncontrollable aversive events, leading to learned helplessness, may be at the root of certain forms of depression observed in humans (Abramson, Seligman, & Teasdale, 1978; Seligman, 1975). Similarly, unpredictability seems to lie at the bottom of certain forms of free-floating anxiety. Interestingly, clinical studies indicate that anxiety and depression often go together, especially in neurotics (e.g., Derogatis, Klerman, & Lippman, 1972; Mendels & Weinstein, 1972). Certainly, unpredictable and uncontrollable aversive events are centrally involved in the "experimental neurosis" induced in laboratory animals, whose symptoms seem to resemble those of the neuroses seen in the psychological clinic (Mineka & Kihlstrom, 1978). In the present context it is most important to note that the parameters defining predictability and controllability have to do with contingency rather than contiguity.
The Role of Reinforcement
Finally, we take up the problem of the effects of reinforcement on learning. According to the radical behaviorists, reinforcement is crucial for learning. However, it turns out to be difficult to define reinforcement in a non-tautological fashion: a reinforcer is a stimulus that increases the probability of a response, and whenever a behavior increases in frequency, it is said to be because the behavior has been reinforced. Yet behaviorists often find it difficult to specify, in advance, what kind of event will reinforce a behavior. Nonetheless, once a reinforcer is found, its effects are held to be trans-situational: a reinforcer that is effective in increasing the probability of one response will also increase the probability of any other behavior that it follows (Meehl, 1950). This follows directly from the law of effect and the arbitrariness assumption. However, such a definition has proved unworkable. Premack (1959) suggested that all of an organism's behaviors can be rank-ordered in terms of the value which they have for it. If so, then an activity can reinforce only those activities whose value is lesser, not greater, than that of the reinforcing activity itself. The implication of the Premack principle is that reinforcers are defined in a relative rather than an absolute fashion. If three activities -- A, B, and C -- are listed in increasing order of preference, then C will reinforce both A and B, but B will reinforce only A. Of course, this contradicts the arbitrariness assumption, which holds that any response can be associated with any reward.
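The relational character of the Premack principle can be captured in a few lines. The activity labels and numerical values below are invented for illustration; all the principle requires is an ordering:

```python
# Hypothetical preference ranking: higher value = more preferred activity.
preference = {"A": 1, "B": 2, "C": 3}

def can_reinforce(reinforcer, target):
    """Under the Premack principle, an activity reinforces
    only those activities that it outranks in preference."""
    return preference[reinforcer] > preference[target]

print(can_reinforce("C", "A"))  # C reinforces A
print(can_reinforce("C", "B"))  # C reinforces B
print(can_reinforce("B", "A"))  # B reinforces A
print(can_reinforce("B", "C"))  # but B cannot reinforce C
```

Note that nothing here is a property of an activity taken alone; "reinforcer" names a relation between two activities, which is exactly why the trans-situational definition fails.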
Moreover, reinforcement is not necessary for learning to occur. Consider, for example, some experiments on maze-running in rats. In one experiment (Tolman & Honzik, 1930), two groups of rats were run in a maze for 10 trials. One group always received a reward of food in the goal box; the other group was never rewarded on these trials. Not surprisingly, the first group ran from start to finish much faster than the second. This makes sense if one thinks of food as reinforcing the responses involved in traversing the maze. However, on the eleventh trial the previously unrewarded group found food in the goal box. On the very next trial, their maze-running was indistinguishable from that of the group that had been reinforced all along. A second experiment also involved maze learning (Tolman & Gleitman, 1949). Turning in one direction led to a wide room with white walls, while turning in the other led to a narrow room with dark walls; although the goal boxes were distinctive, the animal found food in both of them. In the next phase of the experiment, the goal boxes were detached from the maze, and the rat was shocked in one of them. When the goal boxes were reattached in their original positions, the animals ran toward the one in which they had received food but not shock. Clearly, the rats had learned which direction in the maze led to each goal box. However, this learning could not have been mediated by reinforcement, because in the first phase there was no differential reinforcement associated with the two boxes. Apparently, in both cases, the animals were learning something else: their way through the maze. This knowledge was applied when it was useful in obtaining reward or relief, but it was originally acquired in the absence of reinforcement.
Other demonstrations that learning can occur in the absence of reinforcement are to be found in the literature on social learning theory (Bandura, 1977; Bandura & Walters, 1963; Mischel, 1968, 1973; Rotter, 1954), discussed more fully in Chapter X. Social learning theorists acknowledge that learning can occur through trial and error and the direct experience of response consequences. However, they argue that most of what humans do is acquired through modelling: knowledge is acquired either through precept (instruction) or example (observation of others). Consider a medical resident learning to do surgery. Surgical competence is most definitely not acquired through trial and error, with the death or survival of the patient reinforcing particular strokes of the scalpel. Instead, the resident reads books that describe the procedures (precept) and watches more experienced physicians perform them (example). The knowledge is acquired through observational learning before it is ever put into practice. Bandura's own research has convincingly documented the influence of observational learning on aggression. In a classic study (Bandura, Grusec, & Menlove, 1966), he had children watch a film of another child aggressing against a "Bobo the Clown" doll, followed by either positive, neutral, or negative consequences administered by an adult. Then the observers were given the opportunity to interact with the doll. Those who had witnessed reward showed the highest frequency of imitative aggression, while those who had witnessed punishment showed the lowest. Later, however, when the children from all the groups were offered a reward for imitating the aggression, the group differences were markedly reduced. Evidently aggressive behaviors can be acquired even though the observer does not actually perform the behavior being modeled, so that there is nothing to be reinforced. Of course, the observer does witness the consequences to the model of engaging in the behavior.
By taking others as models, we acquire information about objects and events in the world; we also learn to anticipate, and control, events and their outcomes.
Perhaps most troublesome for the law of effect is the finding that reinforcement can decrease the probability of the behavior that it follows. This paradoxical result was first obtained by Lepper, Greene, and Nisbett (1973), who rewarded preschoolers for doing something that they already liked to do: drawing pictures. Some children were promised, and given, an attractive toy as a reward for drawing a certain number of pictures; others were not. Later, when observed in their classrooms, the children who had been rewarded displayed less spontaneous (unrewarded) drawing activity than those who had never been reinforced.
[More here, on esp. Harackiewicz's work]
All of this research converges to indicate the major role played by expectations and judgments in learning and performance. Reinforcement, whether it comes in the form of the unconditioned stimulus in classical conditioning or the appetitive or aversive consequence in instrumental learning, provides information concerning the likely correlates and outcomes of particular events and actions; alternatively, it signals the organism that it has made a correct or incorrect response. Most of the research cited here has involved nonhuman subjects. The choice was deliberate: studies of humans that apparently contradict association by contiguity or the law of effect might well be rejected on a variety of grounds. For example, for obvious reasons the learning history of human subjects prior to entering the experiment cannot be controlled, leading them to perform differently than they would in a "pure" laboratory preparation. Alternatively, the introspections and self-reports of subjects might obscure the effects of contiguity and reinforcement -- or, as Skinner might have it, mislead investigators concerning the variables that are truly important in determining behavior. All the more compelling, therefore, is the evidence that the principles held so dear by the behaviorists cannot account even for the behavior of organisms that lack language, that presumably do not engage in very much introspective self-reflection, and whose genetic endowment and learning history can be completely controlled. When we turn specifically to the case of humans, the failure of these principles becomes even more apparent.
Certainly the most hard-hitting critiques of the behaviorist approach to psychology have been offered by the linguist Noam Chomsky (1959, 1972). While typically directed at Skinner's analysis of "verbal behavior", they actually have a much broader force, undercutting the entire behaviorist enterprise. For example, Chomsky attempts to show that such terms as stimulus, response, and reinforcement do not possess the scientific rigor that is ascribed to them. Because stimuli and responses can be identified only through a lawful (predictable, controllable) relationship between them, their definitions are circular. Consider, for example, the verbal behavior of someone who enters an environment containing a red chair. If she says "red", that is because "redness" was the stimulus; if instead she says "chair", then "chairness" was the stimulus. Analogously, if a shopper says "I don't like it" when shown a shirt, and "I don't like it" again when shown a pinstripe suit, then presumably his two responses are controlled by some stimulus property of unlikeableness shared by the stimuli in question. At a cocktail party, a woman sits down at the piano and plays songs from Guys and Dolls because she is reinforced for doing so -- even though she is not paid for her services, and even though nobody else pays any particular attention to her. Skinner often appeals to "net reinforcement" (over the long term), and to automatic self-reinforcement, as in the statement in Verbal Behavior that "the young child alone in the nursery may automatically reinforce his own exploratory verbal behavior when he produces sounds which he has heard in the speech of others".
Chomsky's criticism operates at two levels. First, he notes that terms like stimulus, response, and schedule of reinforcement have precise, non-circular, technical meanings in the context of the kind of experiment in which a rat presses a bar or a pigeon pecks a key in order to obtain food. When the topic turns to the behavior of humans in the real world, however, the terms lose their precision and objectivity. Quite often, we can identify the stimulus only after we hear the response. Moreover, in the case of humans at least, it proves difficult to define the unit of behavior whose degree of stimulus control is to be determined. If the stimulus is intense, for example, should the person respond with loud, high-pitched, rapid, or repetitive speech? Finally, we are often forced to infer the presence or absence of reinforcement from the fact that a response has occurred, or has failed to occur. Second, and rather paradoxically from the point of view of a radical behaviorist, such definitions of these terms as we are able to contrive in the human case force us to enter the mind of the individual perceiver-actor in order to determine what the stimulus is, how the response is connected to the stimulus, and what kinds of events he or she finds reinforcing. When the behaviorist says that the response "red" is elicited by the quality of redness in a red pencil, he is actually saying that the person perceives, and is paying attention to, a particular hue. When the person says "green" to the same stimulus after being asked to name the complementary color on the visible spectrum, it is not simply that the stimulus environment has changed; rather, the person remembers the instruction and knows the transformational rule.
Instead of saying that someone tells the truth, or passes the salt, or agitates for civil rights legislation in the South, or fails to show up as promised for a Saturday night date, because she is reinforced for doing so, it is much more precise to say that the person wants, or intends, or likes, or hates, to do such things. But of course, such references to hidden, unobservable mental structures and processes undercut the entire behaviorist enterprise, with its emphasis on describing the functional relations between observable stimuli and responses.
In other writings, Chomsky (1972) has criticized the view of human malleability that lies at the core of radical behaviorism's view of personality -- the view that people are shaped wholly by their genetic endowment and (much the more) by environmental forces. He begins by noting the potential for totalitarianism in Skinner's system for the design of a culture, since Skinner takes the survival of that culture -- which may easily be identified with the political state -- as being of greater value than the life, liberty, and happiness of the individuals within it. The fact that Skinner disallows punishment in his utopian society does not reduce the potential for totalitarianism, because rewards remain contingent on conformist behavior, and social deviants will remain in a state of deprivation. Nor is it enough simply to adjust the system so that the agents of social control are themselves subject to these contingencies, because that is true in any police state. Skinner does not favor totalitarianism, as Chomsky is quick to point out; but neither does he offer any way to prevent totalitarian control from developing. The antidote to totalitarian control by environmental stimuli takes the form of just those mental structures and processes whose existence, operation, and importance are denied by the behaviorists.
[Insert an argument, based on Chomsky and Nelson, that we respond to the meanings given to stimuli, and choose among possible responses given our goals and intentions; and that the generativity of language, as a tool of thought, gives humans the ability to think freely even when their behavior is constrained.]