Find an overview of concepts and categories in the General Psychology lecture supplements on Thinking, Reasoning, Problem-Solving, and Decision-Making.
Social perception is concerned with the ways in which we use stimulus information -- in the form of trait terms or more physical features of the stimulus -- to form mental representations -- impressions -- of people and situations. As we have already seen, person perception entails more than extracting information from a stimulus: the perceiver must combine information from the stimulus (including the background) with knowledge retrieved from memory. Much of this pre-existing knowledge comes in the form of implicit personality theory, but more broadly the act of perception is not completed until the new percept is related to the perceiver's pre-existing knowledge. Paraphrasing Jerome Bruner (1957), we can say that every act of perception is an act of categorization.
What Bruner actually said was:
"Perception involves an act of categorization.... The use of cues in inferring the categorial [sic] identity of a perceived object... is as much a feature of perception as the sensory stuff from which percepts are made."
Categorization combines knowledge of the stimulus with knowledge about the kind of object or event the stimulus is. This conceptual knowledge exists as part of semantic memory. In contrast with the autobiographical knowledge of specific events and experiences that comprises episodic memory, semantic memory holds abstract, context-free knowledge.
In the social-intelligence
view of personality (Cantor & Kihlstrom, 1987), social
categorization sorts persons, situations, and behaviors into equivalence
classes that are the basis for behavioral
consistency. People behave similarly in situations that
they perceive to be similar; and categorization is the basis
of perceptual similarity, because instances of a category are
broadly similar to each other.
Concepts and Categories
Technically speaking, categories exist in the real world, while concepts exist in the mind. However, this technical distinction is difficult to uphold, and psychologists commonly use the two terms interchangeably. In fact, objective categories may not exist in the real world, independently of the mind that conceives them (a question related to the philosophical debate between realism and idealism). Put another way, the question is whether the mind picks up on the categorical structure of the world, or whether the mind imposes this structure on the world.
Some categories may be defined through enumeration: an exhaustive list of all instances of a category. A good example is the letters of the English alphabet, A through Z; these have nothing in common except their status as letters in the English alphabet.
A variant on enumeration is to define a category by a rule which will generate all instances of the category (these instances all have in common that they conform to the rule). An example is the concept of integer in mathematics, which is defined as the numbers 0, 1, and any number which can be obtained by adding or subtracting 1 from these numbers one or more times.
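For illustration only (this sketch is not part of the lecture), the two modes of definition can be expressed in a few lines of Python, using the alphabet and the integers as examples:

```python
# Category defined by enumeration: an exhaustive list of instances.
# The members need have nothing in common except appearing on the list.
ALPHABET = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")

def is_letter(x):
    """Membership test: is x on the list?"""
    return x in ALPHABET

# Category defined by a generative rule: 0 is an integer, and adding or
# subtracting 1 from any integer yields another integer.
def integers_up_to(n):
    """Apply the rule n times, generating every integer in [-n, n]."""
    members = {0}
    for _ in range(n):
        members |= {k + 1 for k in members} | {k - 1 for k in members}
    return members

print(is_letter("Q"))              # True
print(is_letter("7"))              # False
print(sorted(integers_up_to(3)))   # [-3, -2, -1, 0, 1, 2, 3]
```

Note that the enumerated category is closed (nothing outside the list can ever be a member), while the rule-defined category is open-ended: the rule generates as many instances as we care to derive.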
The most common definitions of categories are by attributes: properties or features which are shared by all members of a category. Thus, birds are warm-blooded vertebrates with feathers and wings, while fish are cold-blooded vertebrates with scales and fins. There are three broad types of attributes relevant to category definition: perceptual, functional, and relational.
Of course, some categories are defined by mixtures of perceptual, functional, and relational features.
Still, most categories are defined by attributes, meaning that concepts are summary descriptions of an entire class of objects, events, and ideas. There are three principal ways in which such categories are organized: as proper sets, as fuzzy sets, and as sets of exemplars.
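As a rough sketch (the features here are invented for illustration, not drawn from the lecture), the proper-set and fuzzy-set views of conceptual structure can be contrasted in Python:

```python
# Proper-set view: membership is all-or-none, determined by defining
# features that are singly necessary and jointly sufficient.
BIRD_DEFINING = {"warm-blooded", "vertebrate", "feathers", "wings"}

def is_bird_proper(features):
    """An instance is a bird iff it has every defining feature."""
    return BIRD_DEFINING <= features

# Fuzzy-set view: membership is graded, determined by resemblance to a
# prototype; characteristic features need not be shared by all members.
BIRD_PROTOTYPE = {"warm-blooded", "vertebrate", "feathers", "wings",
                  "flies", "sings", "builds-nests"}

def bird_typicality(features):
    """Proportion of prototype features the instance shares (0.0-1.0)."""
    return len(features & BIRD_PROTOTYPE) / len(BIRD_PROTOTYPE)

robin = set(BIRD_PROTOTYPE)                  # matches the prototype exactly
penguin = {"warm-blooded", "vertebrate", "feathers", "wings", "swims"}

print(is_bird_proper(robin), is_bird_proper(penguin))    # True True
print(bird_typicality(robin))                            # 1.0
print(round(bird_typicality(penguin), 2))                # 0.57
```

On the proper-set view a penguin is simply a bird, full stop; on the fuzzy-set view it is a bird, but an atypical one -- which is just the pattern of graded typicality judgments that prototype research has documented.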
Now, having defined the difference between the two terms, we are going to use them interchangeably again. The reason is that it's boring to write concept all the time; moreover, the noun category has a cognate verb form, categorization, while concept does not (unless you count conceptualization, which is a mouthful that doesn't mean quite the same thing as categorization).
Still, the semantic difference between concepts and categories raises some particularly interesting issues for social categorization.
Concepts and categories are just about the most interesting topic in all of psychology and cognitive science, and some very good books have been written on the subject.
Here in Berkeley's Psychology Department, Prof. Eleanor Rosch -- who made fundamental contributions to the "prototype" view of conceptual structure -- gives a wonderful course on the subject. Prof. George Lakoff, who has also made fundamental contributions to our understanding of concepts and categories, gives a similar course in the Linguistics Department.
The study of social categorization encompasses a wide variety of topics; here, the focus is on the domains of persons and social groups.
The Karass and the Granfalloon
In his novel Cat's Cradle (1963), Kurt Vonnegut makes a distinction between two types of social categories: the karass, a group of people who are genuinely and meaningfully connected to one another, and the granfalloon, a proud but essentially meaningless association of human beings.
Vonnegut's example of a granfalloon is the term Hoosiers, referring to residents of the state of Indiana.
In the novel, Vonnegut invents a religion, Bokononism, that celebrates people's karasses.
With social categories -- with any categories, really, but especially with social categories -- it's important to consider whether the category in question is a karass -- a category that really means something -- or a granfalloon.
Perhaps the most basic scheme for social categorization divides the world into two groups: Us and Them -- or, to use the technical terms of sociology and social psychology, the ingroup and the outgroup. As William Graham Sumner put it (1906, p. 12):
The insiders in a we-group are in a relation of peace, order, law, government, and industry, to each other. Their relation to all outsiders, or others-groups, is one of war and plunder.... Sentiments are produced to correspond. Loyalty to the group, sacrifice for it, hatred and contempt for outsiders, brotherhood within, warlikeness without -- all grow together, common products of the same situation.
The division of the social world into Us and Them is vividly illustrated by one of the earliest examples of experimental social psychology -- the "Robbers Cave" experiment conducted by Muzafer Sherif and his colleagues. Through extensive pretesting, Sherif et al. identified a group of 22 5th-grade boys from Oklahoma City who were absolutely "average" in every imaginable way. These children were then offered a vacation at a camp located at Robbers Cave State Park (hence the name of the experiment). In Phase 1 of the experiment, the boys were divided into two groups, unbeknownst to each other, and assigned to physically separate campsites. For one week, each group was engaged in a number of independent activities designed to foster intragroup cohesion and the establishment of a hierarchy of leadership. In Phase 2, the two groups were brought together for a series of tournaments. There the researchers observed the development of considerable intergroup competitiveness and hostility; they also observed shifts in leadership within each group.
In the Robbers Cave experiment, the two groups achieved a clear group identity before they were brought together, and initially encountered each other in an environment of competition for limited resources -- precisely the circumstances in which Sumner thought that a distinction between Us and Them would emerge. But it turns out that competition for limited resources is unnecessary for the division into ingroup and outgroup to occur.
A series of classic experiments by Henri Tajfel and his colleagues (1971; Billig & Tajfel, 1973) employing the minimal group paradigm shows how powerful social categorization can be.
In his experiments, Tajfel assigned subjects to groups on an essentially arbitrary basis -- for example, based on their expressed preferences for various paintings -- or, in the most dramatic instance, based on the results of a coin-toss. Members of the two groups did not know one another. They had no experiential basis for the formation of ingroup and outgroup stereotypes. And they had no history of group interaction that could lead to the formation of differential attitudes. Nevertheless, when group members were given the opportunity to distribute rewards to other group members, the subjects consistently favored members of their own ingroup, relative to the outgroup.
Based on this line of
research, Tajfel and Turner (1979) formulated social
identity theory, which argues that there are two sources
of self-esteem: one's own personal status and accomplishments,
and the status and accomplishments of the groups of which one
is a member. By boosting the status of their own
ingroup, compared to outgroups, individuals indirectly
increase their own status and self-esteem. A related phenomenon is basking in reflected glory (Cialdini et al., 1976), by which individual group members receive boosts in self-esteem based on the achievements of their ingroups, even though they themselves had nothing to do with those achievements -- and even when their connection to the group is quite tenuous.
An interesting phenomenon of
group membership is the outgroup homogeneity effect
(Allen & Wilder, 1979). In their experiment, Allen
and Wilder took pre-experimental measures of attitudes toward
various topics. Subjects were then arbitrarily assigned
to two groups, ostensibly on the basis of their preferences
for paintings by Kandinsky or Klee, as in the original
experiment by Tajfel et al. Then they were asked to
predict the responses of ingroup and outgroup members to
various attitude statements. Subjects ascribed attitudes
to other group members in such a manner as to decrease the
perceived attitudinal similarity between themselves and other
ingroup members, increase the perceived attitudinal similarity
among members of the outgroup, and also to increase the
perceived difference between ingroup and outgroup. This
was true even for attitude statements that had nothing to do
with abstract art.
The Outgroup Homogeneity Effect in Literature
Kurt Vonnegut must have read a lot of social psychology. Another of his novels, Slapstick: Or, Lonesome No More (1976), uses the outgroup homogeneity effect as a kind of plot device. In this novel, a computer randomly assigns every person a new middle name -- either Daffodil-11 or Raspberry-13. Almost immediately, the Daffodil-11s and Raspberry-13s organize themselves into interest groups.
So, the mere division of people into two groups, however arbitrary, seems to create two mental categories, Us and Them, with "people like us" deemed more similar to each other than we are to "people like them".
The Us-Them situation becomes even more complicated when you consider how many ingroups we are actually members of, each ingroup implying a corresponding outgroup.
As an exercise, try to determine how many ingroups you're a member of, and see how many different outgroups those ingroup memberships entail.
The basic division of the social world into Us and them, ingroups and outgroups, is the topic of Us and Them: Understanding Your Tribal Mind by David Berreby (Little, Brown 2005). In his book, Berreby analyzes what he sees as "a fundamental human urge to classify and identify with 'human kinds'" (from "Tricky, Turbulent, Tribal" by Henry Gee, Scientific American, 12/05).
Arguably, an even more fundamental category is between Self and Other, about which more later.
What are the natural categories in the domain of persons? Here's a list, inspired by the lexicographical work of Roger Brown (1980):
There Are Two Kinds of People
There's an old
joke that there are two kinds of people: those who say
that there are two kinds of people and those who
don't. Dwight Garner, a book critic and
collector of quotations (see his book, Garner's
Quotations: A Modern Miscellany), has collected
these quotes along the same lines ("Let's Become More
Divided", New York Times, 01/31/2021).
At first blush, the gender categories look simple enough: people come in two sexes, male and female, depending on their endowment of sex chromosomes, XX or XY. But it turns out that things are a little more complicated than this, so that gender categorization provides an interesting example of the intersection of natural and artificial, and biological and social, categories.
As it happens, chromosomal sex (XX or XY) is not determinative of phenotypic sex (whether one has male or female reproductive anatomy). As in everything else, heredity interacts with environment, and in this case the hormonal environment of the fetus is particularly important in gender differentiation. Sometimes due to accidents of genetics, as in Klinefelter's syndrome (XXY) and Turner's syndrome (XO), but mostly due to accidents of the endocrine system, individuals can be born with ambiguous external genitalia. It is possible, for example, to be chromosomally male but phenotypically female (e.g., the androgen-insensitivity syndrome), or to be chromosomally female but phenotypically male (e.g., congenital adrenal hyperplasia).
What to do with these cases of pseudohermaphroditism? (There are no true hermaphrodites, who have the complete reproductive anatomies of both males and females -- except in mythology.) For a long time they were simply ignored. Then, in an attempt to help people with these conditions to lead better lives, they were often surgically "corrected" so that their external genitalia more closely corresponded to the male or (usually) female ideal -- see, for example, the cases described in Man and Woman, Boy and Girl by J. Money & A. Ehrhardt (1972).
More recently, however, some authorities have argued that such individuals constitute their own gender categories. For example, Anne Fausto-Sterling (in Myths of Gender, 1985, 1992; and especially in Sexing the Body, 2000) has identified three "intersex" gender categories, where the individuals deviate from the "Platonic ideal": "herms" (true hermaphrodites, possessing both ovarian and testicular tissue), "merms" (male pseudohermaphrodites), and "ferms" (female pseudohermaphrodites).
Rather than force these individuals to conform to the Platonic ideal for males or females, Fausto-Sterling argues that they constitute separate gender categories, and should be acknowledged as such and considered to be normal, not pathological. According to Fausto-Sterling's account, then, there are really five sexes, not two. Put another way, the categorization of people into two sexes is a social construction, imposed on the individual by society.
Fausto-Sterling's argument is provocative, but it is also controversial. See, for example, "How common is intersex? A response to Anne Fausto-Sterling" by L. Sax, Journal of Sex Research, 2002.
It is one thing to be male or female biologically, and another thing to identify oneself as such. Most people, even most of those who fall into the "intersex" category, identify themselves as either male or female. Even transgender
individuals will identify themselves as "a man trapped in a woman's
body" (meaning that they identify themselves as male), or the
reverse (meaning that they identify themselves as
female). Gender identity usually corresponds to
phenotypic sex, but this is not necessarily the case. In
any event, with respect to social cognition, we are mostly
interested in gender identity -- how people identify and
present themselves with respect to gender. In
Fausto-Sterling's world, there would be at least a third
category for gender identity, intersex.
Transgendered and Transsexual
Sex researchers, endocrinologists, and feminists can debate whether there are five categories of gender, but there's no question that a third category of transgender individuals has begun to emerge. The definition of "transgender" is a little ambiguous (no joke intended), but generally appears to refer to people who are for whatever reason uncomfortable with the gender of their birth (or, put in terms compatible with social constructivism, their assigned gender). A transgender male may simply not identify himself as a male; alternatively, he may identify himself as a female, in which case we may speak of a transsexual individual. Transsexuals may seek to have their bodies surgically altered to conform to their gender identities. Transgender individuals may not go that far, because they do not necessarily identify themselves as either male or female.
For an article on transgender and transsexual students on American college campuses, see "On Campus, Rethinking Biology 101" by Fred A. Bernstein, New York Times, 03/07/04.
Beyond subjective matters of gender identity, there is the matter of
gender role -- the individual's public display of
characteristics associated with masculinity or
femininity. It turns out that having a male or female
gender identity does not necessarily mean that the person will
adopt the "corresponding" masculine or feminine gender role.
Finally, there is the matter of erotic and sexual orientation (not
"preference", as some would have it -- as if the matter of
what turns people on sexually was a matter of choice, as
between corn flakes and shredded wheat for breakfast).
Most people are heterosexual, with men falling in love with,
and having sex with, women and women falling in love with, and
having sex with, men. But there are other varieties of sexual orientation as well -- homosexual and bisexual, for example. So it turns out that gender categories are more complicated than they might at first appear.
People can be classified as male or female (etc.), but they can also be classified by their relationships to each other.
The nuclear family, celebrated by popular television shows of the 1950s such as Father Knows Best and The Adventures of Ozzie and Harriet, consists of four kinship categories: father, mother, son, and daughter. Of course, there is also an extended family, consisting
(depending on how far out it is extended) of additional
kinship categories, both paternal (on the father's side) and
maternal (on the mother's side):
Never mind the difficulties created by "foster" families. We're talking only about blood relations -- relations determined by consanguinity -- here.
And given that we are talking about relations that are determined by shared blood, it would seem that kinship categories are natural, and are biologically defined. Yet kinship terminology varies considerably across cultures. As a particularly interesting example, Hopi sibling terminology has specific terms for different types of siblings, depending on their relative age and sex.
As a variant on kinship categories, we also classify people by their marital status.
The big category here is married vs. single.
Within the "single" category, there are a number of subcategories. In the early 2000s, a controversy erupted over whether gays and lesbians should
have the right to marry (in 2003 the Episcopal Church
considered a proposal to solemnize unions between same-sex
partners, and in 2004 Gavin Newsom, the Mayor of San
Francisco, ordered the City Registrar to issue marriage
licenses to same-sex couples), prompting President George W.
Bush to call for an amendment to the US Constitution that
would restrict marriage to a "union of a man with a woman" (in
the language of one proposed amendment). Other
arrangements would be called civil unions or somesuch,
but they would not necessarily have the same legal status as a
marriage. Setting aside discussion of the wisdom of this
proposal, it seems to be an attempt to apply a classical,
proper-set view of concepts to the concept of marriage.
That is, the union of "a man with a woman" would be the singly
necessary and jointly sufficient feature to define the concept
of marriage. But as one lesbian participant in the San Francisco same-sex-marriage marathon noted, "I live in the suburbs, I have two kids, and I drive an SUV -- why shouldn't I be able to get married?" (or words to this effect).
Clearly, she is defining marriage in terms of non-necessary
features. Perhaps she thinks of marriage as a fuzzy set,
where perhaps the union of "one man with one woman" is the
prototype, but other kinds of marriages are possible.
In 2015, the Supreme Court rendered this
debate moot by deciding, by a 5-4 vote, that there is a civil
right to same-sex marriage.
Age is another "natural", "biological" variable: we're born at age 0, and die some time later. If we're lucky, we pass through infancy and childhood, puberty, adolescence, adulthood, and old age. In strictly biological terms, more or less, infancy starts at birth, puberty marks the boundary between childhood and adolescence, and death marks the end of adulthood; where the boundary is between adolescence and adulthood is uncertain, as even casual observation indicates.
But where are the boundaries between infancy and childhood, and between adolescence and adulthood? Although Brown identified age as a "natural category" of persons, it is also clear that, at least to some extent, even age categories are social constructions. Freud, for example, divided childhood into a succession of five stages of psychosexual development.
Properly speaking, the first stage of psychosexual development is birth, the transition from fetus to neonate. Freud himself did not focus on this aspect of development, but we may fill in the picture by discussing the ideas of one of his colleagues, Otto Rank (1884-1939).
Rank believed that birth trauma was the most important psychological development in the life history of the individual. He argued that the fetus, in utero, gets primary pleasure -- immediate gratification of its needs. Immediately upon leaving the womb, however, the newborn experiences tension for the first time. There is, first, the over-stimulation of the environment. More important, there are the small deprivations that accompany waiting to be fed. In Rank's view, birth trauma created a reservoir of anxiety that was released throughout life. All later gratifications recapitulated those received during the nine months of gestation. By the same token, all later deprivations recapitulated the birth trauma.
Freud disagreed with the specifics of Rank's views, but he agreed that birth was important. At birth, the individual is thrust, unprotected, into a new world. Later psychological development was a function of the new demands placed on the infant and child by that world.
From birth until about one year of age, the child is in the oral stage of psychosexual development. The newborn child starts out as all id, and no ego. He or she experiences only pleasure and pain. With feeding, the infant must begin to respond to the demands of the external world -- what it provides, and the schedule on which it does so. Initially, Freud thought, instinct-gratification was centered on the mouth: the child's chief activity is sucking on breast or bottle. This activity has obvious nutritive value: it is the way the child copes with hunger and thirst. But, Freud held, it also has sexual value because the child takes pleasure in sucking; and it has destructive value because the child can express aggression by biting.
Freud pointed out that the very young child needs his or her mother (or some reasonable substitute) for gratification. Her absence leads to frustration of instinctual needs, and the development of anxiety. Accordingly, the legacy of the oral stage is separation anxiety and feelings of dependency.
After the first year, Freud held, the child moves into the anal stage of development. The central event of the anal stage is toilet training. Here the child has his or her first experience with the external regulation of impulses: the environment teaches him or her to delay urination or defecation until an appropriate time and place. Thus, the child must postpone the pleasure that comes from relieving tension in the bladder and rectum. Freud believed that the child in the anal stage acquired power by virtue of giving and retaining. Through this stage of development, the child also acquired a sense of loss, and also a sense of self-control.
The years from three to five, in Freud's view, were taken up with the phallic stage. In this case, there is a preoccupation with sexual pleasure derived from the genital areas. It is at about this time that the child begins to develop sexual curiosity, exhibits its genitalia to others, and begins to masturbate. There is also an intensification of interest in the parent of the opposite sex. The phallic stage revolves around the resolution of the Oedipus Complex, named for the ancient Greek king who killed his father and married his mother, and brought disaster to his country. In the Oedipus complex, there is a sexual cathexis toward the parent of the opposite sex, and an aggressive cathexis toward the parent of the same sex.
The beginnings of the Oedipus Complex are the same for boys and girls. Both initially love the mother, simply because she is the child's primary caretaker -- the one most frequently responsible for taking care of the child's needs. In the same way, both initially hate the father, because he competes with the child for the mother's attention and love. Thereafter, however, the progress and resolution of the Oedipus complex takes a different form in the two sexes.
The male shows the classic pattern known as the Oedipus Complex. The boy is already jealous of the father, for the reasons noted earlier. However, this emotion is coupled with castration anxiety: the child of this age is frequently engaged in autoerotic activities of various sorts, which are punished when noticed by the parents. A frequent threat on the part of parents is that the penis will be removed -- and Freud noted that this threat would be reinforced by the boy's observation that the girls and women around him do not, in fact, have penises. As the boy's love for his mother intensifies into incestuous desire, the risk is correspondingly increased that he will be harmed by his father. However, the father appears overwhelmingly powerful to the child, and thus must be appeased. Accordingly, the child represses his hostility and fear, and through reaction formation turns them into expressions of love. Similarly, the mother must be given up, and the boy's sexual longing for her repressed. The final solution, Freud argued, is identification with the father. By making his father an ally instead of an enemy, the boy can obtain, through his father, vicarious satisfaction of his desire for his mother.
Girls show a rather different pattern, technically known as the Electra Complex after the Greek heroine who avenged her father's death. The Electra Complex is not, as some might think, the mirror-image of the Oedipus Complex in boys. The young girl has the usual feelings of love toward her mother as caretaker, Freud believed, but harbored no special feelings toward her father. Girls, Freud noted, were not typically punished for autoerotic activity -- perhaps because they did not engage in it as often, perhaps simply because it is less obvious. Eventually, Freud believed, the girl discovers that she lacks the external genitalia of the boy. This leads to feelings of disappointment and castration that are collectively known as penis envy. She blames her mother for her fate, and envies her father because he possesses what she does not have. Thus the sexual cathexis for the mother is weakened, while the one for the father is simultaneously strengthened. The result is that the girl loves her father, but feels hatred and jealousy for her mother. The girl seeks a penis from her father, and sees a baby as a symbolic substitute. In contrast to the situation in boys, girls do not have a clear-cut resolution to the Electra Complex. For them, castration is not a threat but a fact. Eventually, the girl identifies with her mother in order to obtain vicarious satisfaction of her love for her father.
It should now be clear why Freud named this the "phallic" stage, when only one of the sexes has a phallus. In different ways, he argued, children of both sexes were interested in the penis. The first legacy of the phallic stage, for both sexes, is the development of the superego. The child internalizes social prohibitions against certain sexual object-choices, and also internalizes his or her parents' system of rewards and punishments. (Because girls are immune to the threat of castration, Freud thought, women had inherently weaker consciences than men.) The second legacy, of course, is psychosexual identification. The boy identifies with his father, the girl with her mother. In either case, the child takes on the characteristic role and personality of the parent of the same sex.
The phallic stage is followed by the latency period, extending approximately from five to eleven years of age. In this interval, Freud thought that the sexual instincts temporarily subsided. In part, this was simply because there is a slowing of the rate of physical growth. More important, however, were the defenses brought to bear on the sexual instincts during and after the resolution of the Oedipus Complex. During this time, however, the child is not truly inactive. On the contrary, the child is actively learning about the world, society, and his or her peers.
Finally, with the onset of puberty at about age 12, the child enters the genital stage. This stage continues the focus on socialization begun in the latency period. The coming of sexual maturity reawakens the sexual instincts, which had been dormant throughout the latency period. However, the sexual instincts show a shift away from primary narcissism, in which the child takes pleasure in stimulating his or her own body, to secondary narcissism, in which the child takes pleasure in identifying with his or her ego-ideal. Thus, sexuality itself undergoes a shift from an orientation toward pleasure to one oriented toward reproduction, in which pleasure is secondary. The adolescent's attraction to the opposite sex is, for the first time, coupled with ideas about romance, marriage, and children. When the adolescent (or adult) becomes sexually active, events in the earlier stages will influence the nature of his or her genital sexuality -- for example, in those body parts which are sexually arousing, and in preferences for foreplay.
Erik Erikson was the most prominent post-World War II disciple of Freud (in fact, he was psychoanalyzed by Anna Freud) -- and after Freud himself, perhaps the psychoanalyst who has had the most impact on popular culture. Erikson focused his attention on the issue of ego identity, which he defined as the person's awareness of him- or herself, and of his or her impact on other people. Interestingly, this was an issue for Erikson personally (for a definitive biography of Erikson, see Coles, 1970; for an autobiographical statement, see Erikson, 1970, reprinted 1975).
Erikson has described himself as a "man of the border". He was a Dane living in Germany, the son of a Jewish mother and a Protestant father, both Danes. Later his mother remarried, giving Erikson a German Jewish stepfather. Blond, blue-eyed, and tall, he experienced the pervasive feeling that he did not belong to his family, and entertained the fantasy that his origins were quite different than his mother and her husband led him to believe. A similar problem afflicted him outside his family: the adults in his parents' synagogue referred to him as a gentile, while his schoolmates called him a Jew. Erikson's adoptive name was Erik Homburger. Later he changed it to Erik Homburger Erikson, and still later just Erik Erikson -- assuming a name that, taken literally, meant that he had created himself.
Erikson agreed with the other neo-Freudians that the primary issues in personality are social rather than biological, and he de-emphasized the role of sexuality. His chief contribution was to expand the notion of psychological development, considering the possibility of further stages beyond the genital stage of adolescence. At the same time, he gave a social reinterpretation to the original Freudian stages, so that his theory is properly considered one of psychosocial rather than of psychosexual development.
Erikson's developmental theory is well captured in the phrase, "the eight ages of man". His is an epigenetic conception of development similar to Freud's, in which the individual must progress through a series of stages in order to achieve a fully developed personality. At each stage, the person must meet and resolve a particular crisis. In so doing, the individual develops particular ego qualities; these are outlined in Erikson's most important book, Childhood and Society (1950), and in Identity: Youth and Crisis (1968). In Insight and Responsibility (1964), he argued that each of these strengths was associated with a corresponding virtue or ego strength. Finally, in Toys and Reasons (1976), Erikson argued that a particular ritualization, or pattern of social interaction, develops alongside the qualities and virtues. Although Erikson's theory emphasizes the development of positive qualities, negative attributes can also be acquired. Thus, each of the eight positive ego qualities has its negative counterpart. Both must be incorporated into personality in order for the person to interact effectively with others -- although, in healthy development, the positive qualities will outweigh the negative ones. Similarly, each positive ritualization that enables us to get along with other people has its negative counterpart in the ritualisms that separate us from them. Development at each stage builds on the others, so that successful progress through the sequence provides a stable base for subsequent development. Personality development continues throughout life, and ends only at death.
Stage 1: Trust, mistrust, and hope. The oral-sensory stage of development covers the first year of life. In this stage the infant hungers for nourishment and stimulation, and develops the ability to recognize objects in the environment. He or she interacts with the world primarily by sucking, biting, and grasping. The developmental crisis is between trust and mistrust. The child must learn to trust that his or her needs will be satisfied frequently enough. Other people, for their part, must learn to trust that the child will cope with his or her impulses, and not make their lives as caregivers too difficult. By the same token, if others do not reliably satisfy the child's needs, or make promises that they do not keep, the child acquires a sense of mistrust. As noted earlier, both trust and mistrust develop in every individual -- though in healthy individuals, the former outweighs the latter.
Out of the strength of trust the child develops the virtue of hope: "the enduring belief in the attainability of fervent wishes, in spite of the dark urges and rages which mark the beginning of existence". The basis for hope lies in the infant's experience of an environment that has, more than not, provided for his or her needs in the past. As a result, the child comes to expect that the environment will continue to provide for these needs in the future. Occasional disappointments will not destroy hope, provided that the child has developed a sense of basic trust.
An important feature of social interaction during this period is the ritualization of greeting, providing, and parting. The child cries: the parents come into the room, call its name, nurse it or change it, make funny noises, say goodbye, and leave -- only to return in the same manner, more or less, the next time the situation warrants. Parent and child engage in a process of mutual recognition and affirmation. Erikson calls this ritualization numinous, meaning that children experience their parents as awesome and hallowed individuals. This can be distorted, however, into idolism in which the child constructs an illusory perception of his or her parents as perfect. In this case, reverence is transformed into adoration.
Stage 2: Autonomy, Shame, Doubt, and Will. The muscular-anal stage covers the second and third years of life. Here the child learns to walk, to talk, to dress and feed him- or herself, and to control the elimination of body wastes. The crisis at this stage is between autonomy and shame or doubt. The child must learn to rely on his or her own abilities, and deal with times when his or her efforts are ineffectual or criticized. There will of course be times, especially early in this period, when the child's attempts at self-control will fail -- he will wet his pants, or fall; she will spill her milk, or put on mismatched socks. If the parents ridicule the child, or take over these functions for him or her, then the child will develop feelings of shame concerning his or her efforts, and doubt that he or she can take care of him- or herself.
If things go well, the child develops the virtue of will: the unbroken determination to exercise free choice as well as self- restraint, in spite of the unavoidable experience of shame and doubt in infancy. As will develops, so does the ability to make choices and decisions. Occasional failures and misjudgments will not destroy will, so long as the child has acquired a basic sense of autonomy.
The ritualization that develops at this time is a sense of the judicious, as the child learns what is acceptable and what is not, and also gets a sense of the rules by which right and wrong are determined. The hazard, of course, is that the child will develop a sense of legalism, in which the letter of the law is celebrated over its spirit, and the law is used to justify the exploitation and manipulation of others.
Stage 3: Initiative, Guilt, and Purpose. The locomotor-genital stage covers the remaining preschool years, until about the sixth birthday. During this time the child begins to move about, to find his or her place in groups of peers and adults, and to approach desired objects. The crisis is between initiative and guilt. The child must approach what is desirable, at the same time that he or she must deal with the contradictions between personal desires and environmental restrictions.
The development of initiative leads to the virtue of purpose: the courage to envisage and pursue valued goals, uninhibited by the defeat of infantile fantasies, by guilt, and by the foiling fear of punishment.
Stage 4: Industry, Inferiority, and Competence. The latency stage begins with schooling and continues until puberty, or roughly 6 to 11 years of age. Here the child makes the transition to school life, and begins to learn about the world outside the home. The crisis is between industry and inferiority. The child must learn and practice adult roles, but in so doing he or she may learn that he or she cannot control the things of the real world. Industry permits the development of competence, the free exercise of manual dexterity and cognitive intelligence.
Stage 5: Identity, Role Confusion, and Fidelity. The stage of puberty-adolescence covers ages 11-18. Biologically, this stage is characterized by another spurt of physiological growth, as well as sexual maturity. Socially, the features of adolescence are involvement with cliques and crowds, and the experience of adolescent love. The crisis is between identity and role confusion. The successful adolescent understands that the past has prepared him or her for the future. If not, he or she will not be able to differentiate him- or herself from others, or find his or her place in the world. Identity, a clear sense of one's self and one's place in the world, forms the basis for fidelity, the ability to sustain loyalty to another person.
Stage 6: Intimacy, Isolation, and Love. Erikson marks the stage of young adulthood as encompassing the years from 18 to 30. During this time, the person leaves school for the outside world of work and marriage. The crisis is between intimacy and isolation. The person must be able to share him- or herself in an intense, long-term, committed relationship; but some individuals avoid this kind of sharing because of the threat of ego loss. Intimacy permits love, or mutuality of devotion.
Stage 7: Generativity, Stagnation, and Care. The next 20 years or so, approximately 30 to 50 years of age, are called the stage of adulthood. Here the individual invests in the future at work and at home. The crisis is between generativity and stagnation. The adult must establish and guide the next generation, whether this is represented in terms of children, students, or apprentices. But this cannot be done if the person is concerned only with his or her personal needs and comfort. Generativity leads to the virtue of care, the individual's widening concern for what has been generated by love, necessity, or accident.
Stage 8: Ego Integrity, Despair, and Wisdom. The final stage, beginning at about 50, is that of maturity. Here, for the first time, death enters the individual's thoughts on a daily basis. The crisis is between ego integrity and despair. Ideally, the person will approach death with a strong sense of self, and of the value of his or her past life. Feelings of dissatisfaction are especially destructive because it is too late to start over again. The resulting virtue is wisdom, a detached concern for life itself.
Stage 9: Despair, Hope, and Transcendence? As he (actually, his wife and collaborator, Joan Erikson) entered his (her) 9th decade, Erikson (in The Life Cycle Completed, 1998) postulated a ninth stage, in which the developments of the previous eight stages come together at the end of life. In this stage of very old age, beginning in the late 80s, the crisis is despair vs. hope and faith, as the person confronts a failing body and mind. If the previous stages have been successfully resolved, he or she will be able to transcend these inevitable infirmities.
Observational studies have provided some evidence for this ninth stage, but Erikson's original "eight-stage" view is the classic theory of personality and social development across the life cycle.
Shakespeare's Seven Ages of Man
Erikson's account of the Eight Ages of Man is a play on the Seven Ages of Man, described by Shakespeare in As You Like It:
Erikson's theory was extremely influential. By insisting that development is a continuous, ceaseless process, he fostered the new discipline of life-span developmental psychology, with its emphasis on personality and cognitive development in adulthood and old age. Much life-span work has been concerned with cognitive changes in the elderly, but personality psychologists have been especially concerned with the years between childhood and old age.
Erikson's stages inspired a number of popular treatments of "life span" personality development, including Roger Gould's (1978) identification of periods of transformation, Gail Sheehy's Passages (1976) and New Passages (1995), and Daniel Levinson's The Seasons of a Man's Life (1985).
These and other schemes are all, to a greater or lesser extent, social conventions superimposed on the biological reality that we're born, age, and die. They are social categories that organize a continuum of age.
Piaget's Stages of Cognitive Development
The Swiss developmental psychologist Jean Piaget marked four stages of cognitive development:
Some "neo-Piagetian" theorists, such as Michael Commons, have argued that there are even higher stages in the Piagetian scheme (presumably corresponding, roughly, to calculus and other higher mathematics).
The late C.N. Alexander even argued that the Science of Creative Intelligence announced by the Maharishi Mahesh Yogi as an offshoot of his Transcendental Meditation program promoted cognitive development beyond the Piagetian stages.
Piaget's theory was very influential among psychologists and educators (though it also proved controversial). But Piaget's stages never entered popular parlance, the way Freud's and even Erikson's did, so it would not seem appropriate to include them as social categories.
Moving from the individual life cycle to social history: In 1951, Time magazine coined the term "Silent Generation" to describe those born from 1923-1933. The term "generation", as a demographic category referring to people who were born, or lived, during a particular historical epoch, gained currency with the announcement of the Baby Boom (1946-1964) by the U.S. Census Bureau.
Following these examples, a number of different generations have been identified by two sociologists, Strauss and Howe (1991, 1997). Other "generations" include:
As with any other social categorization, generational categories can be a source of group conflict. For example, the 2008 race for the presidency pitted John McCain, a member of the Silent Generation, against Barack Obama, usually counted as a member of Generation X (though, born in 1961, technically a late Baby Boomer), who had won the Democratic nomination over Hillary Rodham Clinton, a member of the Baby Boom Generation.
These examples are drawn from American culture, but they can be found in other cultures, too. Consider the terms used to characterize various generations of the Japanese diaspora (Nikkei).
Artist June Yee explores these stereotypes in her piece, Two Chinese Worlds in California, on display in the Gallery of California History at the Oakland Museum of California (2010).
"I was surprised at how much misunderstanding there was. They called us FOBs, for fresh off the boat, and they were ABCs, American-born Chinese. Ironically, we did not fit into each other's stereotype, even though we were all Chinese. We weren't aware of the anti-Chinese sentiment they had endured for years. And they didn't understand our feelings about Mao, who in the '60s was a hero for many ABCs who joined the student protests. I remember being appalled by ABCs who embraced Mao's Little Red Book" (OMCA Inside Out, Spring 2010).
In South Africa, young people born since the end of apartheid in 1994 are called the "Born Free" generation (they cast their first votes in a presidential election in 2014).
Although the concept of "generations" may be familiar in popular culture, its scientific status is suspect (see critiques by Bobby Duffy in The Generation Myth and Gen Z, Explained by Roberta Katz, Sarah Ogilvie, Jane Shaw, and Linda Woodhead, both reviewed by Louis Menand in "Generation Overload", New Yorker, 10/18/2021). Menand points out that the pop-culture concept of a "generation" differs radically from the biblical span of 30 years -- which is also the heuristic employed by reproductive biologists. The current popular concept of "generations" had its origins in 19th-century efforts to understand cultural change: Karl Mannheim, an early sociologist, introduced the term "generation units" to refer to elites who deliberately embraced new ways of thinking and acting. Charles Reich, a Yale legal scholar, revived the concept in The Greening of America (1970), based on his observations of a new "generation" of young people in San Francisco during the Summer of Love (1967). Social scientists who embrace the idea of "generations" differ in terms of whether generations are cause or effect of socio-historical change. In the "pulse" hypothesis, each generation introduces new ways of thinking; in the "imprint" hypothesis, each generation is affected by the historical events that it lived through, like World Wars I and II, the Depression, Vietnam, the civil rights movement, and the September 11 terror attacks.
These days, though, the concept of "generations" is mostly a marketing ploy. The boundaries between "generations" are fuzzy: as noted earlier, Barack Obama is technically a Baby Boomer. And as with the fraught issue of racial differences, it turns out that variability within generations is greater than variability between generations. Menand points out, for example, that the salient figures of the "generational revolt" of the 1960s and 1970s -- Gloria Steinem (feminism), Tom Hayden (Vietnam War protests), Abbie Hoffman (hippies), Martin Luther King (civil rights), and many, many others -- were all members of the putative "Silent Generation"! Timothy Leary, Allen Ginsberg, and Pauli Murray (look her up and be amazed, as I was, that you never heard of her) were even older. A poll of young people taken in 1969 found that most had not smoked marijuana, and most supported the Vietnam War. The same goes for Generation Z.
So, as with race, "generations" are definitely social constructions. But as with all social constructions, perception is reality -- perceived reality, that is, and from a cognitive social-psychological perspective that is pretty much all that counts. If you believe that you're part of a generation, you're likely to behave like part of that generation; and if you believe others are part of a generation, you're likely to treat them as if they really were.
Sociologists (especially) have devoted a great deal of energy to measuring socioeconomic status. In these schemes, information about occupation, education, and income is used to classify individuals or families into categories:
All these terms have entered common parlance: they're not just technical terms used in formal social science.
In contrast to earlier classification schemes, there is nothing "biological" about these categories, which wouldn't exist at all except in societies at a certain level of economic development. In feudal economies, for example, there was a distinction between serf and master that simply doesn't exist in industrial economies.
As the serf-master distinction indicates, classification by socioeconomic status evolves as societies evolve. In England, for example, the traditional class distinction was the tripartite one described above: upper, middle, and working classes. (In England, as viewers of Downton Abbey understand, the upper class prided itself on the fact that it did not work for a living.) In 2013, the British Broadcasting Corporation commissioned "The Great British Class Survey", which revealed that British society now included at least seven distinct social classes.
As described in the BBC press release:
Consider the list of the Standard Occupational Classification employed by the Bureau of Labor Statistics in the U.S. Department of Labor: this is the list employed beginning in 2010; as of 2013, a new classification scheme was under review.
These are clearly social categories -- but are they any less natural than biological categories, just for being social rather than biological in nature?
Caste in Hindu India...
A unique set of social categories is found in the caste system in Hindu India. Although a product of the Vedic age (roughly 1500 to 600 BCE), the term "caste" (casta) itself was first used by 16th-century Portuguese explorers.
Traditionally, Indian society was divided into four varnas (Sanskrit for "class" or "color"):
Below these four groups are the Panchamas ("fifth class"), popularly known as "untouchables". Mahatma Gandhi labeled these individuals as Harijans, or "children of God". Untouchability was outlawed in 1949, though -- as in the American "Jim Crow" South -- prejudice against them remained strong. As an outgrowth of social protest in the 1970s, the untouchables began to view the Harijan label as patronizing, and to identify themselves as Dalit, or "oppressed ones".
Membership in a caste is largely hereditary, based on ritual purity (the panchamas are untouchable because they are considered to be polluting), and maintained by endogamy. So long as one follows the rules and rituals (Dharma) of the caste into which he or she is born, a person will remain in his or her caste. However, one can lose one's caste identity -- become an outcast, as it were -- by committing various offenses against ritual purity, such as violations of dietary taboos or rules of bodily hygiene; and one can move "up" in the caste system by adopting certain practices, such as vegetarianism -- a process known as "Sanskritization". One can also regain his or her original caste status by undergoing certain purification rites. Movement "upwards" from untouchability is not possible, however -- though in recent years the Indian government has created "affirmative action" programs to benefit untouchables.
Caste is not exactly a matter of socioeconomic status: there can be poor Brahmans (especially among the scholars!). Parallel to varna is a system of social groupings known as Jati, based on ethnicity and occupation.
Although the caste system has its origins in Hindu culture, Indian Muslims, Sikhs, and Christians also follow caste distinctions. For example, Indian Muslims distinguish between ashraf (Arab Muslims) and non-ashraf (such as converts from Hinduism).
The caste system has been formally outlawed in India, but remnants of it persist, as for example in the identification of a broad class of largely rural "daily-wages people", which is more a matter of social identity than economics.
Beginning in 1993, the Indian government began a sort of affirmative action program, guaranteeing 27% of jobs in the central and state government, and college admissions, to members of an official list of "backward classes" -- of which there are more than 200, taking into account various subcastes and other caste-like groupings -- not just the dalits.
Sometimes members of the "backward classes" take matters into their own hands. An article in the Wall Street Journal about India's affirmative-action program told the story of Mohammad Rafiq Gazi, a Muslim from West Bengal, whose family name was Chowduli -- roughly, the Bengali equivalent of the "N-word". He legally changed his last name to the higher-caste Gazi in order to escape the stereotypes and social stigma associated with his family name. But when India initiated its affirmative-action program, individuals with the higher-caste surname of Gazi were not eligible, so he sought to change his name back to the low-caste Chowduli ("For India's Lowest Castes, Path Forward is 'Backward'" by Geeta Anand and Amol Sharma, 12/09/2011).
Actually, India has now extended its affirmative action program beyond the untouchables, leading to the spectacle of some higher castes seeking recognition under the category of "other backward classes". In August 2015, for example, the Patidar caste of Gujarat, traditionally farmers, who have begun to achieve middle-class levels of income and education, petitioned for the "reservations" granted to the "other backward classes". The same move was made by the Jats, another farming caste in Haryana, in February 2016.
...and in Japan...
Feudal Japan had a class of outcasts, known as the eta or buraku, who were indistinguishable, in ethnic terms, from any other native Japanese. Like the "untouchables" of India, the buraku performed various tasks that were deemed impure by classical Buddhism, such as slaughtering animals and handling corpses. They wore distinctive clothing and lived in segregated areas. Although officially liberated in 1871, they lagged behind other Japanese in terms of education and socioeconomic status. From 1969 to 2002, they were the subjects of affirmative action programs designed to diminish historic inequalities. But despite substantial gains, the descendants of the buraku still live largely in segregated areas, and are objects of continuing, if more subtle, discrimination. For example, by 2001 Hiromu Nonaka, a burakumin politician, had achieved the #2 position in Japan's ruling Liberal Democratic Party, but was not able to make the leap to the #1 position, and the post of prime minister. As Taro Aso, the LDP prime minister at the time, reportedly said at a private meeting, "Are we really going to let those people take over the leadership of Japan?". [See "Japan's Outcasts Still Wait for Society's Embrace" by Norimitsu Onishi, New York Times, 01/16/2009.]
...and perhaps in America.
Some social theorists argue that the United States, too, has a caste system, otherwise known as white supremacy. That is, American society effectively relegates Blacks and other persons of color to a permanent "one-down" position in the social hierarchy, in which persons of color are not valued as much as whites are. As Isabel Wilkerson writes in Caste: The Origins of Our Discontents (2020; reviewed by Sunil Khilnani in "Top Down", New Yorker, 08/17/2020), "Caste is insidious and therefore powerful because it is not hatred; it is not necessarily personal. It is the worn grooves of comforting routines and unthinking expectations, patterns of a social order that have been in place for so long that it looks like the natural order of things". In her book, Wilkerson mounts considerable evidence in support of her argument that racism is just a visible manifestation of a deeper, more subtle, system of social domination. The argument is not new with her, however. Allison Davis, an early Black anthropologist, made a similar argument in Deep South (1941), his study of racial and class relations in the post-Reconstruction "Jim Crow" South. Gunnar Myrdal cited Davis's ideas about race and caste favorably in his own study of American race relations, An American Dilemma (1944). Martin Luther King, famous for his doctrine of nonviolent civil disobedience, told a story about how, when he was visiting India, he was introduced by a Dalit schoolmaster as "a fellow untouchable". The story may be apocryphal, as Khilnani suggests, but Wilkerson convincingly argues that race relations in America -- and perhaps the rest of the industrialized world as well -- closely resemble a caste system. This remains true, Wilkerson argues, even considering the gains that Blacks have made since the success of the Civil Rights Movement in the 1950s and 1960s. In India, Dalits may attain higher education and professional status, by virtue of India's own version of affirmative action; but they're still Dalits. The caste system in America, to the extent that it actually exists, is what "systemic racism" -- the focus of the Black Lives Matter movement, and distinct from personal racial prejudice -- is all about.
Wilkerson also argues that Nazi Germany practiced a caste system regulating the relations (if you could call them that) between "Aryans" (a false racial category) and Jews (not a racial category either). It's an interesting argument -- but, as Khilnani points out, the Nazis killed Jews by the millions. The goal of the Final Solution was elimination, not mere domination. With the exception of the police killings that inspired the Black Lives Matter movement, he points out, American whites mostly just try to keep Blacks "in their place" -- for example, by failing to recognize the status of
Political scientists (as well as other social scientists) slot people into categories based on their political affiliations. In the United States, some commonly used political categories are:
The category "Progressive" is still used in certain states in the Upper Midwest, but not anywhere else. The category "Communist" used to be (somewhat) popular, but pretty much disappeared after the fall of the Soviet Union in 1991 (actually, it died long before that). The "Green" party label is emerging in some places.
Some Germans used to be Nazis, and from 1933 to 1945 they killed or incarcerated many Germans who used to be Communists; but in the Cold War, there were lots of Communists in East Germany (though not so many in West Germany); in the post-Cold War era, Germans come in three new political categories: Christian Democrats, Social Democrats, and Greens.
In addition, political science employs a number of alternative categorization schemes, which have also entered common parlance. Some examples include:
Just to underscore how much of a social construction these categories are: categories of very recent vintage include "Soccer Mom" and "NASCAR Dad" -- not to mention "TEA Party".
In the Democratic People's Republic of Korea, otherwise known as North Korea, there exists a peculiar combination of political and socioeconomic categories known as songbun, or class status (note the irony of class distinctions in an avowedly communist country). There are, in fact, 51 songbun (note the irony again, especially if you missed it the first time), based on official judgments of loyalty to the state and to the ruling Kim family (as of 2017, North Korea was on its third generation of Kims -- another irony, if you want to note it). These 51 categories are collected into 3 superordinate categories, essentially representing "core" (about 25% of the population), "wavering" (about 55%), and "hostile" (about 20%). Membership in the class is hereditary, as in the Hindu caste system of India, but you can be demoted from one songbun to a lower one. Either way, class membership has all sorts of socioeconomic consequences -- for education, employment, access to consumer goods, and even to food. For example, members of the "core" songbun are allowed to live in the (more or less) First-World conditions of central Pyongyang, the capital; other songbun are permitted to live only in the suburbs, where life is still tolerable; the rest are relegated to extremely impoverished rural areas (which, admittedly, is probably still better than one of the DPRK's notorious prison camps). For details, see Marked for Life: Songbun, North Korea's Social Classification System by Robert Collins (2012), published by the Committee for Human Rights in North Korea.
People are commonly classified by religion. Indeed, religious classifications can be a major source of group conflict, as seen by the disputes between Muslims, Eastern Orthodox, and Catholics in the former Yugoslavia, or the disputes between Hindus and Muslims in India and Pakistan.
The obvious form of religious classification is by religion itself -- Jewish, Christian, Muslim, Buddhist, Hindu, etc.
At an even higher level than that is a classification based on the number of gods worshiped in a religion:
There is also a new category of "Spiritual but not Religious", preferred by many Americans who do not affiliate with any organized religion.
We also classify people by their national origin.
Nationality categories also change with historical and political developments. For example, with the formation and consolidation of the European Union, many citizens of European countries have begun to identify themselves as "European" as well as Dutch, Italian, etc. Based on the Eurobarometer survey, Lutz et al. (Science, 2006) reported that 58% of Europeans above 18 reported some degree of "multiple identity" (actually, a dual identity), as against 42% who identified themselves only in terms of their nationality. The percentages were highest in Luxembourg, Italy, and France (despite the French rejection of the proposed European constitution in 2005), and lowest in Sweden, Finland, and the United Kingdom (which maintains its national currency instead of adopting the Euro). Perhaps not surprisingly, younger respondents were more likely to report a multiple national identity than older respondents.
The Israeli-Palestinian conflict is an interesting case in point (see Side by Side: Parallel Narratives of Israel-Palestine by Sami Adwan, Dan Bar-On, and Eyal Naveh, 2012; see the review by Geoffrey Wheatcroft, "Can They Ever Make a Deal?", New York Review of Books, 04/05/2012). Yasser Arafat, president of the Palestinian National Authority, and his successor, Mahmoud Abbas, agitated for a Palestinian state separate from both Israel and Jordan; on the other hand, Golda Meir (1969), the former Israeli prime minister, denied that there was such a thing as a Palestinian people, and Newt Gingrich (2012), the former US presidential candidate, called the Palestinians "an invented people". Which raises a question: What does it mean to be a Palestinian (or an Israeli, for that matter -- but let's stick with the Palestinian case for illustration)? It turns out that national consciousness -- one's identity as a citizen of a particular nation -- is a relatively recent cultural invention. Before the 1920s, Arabs in Palestine -- whether Muslim or Christian -- considered themselves part of the Ottoman Empire, or perhaps as part of a greater Arab nation, but apparently not as Palestinians as such. In fact, it has been argued that the Palestinian identity was created beginning in the 1920s in response to Zionism -- an identity which was itself an invention of the 1890s, before which Jewish tradition did not include either political Zionism or the idea of a Jewish state. It's one thing to be Jewish (or Palestinian) as a people; it's quite another to be citizens of a Jewish or Palestinian (or greater Arab) nation. And -- just so I'm not misunderstood here -- Israelis and Palestinians are by no means unique in this regard.
These two aspects of identity -- identity as a people and identity as a nation -- are not the same thing. But at the Versailles Conference that followed World War I, Woodrow Wilson championed the idea that every people should get their own nation -- this is what is known as self-determination, as opposed to the imperial and colonial systems (including those of Britain, France, and Belgium) which had existed prior to that time. On the other hand, Walter Lippmann argued that self-determination was not self-evidently a good thing, because it "rejects... the ideal of a state within which diverse peoples find justice and liberty under equal laws". Lippmann predicted that the idea of self-determination would lead to mutual hatred -- the kind of thing that boiled up in the former Yugoslavia in the late 20th century.
The question of national identity can become very vexed, especially as nation-states arose in the 18th century, and again in the 20th century with the breakup of the Austro-Hungarian and Ottoman empires. In contrast to non-national states, where the state was identified with some sort of monarch (a king or a queen, an emperor, or a sultan), who ruled over a large and usually multi-ethnic political entity (think of the Austro-Hungarian Empire, or the Ottoman Empire), nation-states are characterized by a loyalty to a particular piece of territory, defined by natural borders or the settlement of a national group, common descent, common language, shared culture promulgated in state-supported public schools -- and, sometimes, the suppression of "non-national" elements. Think of England, France, and Germany.
But immigration, globalization, and other trends can challenge this national identity, raising the question of exactly what it means to be a citizen of a nation-state. It turns out that belonging to the group is not precisely a matter of citizenship.
As Henry Gee notes,
"the abiding horror [of the July 7, 2005 suicide attacks on the London Underground] is that the bombers were not foreign insurgents -- Them -- but were British, born and raised; in Margaret Thatcher's defining phrase, One of Us" (in "Tricky, Turbulent, Tribal", Scientific American 12/05).
Traditionally, the United States has portrayed itself as a great melting pot, in which immigrants from a wide variety of nations, religions, and ethnicities blended in to become a homogeneous group of -- well, Americans. But after World War II, with the rise of the Black civil rights movement, and increasing Hispanic-Latino and Asian immigration, the American self-image has shifted from that of a melting pot to that of a stew (Jesse Jackson's famous image), or a gumbo, in which the various ingredients combine and influence each other, making something delicious while each ingredient maintains its original character.
Other societies have not favored the melting-pot image, striving to maintain ethnic homogeneity and resisting immigration. A case in point is Belgium, a country which includes both the Dutch-speaking Flemish (in northern Flanders) and the French-speaking Walloons (in southern Wallonia), and the conflicts between the two have made for highly unstable governments, and increasing discussion of the possibility that the country will, in fact, break up -- much as happened in the former Czechoslovakia and the former Yugoslavia. The irony is that Brussels, seat of the European Union, while nominally bilingual, is for all practical purposes Francophone -- and it's located within Dutch-speaking Flanders. So breaking up isn't going to be easy.
Yet other societies have fostered immigration, but have held to the melting-pot image, despite the desire of new immigrants to retain their ethnic identities -- creating the conditions for cultural conflict. Ironically, the potential for conflict has been exacerbated by those societies' failure to make good on the promise of integrating new immigrants.
A case in point is rioting that broke out in some Arab immigrant communities in France in 2005, and the more recent dispute over the desire of some observant Muslim Frenchwomen to wear the headscarf, or hijab, as an expression of modesty or their religious heritage or, perhaps, simply their identity.
As part of the heritage of the French revolution, which abolished the aristocratic system and made all Frenchmen simple "citizens", the French Constitution guarantees equality to all -- so much so that, until recently, anyone born in any territory ever held by France was eligible to become President -- including Bill Clinton and Hillary Rodham Clinton, born, respectively, in Arkansas and Illinois (in fact, the law was changed when the French realized that Bill Clinton was eligible). Unlike the United States, where terms like "African-American", "Asian-American", and "Mexican-American" have become familiar, there are no such "hyphenated" categories in France, and the French census has no provision for identifying the race, ethnicity, national origin, or religion of those who respond to it. So the government has no idea how many of its citizens are immigrants, or from where. And, officially, it doesn't care. Everybody's French, and all French are alike. In theory, anyway, and in law (see "Can You Really Become French?" by Robert O. Paxton, New York Review of Books, 04/09/2009).
But it has become painfully clear that (paraphrasing George Orwell in Animal Farm) some French are more equal than others. Despite a large number of immigrants from Algeria and Morocco, there are few Arabs represented in government, or in the police force. Many Arab immigrants feel that they have been left out of French society -- effectively denied education, employment, and other opportunities that were available to the "native" French. As one immigrant put it, "The French don't think I'm French" (quoted in "France Faces a Colonial Legacy: What Makes Someone French?" by Craig S. Smith, New York Times, 11/11/05). The situation has been worsened by the fact that, while there is full freedom of religious practice in France, the state virtually outlaws any public display of religious piety, such as the headscarf (hijab) worn by many Muslim women (as well as the Jewish yarmulke and oversize Christian crosses). Moreover, as part of a policy of secularization, the state owns and maintains all religious properties. Just as it has not built any new churches or synagogues, so it hasn't built any mosques. The problem is that while there are plenty of churches and synagogues to go around, many Muslims are forced to worship in gymnasiums and abandoned warehouses.
In part, the 2005 riots in France reflect a desire on the part of recent Arab immigrants to be classified as fully French, and treated accordingly, without discrimination; but also a desire to be recognized as different, reflecting their African origins and their Muslim religion. Such are the contradictions of social categorization.
A related example is the French treatment of the Roma people, often called "Gypsies". The Roma are a nomadic people who migrated into Eastern Europe and the Balkans from Northern India about 1,000 years ago. Previously, they were mostly confined there, chiefly in Romania and Bulgaria, but under the laws of the European Union, which guarantee "free movement of persons" among member states, they have begun to move into Western Europe as well, including France -- spurring popular movements to expel them. This cannot be done, legally, unless individual Roma acquire criminal records -- then they can be deported back to their countries of origin. But how do you know who's Roma and who isn't? France is an interesting case because, by virtue of its republican tradition, the French government recognizes no ethnic distinctions among its citizens or residents. But France also requires everyone to have a fixed place of residence (even if it's a tourist hotel), and the Roma are nomadic, traveling in groups, and don't have a fixed address. The most France can do is to classify Roma as gens du voyage, or "traveling people", and waive the requirement that they have a fixed residence.
Despite the mythology of the melting pot, the United States itself is not immune from these issues. Many of the earliest European settlers, especially in the original 13 colonies, came to the New World to escape ethnic and religious conflict, and quite quickly a view of a new American type, blending various categories, emerged. In Letters from an American Farmer (1782, Letter III), Hector St. John de Crevecoeur, a French immigrant to America in the 18th century, noted the mix of "English, Scotch, Irish, French, Dutch, Germans, and Swedes" in the New World and characterized "the American" as a "new man", in which "individuals of all nations are melted into a new race of men". In Democracy in America (1835), Alexis de Tocqueville (another Frenchman) predicted that America, as a country of immigrants, would be exempt from the conflicts between ethnicities, classes, and religions that had so often beset Europe -- initiating a view of American exceptionalism. The image of America as a "melting pot" was fixed in Israel Zangwill's play of that title, first produced in 1908.
Beginning in the 1960s, this traditional view of what it means to be an American was challenged, first by a new wave of African-American civil rights leaders, and later by Mexican-Americans, Chinese-Americans, and others who wanted to keep their traditions at the same time as they became Americans. This movement away from assimilationism toward multiculturalism is captured by the image of America as a "gorgeous mosaic", or "salad bowl", of cultures -- an image derived, in turn, from John Murray Gibbon's image of Canada. It's what Jesse Jackson has in mind with his "Rainbow Coalition" -- a rainbow in which white light can be decomposed into several different colors.
In 1963, Nathan Glazer and Daniel Patrick Moynihan noted in their book, Beyond the Melting Pot, that "the point about the melting pot... is that it did not happen". By 1997, Glazer would title his new book on the subject We're All Multiculturalists Now.
It turns out that whether members of the majority culture (meaning whites) hold assimilationist or multiculturalist views has an impact on the quality of life of members of minority cultures (meaning persons of color, broadly defined). Victoria Plaut and her colleagues conducted a "diversity climate survey" in 17 departments of a large corporation, and found that white employees' embrace of multiculturalism was strongly associated with both minority employees' "psychological engagement" with the company and their perceptions of bias. But where the dominant ideology of the white employees tended toward "colorblindness" (a variant on assimilationism), minority employees were actually less psychologically engaged, and perceived their white co-workers as more biased against them.
We categorize other people in terms of their national identity, but national identity can also be part of one's own self-concept. In The Red Prince: The Secret Lives of a Habsburg Archduke (2008), the historian Timothy Snyder tells the story of Archduke Wilhelm (1895-1948), son of Archduke Stefan (1860-1933), of the Austro-Hungarian Empire.
Anne Applebaum, reviewing The Red Prince ("Laughable and Tragic", New York Review of Books, 10/23/2008), writes:
Snyder is more convincing when he places Wilhelm's story not in the politics of contemporary Ukraine, but in the context of more general contemporary arguments about nations and nationalism. For the most striking thing about this story is indeed how flexible, in the end, the national identities of all the main characters turn out to be, and how admirable this flexibility comes to seem. Wilhelm is born Austrian, raised to be a Pole, chooses to be Ukrainian, serves in the Wehrmacht as a German, becomes Ukrainian again out of disgust for the Nazis -- and loses his life for that decision. His brother Albrecht chooses to be a Pole, as does his wife, even when it means they suffer for it too. And it mattered: at that time, the choice of "Polishness" or "Ukrainianness" was not just a whim, but a form of resistance to totalitarianism.
These kinds of choices are almost impossible to imagine today, in a world in which "the state classifies us, as does the market, with tools and precision that were unthinkable in Wilhelm's time," as Snyder puts it. We have become too accustomed to the idea that national identity is innate, almost genetic. But not so very long ago it was possible to choose what one wanted to be, and maybe that wasn't such a bad thing. In sacrificing that flexibility, something has been lost. Surely, writes Snyder,
the ability to make and remake identity is close to the heart of any idea of freedom, whether it be freedom from oppression by others or freedom to become oneself. In their best days, the Habsburgs had a kind of freedom that we do not, that of imaginative and purposeful self-creation.
And that is perhaps the best reason not to make fun of the Habsburgs, or at least not to make fun of them all the time. Their manners were stuffy, their habits were anachronistic, their reign endured too long, they outlived their relevance. But their mildness, their flexibility, their humanity, even their fundamental unseriousness are very appealing, in retrospect -- especially by contrast with those who sought to conquer Central Europe in their wake.
The complicated relationship between natural categories of people, based on biology or geography, and social categories of people, based on social convention, is nowhere better illustrated than with respect to race and ethnicity. By virtue of reproductive isolation, the three "races" (Caucasoid, Mongoloid, and Negroid), and the various ethnicities (Arab vs. Persian, Chinese vs. Japanese), do represent somewhat different gene pools. But members of different races and ethnicities have much more in common, genetically, than not: they hardly constitute different species or subspecies of humans. Moreover, social conventions such as the "one drop rule", widespread in the American South (and, frankly, elsewhere) during the "Jim Crow" era, by which an individual with any Negro heritage, no matter how little, was classified as Negro (see "One Drop of Blood" by Lawrence Wright, New Yorker, 07/24/94), indicate that much more goes into racial and ethnic classifications than genes.
"Now that we've got a post-black president, all the rest of the post-blacks can be unapologetic as we reshape the iconography of blackness. For so long, the definition of blackness was dominated by the '60s street-fighting militancy of the Jesses and the irreverent one-foot-out-of-the-ghetto angry brilliance of the Pryors and the nihilistic, unrepentant ghetto, new-age thuggishness of the 50 Cents. A decade ago they called post-blacks Oreos because we didn't think blackness equaled ghetto, didn't mind having white influences, didn't seem full of anger about the past. We were comfortable employing blackness as a grace note rather than as our primary sound. Post-blackness sees blackness not as a dogmatic code worshipping at the altar of the hood and the struggle but as an open-source document, a trope with infinite uses."
In his own book, Who's Afraid of Post-Blackness? What It Means to be Black Now (2012), Toure argued that:
The definitions and boundaries of Blackness are expanding in forty million directions -- or really, into infinity. It does not mean that we are leaving Blackness behind, it means we're leaving behind the vision of Blackness as something narrowly definable and we're embracing every conception of Blackness as legitimate. Let me be clear: Post-Black does not mean "post-racial". Post-racial posits that race does not exist or that we're somehow beyond race and suggests color-blindness: It's a bankrupt concept that reflects a naive understanding of race in America. Post-Black means we are like Obama: rooted in but not restricted by Blackness.
For an interesting debate concerning the biological reality of racial classifications, compare two recent books:
- A Troublesome Inheritance: Genes, Race, and Human History, by Nicholas Wade, a prominent science reporter who argues that racial differences in behavior, not just physical appearance, are based on genetic differences.
- Fatal Invention: How Science, Politics, and Big Business Re-Create Race in the 21st Century by Dorothy Roberts, a sociologist and legal scholar.
Racial classification remains a continuing issue, and not just for multiracial individuals, but also for governmental bookkeeping.
The census, and common usage, suggests that African-Americans comprise a single group -- a good example of the outgroup homogeneity effect described earlier. But things look different if you're in the outgroup, and they look different if you're an outgroup member who's looking closely. For example, W.E.B. Du Bois famously distinguished between the "Talented Tenth" and other American Negroes (as they were then called).
Eugene Robinson (in Disintegration: The Splintering of Black America, 2010) argues that there is no longer any such thing as a "black community" in America -- that is, a single group with shared identity and experience. Instead, Robinson argues that American blacks divide into four quite distinct groups.
The first three migrations arguably created three very distinct groups of African-Americans. But Berlin notes that the distinction between native-born African Americans, with a family heritage of slavery, and black immigrants to America, is very real to African-Americans. He quotes one Ethiopian-born man, speaking to a group in Baltimore: "I am African and I am an American citizen; am I not African American?". Berlin reports that "To his surprise and dismay, the audience responded no." Such discord over the meaning of the African-American experience and who is (and isn't) part of it is not new, but of late has grown more intense ("Migrations Forced and Free" by Ira Berlin, Smithsonian, 02/2010).
When Tom Met Sally...
As an example of the power of the "one-drop rule" in American history, consider the case of Thomas Jefferson, principal drafter of the Declaration of Independence, third President of the United States, and founder of the University of Virginia, who, we now know, fathered as many as six children by one of his Negro slaves, Sally Hemings. In a letter written in 1815, Jefferson tried to work out the "mathematical problem" of determining how many "crossings" of black and white would be necessary before a mixed-race offspring could be considered "white" (see "President Tom's Cabin" by Jill Lepore, reviewing The Hemingses of Monticello: An American Family by Annette Gordon-Reed, New Yorker, 09/22/08, from which the following quotation is drawn).
Given that Sally Hemings herself had both a white father and a white grandfather, Jefferson was apparently satisfied that his own children by her -- who, by the way, were each freed as they reached age 21 -- had "cleared the blood". In any event, their daughter Harriet, and one of their sons, Beverly, did in fact live as whites in Washington, D.C. -- though another son, Madison, remained part of the community of free Negroes in Virginia.
In modern Western society, Blacks have often been subject to racial discrimination, but this was not always the case. Frank M. Snowden, a historian of blacks in the ancient world, has argued that "color prejudice" was virtually unknown in the ancient world of Egypt, Assyria, Greece and Rome (see his books, Blacks in Antiquity: Ethiopians in the Greco-Roman Experience, 1970, and Before Color Prejudice: The Ancient View of Blacks, 1983). In his view, blackness was not equated with inferiority and subordination because ancient Whites encountered Blacks as warriors and statesmen, rather than as slaves or colonial subjects. Color prejudice, then, appears largely to be a social construction, arising relatively recently out of specific historical circumstances.
Who is a Jew?
Social categorization can have important legal (and personal) ramifications. For example, the question of Jewish identity, which mixes categorization on the basis of ethnicity and religion, took an interesting turn with the establishment of Israel as a Jewish state in 1948, and the enactment in 1950 of the "Law of Return", which gave any Jew the right to aliyah -- to immigrate to Israel and live in the country as a citizen. Thus, the question, Who is a Jew?, is addressed by the Israeli Chief Rabbinate, whose court, or Beit din, is dominated by Orthodox and Ultra-Orthodox rabbis who operate under Halakha, or Jewish rabbinical law.
Because Jewish culture is matrilineal (Deuteronomy 7:4), the easiest answer is that anyone born to a Jewish mother is Jewish; if not, then not. It doesn't matter whether the child is raised Jewish, or whether the mother considers herself to be Jewish. In this view, "Jew" is a proper set, with all instances sharing a single defining feature. But then things get complicated.
The situation is made more acute by the fact that the Israeli Chief Rabbinate not only controls aliyah but also controls marriage: Jews are not permitted to marry non-Jews in Israel -- and, as just described, the criteria for "Who is a Jew?" are at best unclear, and at worst incredibly strict. But the rules of categorization have real-life consequences (see "How do You Prove You're a Jew?" by Gershom Gorenberg, New York Times Magazine, 03/02/2008).
To make things even more interesting, "Jew" is an ethnic category as well as a religious one. This dual status was brought to light in a lawsuit heard in Britain's Supreme Court in 2009, over the issue of admission to a Jewish high school in London. Britain has some 7,000 publicly financed religious schools; although these are normally open to all applicants, when there are more applicants than openings these schools are permitted to select students based on their religion. The plaintiff in the case, known as "M", is Jewish, but his mother converted to Judaism in a non-Orthodox synagogue. Therefore, she does not meet the Orthodox criteria for being Jewish -- and, so, neither does "M". "M" appealed, and the British Court of Appeals declared that the classic definition of Judaism, based on whether one's mother is Jewish, is inherently discriminatory. The appeals court argued that the only test of religious faith should be one of religious belief, and that classification based on parentage turns a religious classification into an ethnic or racial one -- which is quite illegal under British law. The Orthodox rabbinate, which controls these sorts of things, claims that this ruling violates 5,000 years of Jewish tradition, and represents an unlawful intrusion of the State into religious affairs. (See "British Case Raises Issue of Identity for Jews" by Sarah Lyall, New York Times, 11/08/2009.)
Another perspective on "Jewish" as a social category is provided by a rabbinical debate concerning fertility treatments. In one form of fertility treatment, eggs from a donor are implanted in an infertile woman, to be fertilized by her husband (or whatever). Recall that, according to Orthodox rule, a child is a Jew if his or her mother is a Jew. But in the case of egg donation, who's the mother? The woman who donated the egg, or the woman who gave birth to the child? This issue was hotly debated at a conference in Jerusalem hosted by the Puah Institute. Many Orthodox authorities argue that it's the egg donor who matters, and the birth mother is only an "incubator", whose womb is an "external tool". Others argue that because Judaism accepts converts, it cannot be considered a "genetic" religion -- that there's no "Jewish blood". But then again, that principle is compromised by the British school case described earlier -- though maybe not: in the school case, if the mother had undergone an Orthodox conversion, the issue of her son's Jewishness would never have arisen. Then again, it's conceivable that, at an Orthodox wedding, the officiating rabbi could ask the couple how they were conceived, and require evidence of the egg donor's Jewishness.
This sidebar can provide only the briefest sketch of the question, which turns out to be incredibly complicated -- and also in flux, depending on the precise makeup (in terms of the balance between Modern Orthodox and Ultra-Orthodox members) of the Israeli Chief Rabbinate. I claim no expertise in Halakha. The point is that the question "Who is a Jew?" is not self-evident, and it's not just a matter of anti-Semitic stereotyping, but has real consequences, even among Jews, and even in Israel itself. (See "Fertility Treatment Gets More Complicated" by Gabrielle Birkner, Wall Street Journal, 05/14/2010.)
The basic issue seems to be this: social categories may exist in the mind(s) of the beholder(s), but -- as the Thomases would surely agree -- they are real in their consequences. It's a good example of the issues that surround social categorization.
It should be said, in conclusion, that the racial category "white" -- the usual contrast for social categories such as Black and Asian -- is also problematic. In the ancient world, ethnic distinctions were based on culture, not physical differences such as skin color -- as when the Greeks and Romans, not to mention the Chinese, distinguished between themselves and "barbarians". In fact, the Greeks noted that the Scythians and Celts were lighter in skin tone than themselves. So were the Circassians, from whom we derive the very term Caucasian -- but at this time the Caucasians were hardly a dominant ethnic group, and whiteness had no special cachet.
Apparently, the notion of "white" as an ethnic category began with German "racial science" of the 18th and 19th centuries, exemplified by Johann Friedrich Blumenbach -- an early anthropologist who classified humans into five races based on skin color: Caucasians (white), Mongolians (yellow), Malays (brown), Ethiopians (black), and Americans (red). Blumenbach's system was adopted in America by Thomas Jefferson and others. However, while Blumenbach took Caucasians as the exemplars of whiteness, Jefferson and others focused on Anglo-Saxons (English and lowland Scots, but not the Irish) and Teutons (Germans). Later, the boundaries of "white" were extended to Nordics and Aryans, and still later to Alpines (Eastern Europeans) and Mediterraneans (Italians and Greeks). The Irish, who were considered to be only 30% Nordic and 70% Mediterranean, were granted "white" status after the Civil War. Thomas Carlyle considered the French to be an "ape-population" -- but then again, a 1995 episode of The Simpsons referred to the French as "cheese-eating surrender monkeys", so maybe we haven't progressed so far after all. [For details, see The History of White People (2010) by Nell Irvin Painter; the illustration, by Leigh Wells, is taken from the review of Painter's book by Linda Gordon, New York Times, 03/28/2010.]
Hardly anyone uses the term "Caucasian" anymore -- nor, for that matter, the other Blumenbachian terms, "Mongoloid" and "Negroid". But it has come up in interesting contexts. In Takao Ozawa v. United States (1922), the United States Supreme Court found a Japanese man ineligible for citizenship because he was not "Caucasian" -- even though he was light-skinned. In United States v. Bhagat Singh Thind (1923), the Court denied citizenship to a man of Indian descent because, although he was technically "Caucasian", he was not light-skinned (the case is a notable example of the judicial theory of Original Intent).
Shaila Dewan, an American of East Indian and European descent, writes about "whiteness" in her essay "Has 'Caucasian' Lost Its Meaning?" (New York Times, 07/07/2013). She notes that in the American South, she was often asked about her ethnic origins. When she answered that her father was from India, but her mother was white, she felt pressed for further clarification: "What kind of white?" The answer was that her mother was a mix of Norwegian, Scottish, and German ancestry. Which experience, in turn, led her to think about sub-classifications within the category of "white". The implication, as explained to her by Matthew Pratt Guterl, author of The Color of Race in America, 1900-1940, is that "all whitenesses are not created equal".
All of which seems to illustrate the outgroup homogeneity effect. Whites care about whether someone is English or Irish, Swedish or Norwegian, Polish or Lithuanian; but they don't distinguish between Chicanos and other Hispanics; and they don't ask whether the ancestors of African-Americans were from East or West Africa. However, Latinos may well distinguish between people of Mexican, South American, Cuban, or, for that matter, Iberian heritage; and African-Americans may well distinguish between those with a heritage of slavery (like Michelle Obama) and those without one (like Barack Obama). There's a study here: hint, hint.
Race Blindness -- Literally
The point, says Obasogie, is that "race" isn't a visual characteristic. We are all, he argues, "trained" through our exposure to various social practices to "see race" the same way -- regardless of whether we can see at all.
Nowhere is the intertwining of the "natural" and "social" bases for racial and ethnic classification clearer than with the history of the United States census. For more details, see "Historical Census Statistics on Population Totals by Race, 1790 to 1990, and by Hispanic Origin, 1970 to 1990, for the United States, Regions, Divisions, and States" by Campbell Gibson & Kay Jung (Working Paper Series No. 56, Population Division, U.S. Census Bureau, 09/02), from which much of this material is taken.
For an excellent analysis of the evolution of the "Hispanic" category in the US Census, see the book by UCB's own Cristina Mora: Making Hispanics: How Activists, Bureaucrats, and Media Constructed a New America (2014).
The difficulties of social categorization are not confined to the census.
Consider the evolution of ethnic categories offered to undergraduate applicants to the University of California system.
For much of its recent history, the UC has classified its applicants into eight categories.
And, as a smaller example, the racial and ethnic classifications used in the Research Participation Program of the UCB Department of Psychology. For purposes of prescreening, students in the RPP are asked to classify themselves with respect to gender identity, ethnic identity, and other characteristics.
In 2004, a relatively small number of such categories were employed -- pretty much along the lines of the 2000 census.
But in 2006, RPP employed a much more diverse set of racial and ethnic categories -- with more than a dozen subcategories for Asians and Asian-Americans, for example. Arguably, the ethnic composition of the Berkeley student body didn't change all that much in just two years! Rather, the change was motivated by the fact that the Psychology Department has a number of researchers interested in cultural psychology, and especially in differences between people of Asian and European heritage. In this research, it is important to make rather fine distinctions among Asians, with respect to their ancestral lands. But note some anomalies.
The category of Hispanic has also been contested: should it apply to anyone with Spanish heritage, including immigrants from Spain as well as Latin America -- not to mention Spanish
Concepts are mental representations of categories, and so classification is not always accurate. It's one thing to be categorized as a member of some racial or ethnic group, and quite another thing to actually be a member of that group. Or the reverse. Discussions of racial and ethnic categorization often raise the question of "passing" -- presenting oneself, and being perceived (categorized), as a member of one group (usually the majority) when one is really a member of another (usually a minority group).
The term has its origins in Passing, a 1929 novel by Nella Larsen about the friendship between two light-skinned Black women who grew up together in Chicago. One, Clare, goes to live with white relatives when her father dies, marries a bigoted white man who knows nothing of her mixed-race background, and has a light-skinned daughter who likewise knows nothing of her heritage. The other, Irene, marries a Black physician and raises two dark-skinned boys in New York at the time of the Harlem Renaissance. One day, while Irene is passing for white in a whites-only restaurant, she encounters Clare. In 2021, the novel was made into a film directed by Rebecca Hall (see "The Secret Toll of Racial Ambiguity" by Alexandra Kleeman, New York Times Magazine, 10/24/2021; also "Black Skin, White Masks", a review of the film by Manohla Dargis, New York Times, 11/12/2021).
There is an interesting story here. According to Kleeman, Larsen herself was of mixed race, daughter of a white mother and a Black father, with light skin. When her white mother remarried, to a white man, and gave birth to a white daughter, the contrast was obvious. Larsen's mixed-race lineage made it difficult for her family to move into a white working-class neighborhood.
And Hall, too, has a mixed-race background. She is the daughter of Sir Peter Hall, founder of Britain's Royal Shakespeare Company, and Maria Ewing, an opera superstar (she briefly appeared nude at the end of the "Dance of the Seven Veils" in a famous Metropolitan Opera production of Richard Strauss's Salome), whose father, born to a former slave (who once toasted Frederick Douglass at a banquet) and a free woman of color (descended from a Black man who fought in the Revolutionary War), himself passed for white. (Got that? For details, see Season 8, Episode 1 (01/04/2022) of Henry Louis Gates's TV program, "Finding Your Roots".)
An excellent scholarly account of passing is A Chosen Exile: A History of Racial Passing in American Life by Allyson Hobbs, a historian at Stanford (2014). Reviewing the book in the New York Times Book Review (11/23/2014), Danzy Senna wrote:
Hobbs tells the curious story of the upper-class black couple Albert and Thyra Johnston. Married to Thyra in 1924, Albert graduated from medical school but couldn't get a job as a black doctor, and passed as white in order to gain entry to a reputable hospital. His ruse worked and he and his wife became pillars of an all-white New Hampshire community. For 20 years, he was the town doctor and she was the center of the town's social world. Their stately home served as the community hub, and there they raised their four children, who believed they were white. Then one day, when their eldest son made an off-the-cuff comment about a black student at his boarding school, Albert blurted out, "Well, you're colored." It was almost as if Albert had grown weary after 20 years of carefully guarding their secret. And with that Albert and Thyra began the journey toward blackness again.
Occasionally, we see
examples of passing in reverse. A famous case in
point is Black Like Me by John Howard Griffin (1961), a
white journalist who artificially darkened his skin and traveled
through the American South as a black man.
Many novels and films have plots based on passing. Famous examples include The Tragedy of Pudd'nhead Wilson by Mark Twain (1894), in which a 1/32 black infant is switched with a white baby; and Gentleman's Agreement (1947), a film starring Gregory Peck as a Gentile reporter who pretends to be Jewish.
Obviously, our language contains a large number of nouns which designate various types of people. These types are categories of people, and the nouns are category labels. Many of these classificatory labels have their origins in scientific research on personality, including the terms used to label various forms of mental illness, but they have also filtered into common parlance. You don't have to be a psychologist or a psychiatrist to label someone an extravert or a psycho.
The classification of people according to their personality type has a history that goes back almost 2,500 years. A major
preoccupation of Greek science was classification.
Aristotle (384-322 B.C.), in his Historia Animalium
provided a taxonomy, or classificatory scheme, for biological
phenomena. Theophrastus (370-287 B.C.), his
successor as head of the Peripatetic School in Athens (so
named because the teachers strolled around the courtyard while
lecturing), followed his example by developing a two-part
classification of plants that heavily influenced the modern
"genus-species" taxonomy introduced by Linnaeus.
Then he turned his attention to developing a taxonomy of
people. His work is embodied in Characters, a
delightful book in which he described the various types of
people encountered in Athenian society. Unfortunately, that
portion of the book which described socially desirable types
has been lost to history: All that remains are his portraits
of 30 thoroughly negative characters, most of whom are
instantly recognizable even today, more than 2000 years later.
All his descriptions follow the same expository format: a
brief definition of the dominant feature of the personality
under consideration, followed by a list of typical behaviors
representative of that feature.
The Distrustful Man
It goes without saying that Distrustfulness is a presumption of dishonesty against all mankind; and the Distrustful man is he that will send one servant off to market and then another to learn what price he paid; and will carry his own money and sit down every furlong to count it over. When he is abed he will ask his wife if the coffer be locked and the cupboard sealed and the house-door bolted, and for all she may say Yes, he will himself rise naked and bare-foot from the blankets and light the candle and run round the house to see, and even so will hardly go to sleep. Those that owe him money find him demand the usury before witnesses, so that they shall never by any means deny that he has asked it. His cloak is put out to wash not where it will be fulled best, but where the fuller gives him good security. And when a neighbor comes a-borrowing drinking-cups he will refuse him if he can; should he perchance be a great friend or a kinsman, he will lend them, yet almost weigh them and assay them, if not take security for them, before he does so. When his servant attends him he is bidden go before and not behind, so that he may make sure he do not take himself off by the way. And to any man who has bought of him and says, 'Reckon it up and set it down; I cannot send for the money just yet,' he replies, 'Never mind; I will accompany you home' (Theophrastus, 319 B.C./1929, pp. 85-87).
Theophrastus initiated a literary tradition which became very popular during the 16th and 17th centuries, especially in England and France (for reviews see Aldington, 1925; Roback, 1928). However, these later examples represent significant departures from their forerunner. Theophrastus was interested in the objective description of broad types of people defined by some salient psychological characteristic. In contrast, the later efforts show an increasing interest in types defined by social class or occupational status. In other instances, the author presents word portraits of particular individuals, with little apparent concern with whether the subjects of the sketch are representative of any broader class at all. Early examples of this tendency are to be found in the descriptions of the pilgrims in Chaucer's (c. 1387) Canterbury Tales. Two examples that lie closer to Theophrastus' intentions are the Microcosmographie of John Earle (1628) and La Bruyere's Les Caracteres (1688). More recent examples of the form may be found in George Eliot's Impressions of Theophrastus Such (1879) and Earwitness: Fifty Characters (1982) by Elias Canetti, winner of the 1981 Nobel Prize for Literature.
These later character sketches also became increasingly opinionated
in nature, including the author's personal evaluations of the
class or individual, or serving as vehicles for making ethical
or moral points. Like Theophrastus, however, all of these
authors attempted highly abstract character portraits, in
which individuals were lifted out of the social and temporal
context in which their lives ran their course. Reading one of
these sketches we have little or no idea what forces impinged
on these individuals to shape their thoughts and actions; what
their motives, goals, and intentions were; or what their lives
were like from day to day, year to year. As authors became
more and more interested in such matters they began to write
"histories" or "biographies" of fictitious characters -- in
short, novels. In the 18th century the novel quickly rose to a
position as the dominant literary form in Europe, and interest
in the character-sketch waned. Character portraits still occur
in novels and short stories, but only as a minor part of the
whole -- perhaps contributing to the backdrop against which
the action of the plot takes place. Again, insofar as they
describe particular individuals, character sketches imbedded
in novels lack the quality of universality which Theophrastus
sought to achieve.
A new translation of Characters was published in 2018 by Pamela Mensch, with wonderful, usually anachronistic illustrations by Andre Carrilho (e.g., Marie Antoinette taking a selfie with a cellphone). The review of her book by A.E. Stallings ("You Know the Types", Wall Street Journal, 12/08/2018) sets the book in context, but also gives it a contemporary spin (e.g., Theophrastus's "Newshound" spreads Fake News). Stallings's review is worth reading all on its own, not least because she suggests some modern-day characters that Theophrastus would have thought of had he lived long enough: the Mansplainer, the Humblebragger, the Instagram Poet, the Meme-Spreader, the Virtue Signaler, the More-Outraged-Than-Thou, and the Troll.
Characters is a classic of literature because -- despite the radical differences between ancient Athenian culture and our own -- Theophrastus' 30 character types are instantly recognizable by readers of any place and time. As a scientific endeavor, however, it is not so satisfying. In the first place, Theophrastus provides no evidence in support of his typological distinctions: were there really 30 negative types of Greeks, or were there 28 or 32; and if there were indeed 30 such types, were they these 30? (Theophrastus didn't describe any positive characters, but suggested that he described them in another manuscript that has been lost -- or, perhaps Theophrastus was just kidding.) Moreover, Theophrastus did not offer any scheme to organize these types, showing how they might be related to each other. Perhaps more important -- assuming that Characters attained classic status precisely because Theophrastus' types were deemed to be universal -- is the question of the origin of the types. Theophrastus raised this question at the very beginning of his book, but he did not offer any answer:
I have often marvelled, when I have given the matter my attention, and it may be I shall never cease to marvel, why it has come about that, albeit the whole of Greece lies in the same clime and all Greeks have a like upbringing, we have not the same constitution of character (319 B.C./1929, p. 37).
The ancients had solutions to all problems, both scientific and pseudoscientific.
Some popular approaches to creating typologies of personality have their origins in ancient folklore, and from time to time they have been endowed with the appearance of science. For example, a tradition of physiognomy diagnosed personality on the basis of similarities in physical appearance between individual humans and species of infra-human animals. Thus, a person possessing hawk-like eyes, or an eagle-like nose was presumed to share behavioral characteristics with that species as well.
By far the most prominent of these pseudoscientific approaches to personality was (and still is) astrology, which holds that the sun, moon, planets, and stars somehow influence events on earth. The theory has its origins in the ancient idea that events in the heavens -- eclipses, conjunctions of stars, and the like -- were omens of things to come. This interest in astral omens has been traced back almost 4000 years to the First Dynasty of the kingdom of Babylon. Astrology per se appears to have begun in the 3rd century B.C., when religious authorities began using the planets to predict events in an individual's life. The various planets, and signs of the Zodiac, were thought to be associated with various attributes. The astrologer prepared a horoscope, or map of the heavens at the moment of an individual's birth (or, sometimes, his or her conception), and predicted on the basis of the relative positions of the heavenly bodies what characteristics the person would possess. Of course, because these relative positions varied constantly, somewhat different predictions could be derived for each individual. To the extent that two individuals were born at the same time and in the same place, then, they would be similar in personality.
Over time, this complicated system was considerably simplified, such that
these predictions were based on the zodiacal signs themselves.
Each sign was associated with a different portion of the
calendar year, and individuals born during that interval were
held to acquire corresponding personality characteristics.
Thus, modern astrology establishes 12 personality types, one
for each sign of the Zodiac. In the passages which follow,
taken from the Larousse Encyclopaedia of Astrology,
note the stylistic similarity to the character portraits of Theophrastus.
Astrology was immensely powerful in the ancient world, and even in this century various political leaders such as Adolf Hitler in Germany and Lon Nol in Cambodia have computed horoscopes to help them in decision-making (Nancy Reagan famously consulted an astrologer about the scheduling of some White House events). However, by the 17th century astrology had lost its theoretical underpinnings. First, the new astronomy of Copernicus (1473-1543), Galileo (1564-1642), and Kepler (1571-1630), showed that the earth was not at the center of the universe, as astrological doctrine required. Then, the new physics of Descartes (1596-1650) and Newton (1642-1727) proved that the stars could have no physical influence on the earth. If that were not enough, the more recent discovery of Uranus, Neptune, and Pluto would have created enormous problems for a system that was predicated on the assumption that there were six, not nine, planets. In any event, there is no credible evidence of any lawful relationship between horoscope and personality.
Never mind that there are actually thirteen signs of the zodiac. The Babylonians noted that the sun also passes through the constellation Ophiuchus, the serpent-holder (November 29-December 17). But the sun spends less time in Ophiuchus than it does in the other constellations, and the "pass" is really only a tangential nick in spatial terms. So the Babylonians, who wanted there to be just twelve zodiacal signs, discarded it, leaving us with the twelve signs we know today. And never mind that the boundaries between astrological signs are wrong. Because of the astronomical phenomenon of precession, caused by the wobbling of the Earth on its axis, the actual dates are shifted by about a month from their conventional boundaries. The true dates for Scorpio, which are usually given as October 24-November 22, are actually November 23-November 29. If you want to mock either astrologers or horoscope-readers for not being faithful to their system, then you should knock Sir Isaac Newton as well. After all, a prism really breaks white light up into only six primary colors (look for yourself), and he added indigo because he thought that the number 7 had occult significance (he was also an alchemist, after all).
In 2011, Parke Kunkel, an astronomer and member of the Minnesota Planetarium Society, reminded astrologers of these inconvenient facts, which meant that large numbers of people would have to adjust their signs. According to a news story ("Did Your Horoscope Predict This?", by Jesse McKinley, New York Times, 01/15/2011), one astrology buff Twittered: "My zodiac sign changed. Does that mean that I'm not anymore who I used to be?!?". Another wrote, "First we were told that Pluto is not a planet, now there's a new zodiac sign, Ophiuchus. My childhood was a bloody lie." On the other hand, an astrologer told of "A woman who told me she'd always felt there were one or two traits about Sagittarius that didn't fit her personality, but that the new sign is spot on". Other people, I'm sure, responded "I don't care: I'm still a Scorpio" or whatever -- which, I think, is eloquent testimony to the fact that the traditional zodiacal signs really do serve as social categories, and as elements of personal identity -- which is why so many people exchange their astrological signs on first dates.
Greek science had another answer for these questions, in the form of a theory first proposed by Hippocrates (460?-377? B.C.), usually acknowledged as the founder of Western medicine, and Galen (130-200? A.D.), his intellectual heir. Greek physics asserted that the universe was composed of four cosmic elements, air, earth, fire, and water. Human beings, as microcosms of nature, were composed of humors -- biological substances which paralleled the cosmic elements. The predominance of one humor over the others endowed each individual with a particular type of temperament.
This humoral theory was the first scientific theory of personality -- the
first to base its descriptions on some basis other than the
personal predilections of the observer, and the first to
provide a rational explanation of individual differences. The
theory was extremely powerful, and dominated both
philosophical and medical discussions of personality well into
the 19th century. Immanuel Kant, the German philosopher,
abandoned Greek humor theory but retained its fourfold
classification of personality types in his Anthropology
of 1798 (this book was the forerunner of the now-familiar
introductory psychology textbook). His descriptions of the
four personality types have a flavor strongly reminiscent of Theophrastus.
The Four Temperaments
The classic fourfold typology, derived from ancient Greek humour theory, is often referred to as The Four Temperaments. Under that label, it has been the subject of a number of artworks.
In music, a humoresque is a term given to a light-hearted musical composition. But Robert Schumann's "Humoreske in Bb", Op. 20 (1839), is a suite based on the four classical humours.
The German composer Paul Hindemith also wrote a suite for piano and strings -- actually, a theme with four variations -- entitled The Four Temperaments (1940), which was choreographed for the Ballet Society, the forerunner of the New York City Ballet, by George Balanchine (1946).
With the emergence of psychology as a scientific discipline separate from philosophy and physiology in the late 19th century, a number of other typological schemes were proposed. Most of these had their origins in astute clinical observation by psychiatrists and clinical psychologists rather than in rigorous empirical research. However, all of these were explicitly scientific in intent, in that their proponents attempted to develop a body of evidence that would confirm the existence of the types.
Beginning in the late 18th century, and especially in the late 19th century, as psychiatry began to emerge as a distinct branch of medicine, a great deal of attention was devoted to classification by intellectual ability, as measured by IQ (or something like it).
At the lower end of the scale, there were three subcategories of
"mental defective" (what we now call mental retardation).
Sigmund Freud (1908), a Viennese psychiatrist whose theory of personality was enormously influential in the 20th century (despite being invalid in every respect), claimed that adults displayed constellations of attributes whose origins could be traced to early childhood experiences related to weaning, toilet training, and sexuality. Freud himself described only one type -- the anal character, which displays excessive frugality, parsimony, petulance, obstinacy, pedantry, and orderliness. His followers, working along the same lines, elaborated a wide variety of additional types such as the oral, urethral, phallic, and genital (Blum, 1953; Fenichel, 1945; Shapiro, 1965).
The passage through the five stages of development leaves its imprint on adult personality. If all goes well, the person emerges possessing what is known as the genital character. Such a person is capable of achieving full sexual satisfaction through orgasm, a fact which for the first time permits the effective regulation of sexual impulses. The individual no longer has any need to adopt primitive defenses, though the adaptive defenses of displacement, creative elaboration, and sublimation are still operative. The person's emotional life is no longer threatening, and he or she can express feelings openly. No longer ambivalent, the person is capable of loving another.
Unfortunately, according to Freud, things rarely if ever go so well. People do not typically pass through the psychosexual stages unscathed, and thus they generally do not develop the genital character spontaneously. Developmental crises occurring at earlier stages prevent growth, fulfillment, and the final achievement of genital sexuality. These difficulties are resolved through the aid of additional defense mechanisms. For example the child can experience anxiety and frustration while he or she is in the process of moving from one stage to the next. Fixation occurs when the developmental process is halted, such that the person remains at the earlier stage. Alternatively, the child may experience anxiety and frustration after the advance has been completed. In this case, the person may return to an earlier stage, one that is free of these sorts of conflicts. This regression, of course, results in the loss of growth. Because of fixation and regression, psychological development does not necessarily proceed at the same pace as physical development.
Nevertheless, the point at which fixation or regression occurs determines the person's character -- Freud's term for personality -- as an adult. Not all of the resulting character types were described by Freud, but they have become generally accepted by the psychoanalytic community (Blum, 1953).
The Oral Character "... is extremely dependent on others for the maintenance of his self-esteem. External supplies are all-important to him, and he yearns for them passively.... When he feels depressed, he eats to overcome the emotion. Oral preoccupations, in addition to food, frequently revolve around drinking, smoking, and kissing" (Blum, 1953, p. 160). The oral character develops through the resolution of conflict over feeding and weaning. The oral dependent type relies on others to enhance and maintain self-esteem, and to relieve anxiety. Characteristically, the person engages in oral preoccupations such as smoking, eating, and drinking to overcome psychic pain. By contrast, the oral aggressive type expresses hostility towards those perceived to be responsible for his or her frustrations. This anger and hatred is not expressed by physical biting, as it might be in an infant, but rather by "biting" sarcasm in print or speech.
The Urethral Character: "The outstanding personality features of the urethral character are ambition and competitiveness..." (Blum, 1953, p. 163).
The Anal Character develops through toilet training. The anal expulsive type retaliates against those deemed responsible for his or her suffering by being messy, irresponsible, disorderly, or wasteful. Or, through the mechanism of reaction formation, the person can appear neat, meticulous, frugal, and orderly. If so, however, the anal expulsive character underlying this surface behavior may be documented by the fact that somewhere, something is messy. The anal creative type, by contrast, produces things in order to please others, as well as oneself. As a result, such an individual develops attributes of generosity, charity, and philanthropy. Finally, the anal retentive type develops an interest in collecting and saving things -- as well as personality attributes of parsimony and frugality. On the other hand, through reaction formation he or she may spend and gamble recklessly, or make foolish investments.
The Phallic Character "behaves in a reckless, resolute, and self-assured fashion.... The overvaluation of the penis and its confusion with the whole body... are reflected by intense vanity, exhibitionism, and sensitiveness.... These individuals usually anticipate an expected assault by attacking first. They appear aggressive and provocative, not so much from what they say or do, but rather in their manner of speaking and acting. Wounded pride... often results in either cold reserve, deep depression, or lively aggression" (Blum, 1953, p. 163). The phallic character, by virtue of his or her development, overvalues the penis. The male must demonstrate that he has not been castrated, and does so by engaging in reckless, vain, and exhibitionistic behaviors -- what is known in some Latin American cultures as machismo. The female resents having been castrated, and is sullen, provocative, and promiscuous -- as if to say, "look what has been done to me".
In the final analysis, Freud held that adult personality was shaped by a perpetual conflict between instinctual demands and environmental constraints. The instincts are primitive and unconscious. The defenses erected against them in order to mediate the conflict are also unconscious. These propositions give Freud's view of human nature its tragic flavor: conflict is inevitable, because it is rooted in our biological nature; and we do not know the ultimate reasons why we do the things that we do.
Carl Jung (1921), an early follower of Freud, developed an
eightfold typology constructed from two attitudes and four
functions. In Jung's system, the attitudes represented
different orientations toward the world: the extravert,
concerned with other people and objects; and the introvert,
concerned with his or her own feelings and experiences. The
functions represented different ways of experiencing the
objects of the attitude: thinking, in which the person was
engaged in classifying observations and organizing concepts;
feeling, in which the person attached values to observations
and ideas; sensing, in which the person was overwhelmingly
concerned with concrete facts; and intuition, in which the
person favored the immediate grasping of an idea as a whole.
Another historically important typology was developed by Sheldon
(1940, 1942), as an extension of the constitutional psychology
introduced by the German psychiatrist Ernst Kretschmer (1921).
Kretschmer and Sheldon both asserted that there was a link
between physique and personality. On the basis of his
anthropometric studies of the bodily builds, in which he took
various measurements of the head, neck, chest, trunk, arms,
legs, and other parts of the body, Sheldon discerned three
types of physique reflecting both the individual's
constitutional endowment and his or her current physical
condition. Sheldon also found a relationship between the physical and
psychological typologies.
Caveat emptor. The ostensible correlation between somatotype and personality type is almost entirely spurious, because Sheldon and his research assistants were not "blind" to the subjects' somatotypes when they evaluated their personality types. Accordingly, Sheldon's research was vulnerable to experimenter bias and other expectancy confirmation effects, especially perceptual confirmation.
The Dreaded "Posture Photograph"
For several decades in the middle part of the 20th century, incoming freshmen at many Ivy League and Seven Sisters colleges, and some other colleges as well (including some state universities) lined up during orientation week to be photographed nude, or in their underwear (front, sides, and back), for what were called "posture photographs". The ostensible purpose of this exercise, especially at the Seven Sisters schools, was to identify students whose posture and other orthopedic characteristics might need attention during physical education classes; but in many cases the real purpose was to collect data for Sheldon's studies of somatotypes and their relation to personality. Such photographs were taken at Harvard from 1880 into the 1940s, and many served as the illustrations for Sheldon's monograph, An Atlas of Men. They were also taken from 1931 to 1961 at Radcliffe, then the women's college associated with Harvard. Note that posture photographs were continued at Radcliffe long after they were discontinued at Harvard. Somewhere there may exist a photograph of a nearly-naked George W. Bush, Yale Class of 1968 (but not of Bill Clinton, who went to Georgetown).
The practice of taking posture photographs was discontinued in the 1960s, and many sets were destroyed, but some still exist in various archives (see "The Posture Photo Scandal" by Ron Rosenbaum, New York Times Magazine, 02/12/95, and "Nude Photos Are Sealed at Smithsonian", New York Times, 01/21/95).
Another follower of Freud, Karen Horney (1945), proposed a
three-category system based on characteristic responses of the
developing child to feelings of helplessness and anxiety.
Helen Fisher, an anthropologist who is the "chief scientific advisor" to the internet dating site Chemistry.com, has proposed her own typology, which serves as the basis for that site's assessment of personality and predictions of compatibility. Each type is, ostensibly, based on the dominance of a particular hormone in the individual's body chemistry.
Fisher assesses these types with a 56-item questionnaire, helpfully reprinted in her popular-press book, Why Him? Why Her? (2010; see also her previous pop-psych book, Why We Love -- The Nature and Chemistry of Romantic Love, 2004). However, she argues that each of these types, much like the classical humours, has its roots in the excessive activity of one or more hormones. Why she doesn't just employ a blood panel to screen for relative hormone levels isn't clear. Nick Paumgarten has noted that Fisher's approach "represent[s] a frontier of relationship science, albeit one that is thinly populated and open to flanking attack" ("Looking for Someone: Sex, love, and loneliness on the Internet", New Yorker, 07/04/2011).
Through fMRI, Fisher has also identified the ventral tegmental area and the caudate nucleus as the brain centers associated with "mad" romantic love -- and even gone so far as to suggest that couples undergo brain-scanning to find out whether, in fact, they actually love each other.
Other typological systems have resulted from the analysis of whole societies rather than of individual clinic patients. Although sociology is an empirical science, these typologies are not typically determined quantitatively. Rather, like their clinical counterparts, they represent the investigator's intuitions about the kinds of people who inhabit a particular culture.
The German philosopher Eduard Spranger (1928) did not, strictly speaking,
postulate a typology. Rather, he was interested in describing
various coherent sets of values which a person could use to
guide his or her life. However, the argument was presented in
a book entitled Types of Men, so the descriptions below (as
summarized by Allport & Vernon, 1931), seem to fit our
purposes here. In one of the most influential pieces of social science written since
World War II, David Riesman (1950) analyzed the impact of
industrial development on personality.
Erich Fromm, who was as much influenced by Karl Marx as by Sigmund
Freud, and by economics as much as by psychopathology, offered
a list of five basic character types, which result from
differential socialization rather than from childhood
experience. Four of these types of adjustment are labelled "unproductive" by Fromm,
because they prevent the individual from realizing his or her full potential.
Most countries with a developed political system include a set of categories by which people can be classified according to their political leanings, and which, in turn, help predict what they will think and how they will vote concerning various topics.
In the United States, the most familiar of these are the contrasting
categories of Democrat and Republican.
But these two organized political parties only scratch the
surface. Alongside the organized parties are more informal political
categories, such as liberal and conservative.
On the Continent, their respective counterparts are often known as Liberal Democrats and Christian Democrats.
In Japan, they are the Democratic Party and the Liberal Democratic Party (which is actually pretty conservative).
In the Soviet Union then, and in China now, there is only one party, the Communist Party -- which pretty much gave, and gives, their political scenes a serious ingroup-outgroup, "us versus them" quality.
The typologies of Theophrastus
and his successors -- Jung, Sheldon, Riesman, and others --
are satisfying from one standpoint, because they seem to
capture the gist of many of the people whom we encounter in
our everyday lives.
One problem with personality typologies, however, is the sheer number of different typological schemes that have been proposed. Most of these typologies are eminently plausible: each of us knows some extraverts and some sanguines, thinkers and intuiters, somatotonics and other-directed people. The reader who has stuck with this material so far may also have discovered that some (perhaps many or most) of his or her acquaintances can be classified into several different type categories, depending on which features are the focus of attention. This puts us in the curious position that a person's type can change according to the mental set of the observer, even if his or her behavior has not changed at all.
Moreover, as Allport (1937) noted, no typology can encompass all the attributes of a person:
Whatever the kind, a typology is always a device for exalting its author's special interest at the expense of the individuality of the life which he ruthlessly dismembers .... This harsh judgment is unavoidable in the face of conflicting typologies. Certainly not one of these typologies, so diverse in conception and scope, can be considered final, for none of them overlaps any other. Each theorist slices nature in any way he chooses, and finds only his own cuttings worthy of admiration .... What has happened to the individual? He is tossed from type to type, landing sometime in one compartment and sometime in another, often in none at all (p. 296).
As this passage makes clear, Allport's objection to typologies was not that there were so many of them, but that they are biosocial rather than biophysical in nature. Allport held that types are cognitive categories rather than attributes of people: they exist in the minds of observers rather than in the personalities of the people who are observed. For Allport, as for many other personologists who wish to base a theory of personality on how individuals differ in essence, types appear to be a step in the wrong direction. Allport believed that traits would provide the basis for a biophysical approach to personality that would go beyond cognitive categories to get at the core of personality. But, as we will see, traits can be construed as categories as well.
Diagnosis lies at the heart of the medical model of psychopathology: the doctor's first task is to decide whether the person has a disease, and what that disease is. Everything else flows from that. A diagnostic system is, first and foremost, a classification of disease -- a description of the kinds of illnesses one is likely to find in a particular domain. But advanced diagnostic systems go beyond description: they also carry implications for underlying pathology, etiology, course, and prognosis; they tell us how likely a disease is to be cured, and which cures are most likely to work; failing a cure, they tell us how successful rehabilitation is likely to be; they also tell us how we might go about preventing the disease in the first place. Thus, diagnostic systems are not only descriptive: they are also predictive and prescriptive. Diagnosis is also critical for scientific research on psychopathology -- as R.B. Cattell put it, nosology precedes etiology. Uncovering the psychological deficits associated with schizophrenia requires that we be able to identify people who have the illness in the first place.
Psychiatric diagnosis is intended as a classification of mental illness, but it quickly becomes a classification of people with mental illness.
Before Emil Kraepelin, the nosology of mental illness was a mess. Isaac Ray (1838/1962) followed Esquirol and Pinel in distinguishing between insanity (including mania and dementia) and mental deficiency (including idiocy and imbecility), but otherwise denied the validity of any more specific groupings. It fell to Kraepelin to systematically apply the medical model to the diagnosis of psychopathology, attempting a classification of mental illnesses that went beyond presenting symptoms. But in this respect, Kraepelin's program largely failed. Beginning in the fifth edition (1896) of his Textbook, and culminating in the seventh and penultimate edition (the second edition to be translated into English), Kraepelin acknowledged that classification in terms of pathological anatomy was impossible, given the present state of medical knowledge. His second choice, classification by etiology, also failed: Kraepelin freely admitted that most of the etiologies given in his text were speculative and tentative. In an attempt to avoid classification by symptoms, Kraepelin fell back on classification by course and prognosis: what made the manic-depressive psychoses alike, and different from the dementias, was not so much the difference between affective and cognitive symptoms, but rather that manic-depressive patients tended to improve while demented patients tended to deteriorate.
By focusing on the course of illness, in the absence of definitive knowledge of pathology or etiology, Kraepelin hoped to put the psychiatric nosology on a firmer scientific basis. In the final analysis, however, information about course is not particularly useful in diagnosing a patient who is in the acute stage of mental illness. Put bluntly, it is not much help to be able to say, after the disease has run its course, "Oh, that's what he had!". Kraepelin appears to have anticipated this objection when he noted that:
there is a fair assumption that similar disease processes will produce identical symptom pictures, identical pathological anatomy, and an identical etiology. If, therefore, we possessed a comprehensive knowledge of any one of these three fields, -- pathological anatomy, symptomatology, or etiology, -- we would at once have a uniform and standard classification of mental diseases. A similar comprehensive knowledge of either of the other two fields would give not only just as uniform and standard classifications, but all of these classifications would exactly coincide. Cases of mental disease originating in the same causes must also present the same symptoms, and the same pathological findings (1907, p. 117).
Accordingly, Kraepelin (1904/1907) divided the mental illnesses into 15 categories, most of which remain familiar today, including dementia praecox (later renamed schizophrenia), manic-depressive insanity (bipolar and unipolar affective disorder), paranoia, psychogenic neuroses, psychopathic personality, and syndromes of defective mental development (mental retardation). What Kraepelin did for the psychoses, Pierre Janet later did for the neuroses (Havens, 1966), distinguishing between hysteria (today's dissociative and conversion disorders) and psychasthenia (anxiety disorder, obsessive-compulsive disorder, and hypochondriasis).
Paradoxically, Kraepelin's assertion effectively justified diagnosis based on symptoms -- exactly the practice that he was trying to avoid. And that is just what the mental health professions have continued to do for more than a century. True, the predecessors of the current Diagnostic and Statistical Manual for Mental Disorders (DSM), such as the Statistical Manual for the Use of Institutions for the Insane or the War Department Medical Bulletin, Technical 203, spent a great deal of time listing mental disorders with presumed or demonstrated biological foundations. But for the most part, actual diagnoses were made on the basis of symptoms, not pathological anatomy -- not least because, as Kraepelin himself had understood, evidence about organic pathology was usually impossible to obtain, and evidence about etiology usually hard to come by. In distinguishing between psychosis and neurosis, between schizophrenia and manic-depressive disorder, or between phobia and obsessive-compulsive disorder, all the clinician had was symptoms.
Similarly, while the first edition of the Diagnostic and Statistical Manual for Mental Disorders (DSM-I; American Psychiatric Association, 1952) may have been grounded in psychoanalytic and psychosocial concepts, diagnosis was still based on lists of symptoms and signs. So too for the second edition (DSM-II; American Psychiatric Association, 1968). For example, the classical distinctions among simple, hebephrenic, catatonic (excited or withdrawn), and paranoid schizophrenia were based on presenting symptoms, not on pathological anatomy (the disorders were "functional"), etiology (unknown), or even course (all chronic and deteriorating).
In point of fact, the first two editions of the DSM gave mental health professionals precious little guidance about how diagnoses were actually to be made -- which is one reason why diagnosis proved to be so unreliable (e.g., Spitzer & Fleiss, 1974; Zubin, 1967). Correcting this omission was one of the genuine contributions of what has come to be known as the neo-Kraepelinian movement in psychiatric diagnosis (Blashfield, 1985; Klerman, 1977), as exemplified by the work of the "St. Louis Group" centered at Washington University School of Medicine (Feighner, Robins, Guze, Woodruff, Winokur, & Munoz, 1972; Woodruff, Goodwin, & Guze, 1974), and by the Research Diagnostic Criteria (RDC) promoted by a group at the New York State Psychiatric Institute (Spitzer, Endicott, & Robins, 1975). The third and fourth editions of the Diagnostic and Statistical Manual for Mental Disorders (DSM-III, DSM-III-R, and DSM-IV; American Psychiatric Association, 1980, 1987, 1994) were largely the product of these groups' efforts.
Diagnosis by symptoms was codified in the Schedule for Affective Disorders and Schizophrenia (SADS; Endicott & Spitzer, 1978), geared to the RDC, and in analogous instruments geared to the DSM: the Structured Clinical Interview for DSM-III-R (SCID; Spitzer, Williams, Gibbon, & First, 1990) and Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I; First, Spitzer, Gibbon, & Williams, 1997). The neo-Kraepelinian approach exemplified by DSM-IV and SCID-I has arguably made diagnosis more reliable, if not more valid. For example, clinicians can show a high rate of agreement in diagnosing Multiple Personality Disorder (in DSM-III; in DSM-IV, renamed Dissociative Identity Disorder) but it is difficult to believe that the "epidemic" of this diagnosis observed in the 1980s and 1990s represented a genuine increase in properly classified cases (Kihlstrom, 1999).
|Excerpted from Kihlstrom, J.F. (2002). To honor Kraepelin...: From symptoms to pathology in the diagnosis of mental illness. In L.E. Beutler & M.L. Malik (Eds.), Alternatives to the DSM (pp. 279-303). Washington, D.C.: American Psychological Association.|
Every culture and subculture
provides its members with a set of social stereotypes.
On American college campuses, for example, there are
stereotypes of jocks and preppies, townies,
wonks, nerds, radicals, and so
on. In addition, students may be stereotyped by their
choice of major, or by their living units.
OKCupid.com, an Internet
dating site, promotes a Dating
Persona Test which classifies people into various
personality types -- 16 for men and another 16 for women,
based on combinations of four personality characteristics:
Thus, the male "Boy Next Door" (random, gentle, love, dreamer) contrasts with the "Pool Boy" (random, gentle, sex, dreamer), while the female "Maid of Honor" (deliberate, brutal, sex, master) contrasts with the "Sonnet" (deliberate, gentle, love, dreamer).
As originally defined by the American journalist Walter Lippmann
(1922), "a stereotype is an oversimplified picture of the
world, one that satisfies a need to see the world as more
understandable than it really is". From a cognitive
point of view, stereotypes are concepts -- summary mental
representations of an entire class of individuals. But
not all individuals in a society are subject to stereotyping.
Studies of the content of social stereotypes confirm that they are
represented by lists of features. Like other
social categories, stereotypes have two aspects:
Person categories are as diverse as the classical fourfold typology and Freud's four psychosexual characters. It is one thing (and great fun) to determine what categories of persons there are, but the more important scientific task is to determine how social categories are organized in the mind -- in other words, to determine their structure. Cognitive psychology has made a great deal of progress in understanding the structure of concepts and categories, and research and theory in social cognition has profited from, and contributed to, these theoretical developments.
Perhaps the earliest philosophical discussion of conceptual structure was provided by Aristotle in his Categories. Aristotle set out the classical view of categories as proper sets -- a view which dominated thinking about concepts and categories well into the 20th century. Beginning in the 1950s, however, and especially the 1970s, philosophers, psychologists, and other cognitive scientists began to express considerable doubts about the classical view. In the time since, a number of different views of concepts and categories have emerged -- each attempting to solve the problems of the classical view, but each raising new problems of its own. Here's a short overview of the evolution of theories of conceptual structure.
According to the classical view, concepts are summary descriptions of the objects in some category. This summary description is abstracted from instances of a category, and applies equally well to all instances of a category.
According to the classical view, categories are structured as proper sets,
meaning that the objects in a category share a set of defining
features which are singly necessary and
jointly sufficient to demarcate the category.
Examples of classification by proper sets include:
According to the proper set view, categories can be arranged
in a hierarchical system which represents the vertical
relations between categories, and yields the distinction
between superordinate and subordinate categories.
Such hierarchies of proper sets are characterized by perfect nesting, by which we mean that subsets possess all the defining features of supersets (and then some): all instances of a subcategory also possess the defining features of the relevant superordinate category. All trapezoids have the features of quadrilaterals, and all quadrilaterals have the features of plane figures.
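The perfect-nesting principle can be sketched in a few lines of code. The feature lists below are illustrative stand-ins, not formal geometric definitions:

```python
# Perfect nesting in a hierarchy of proper sets: each subcategory
# inherits every defining feature of its superordinate (and adds more).
# Feature lists are illustrative, not formal geometry.
DEFINING_FEATURES = {
    "plane figure":  {"two-dimensional"},
    "quadrilateral": {"two-dimensional", "four sides"},
    "trapezoid":     {"two-dimensional", "four sides", "a pair of parallel sides"},
}

def perfectly_nested(sub: str, superord: str) -> bool:
    """True if the subcategory has all defining features of the superordinate."""
    return DEFINING_FEATURES[superord] <= DEFINING_FEATURES[sub]

print(perfectly_nested("trapezoid", "quadrilateral"))  # True
print(perfectly_nested("trapezoid", "plane figure"))   # True
```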
Proper sets are also characterized by an all-or-none arrangement which characterizes the horizontal relations between adjacent categories, or the distinction between a category and its contrast. Because defining features are singly necessary and jointly sufficient, proper sets are homogeneous in the sense that all members of a category are equally good instances of that category (because they all possess the same set of defining features). An entity either possesses a defining feature or it doesn't; thus, there are sharp boundaries between contrasting categories: an object is either in the category or it isn't. You're either a fish, or you're not a fish. There are no ambiguous cases of category membership.
According to the classical view, object categorization proceeds by a
process of feature-matching. Through
perception, the perceiver extracts information about the
features of the object; these features are then compared to
the defining features of some category. If there is a
complete match between the features of the object and the
defining features of the category, then the object is labeled
as an instance of that category.
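A minimal sketch of this all-or-none feature-matching process, using hypothetical defining features:

```python
# Classical categorization: an object is a category member if and only if
# it possesses every defining feature -- a complete, all-or-none match.
# The defining features here are hypothetical, chosen for illustration.
def classify(object_features: set, defining_features: set) -> bool:
    return defining_features <= object_features

BIRD = {"has feathers", "lays eggs", "has wings"}

sparrow = {"has feathers", "lays eggs", "has wings", "flies", "sings"}
bat     = {"has wings", "flies", "nurses young"}

print(classify(sparrow, BIRD))  # True: all defining features present
print(classify(bat, BIRD))      # False: on this view, not a bird at all
```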
The proper set view of categorization is sometimes called the classical view because it is the one handed down in logic and philosophy from the time of the ancient Greeks. But there are some problems with it which suggest that however logical it may seem, it's not how the human mind categorizes objects. Smith & Medin (1981) distinguished between general criticisms of the classical view, which arise from simple reflection, and empirical criticisms, which emerge from experimental data on concept-formation.
On reflection, for example, it appears that some concepts are disjunctive: they are defined by two or more different sets of defining features. (In baseball, for example, a strike is either a pitch swung at and missed, or a called pitch in the strike zone, or a foul ball hit with fewer than two strikes.)
Disjunctive categories violate the principle of defining features, because there is no defining feature which must be possessed by all members of the category.
Another problem is that many entities have unclear category membership. According to the classical, proper-set view of categories, every object should belong to one category or another. But is a rug an article of furniture? Is a potato a vegetable? Is a platypus a mammal? Is a panda a bear? We use categories like "furniture" without being able to clearly determine whether every object is a member of the category.
Furthermore, some categories are associated with unclear definitions. That is, it is difficult to specify the defining features of many of the concepts we use in ordinary life. A favorite example (from the philosopher Wittgenstein) is the concept of "game". Games don't necessarily involve competition (solitaire is a game); there isn't necessarily a winner (ring-around-the-rosy), and they're not always played for amusement (professional football). Of course, it may be that the defining features exist, but haven't been discovered yet. But that doesn't prevent us from assigning entities to categories; thus, categorization doesn't seem to depend on defining features.
Another problem is imperfect nesting. It follows from
the hierarchical arrangement of categories that members of
subordinate categories should be judged as more similar to
members of immediately superordinate categories than to more
distant ones, for the simple reason that the two categories
share more features in common. Thus, a sparrow should be
judged more similar to a bird than to an animal. This
principle is often violated: for example, chickens, which are
birds, are judged to be more similar to animals than to
birds. The result is a tangled hierarchy of categories.
The chicken-sparrow example reveals the last, and perhaps the biggest, problem with the classical view of categories as proper sets: some entities are better instances of their categories than others. This is the problem of typicality. A sparrow is a better instance of the category bird -- it is a more "birdy" bird -- than is a chicken (or a goose, or an ostrich, or a penguin). Within a culture, there is a high degree of agreement about typicality. The problem is that all the instances in question share the features which define the category bird, and thus must be equivalent from the classical view. But they are clearly not equivalent; variations in typicality among members of a category can be very large.
Variations in typicality can be observed even in the classic example of a proper set -- namely, geometrical figures. For example, subjects usually identify an equilateral triangle, with equal sides and equal angles, as more typical of the category triangle, than isosceles, right, or acute triangles.
There are a large number of ways to observe typicality effects:
Typicality appears to be determined by family resemblance. Category instances seem to be united by family resemblance rather than by any set of defining features shared by all members of a category. Just as a child may have his mother's nose and his father's ears, so instance A may share one feature with instance B, and an entirely different feature with instance C, while B shares yet a third feature with C which it does not share with A. Empirically, typical members share lots of features with other category members, while atypical members do not. Thus, sparrows are small, and fly, and sing; chickens are big, and walk, and cluck.
Typicality is important because it is another violation of the homogeneity assumption of the classical view. It appears that categories have a special internal structure which renders instances nonequivalent, even though they all share the same singly necessary and jointly sufficient defining features. Typicality effects indicate that we use non-necessary features when assigning objects to categories. And, in fact, when people are asked to list the features of various categories, they usually list features that are not true for all category members.
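The family-resemblance idea can be sketched as a simple score, in the spirit of Rosch and Mervis (1975): an instance is typical to the extent that its features are shared by many other category members. The feature lists are illustrative, not empirical norms:

```python
from collections import Counter

# Family-resemblance score: sum, over an instance's features, of how many
# category members possess each feature. High scorers are typical members.
BIRDS = {
    "sparrow": {"small", "flies", "sings", "builds nests"},
    "robin":   {"small", "flies", "sings", "builds nests"},
    "hawk":    {"flies", "builds nests", "hunts"},
    "chicken": {"big", "walks", "clucks"},
}

def family_resemblance(name: str) -> int:
    feature_counts = Counter(f for feats in BIRDS.values() for f in feats)
    return sum(feature_counts[f] for f in BIRDS[name])

for bird in BIRDS:
    print(bird, family_resemblance(bird))
# The sparrow and robin score highest; the chicken, which shares no
# features with the other birds, scores lowest -- it is an atypical bird.
```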
The implication of these problems, taken together, is that the
classical view of categories is incorrect. Categorization by
proper sets may make sense from a logical point of view, but
it doesn't capture how the mind actually works.
Recently, another view of categorization has gained status within psychology: this is known as the prototype or the probabilistic view. The probabilistic view has its origins in the philosophical work of Ludwig Wittgenstein, but was brought into psychological theory by UCB's Prof. Eleanor Rosch (now Prof. Emerita), who, with a single paper in 1975, overturned 2,500 years of thinking -- ever since Aristotle, actually -- about concepts and categories.
The prototype view retains the idea, from the classical view, that concepts are summary descriptions of the instances of a category. Unlike the classical view, however, in the prototype view the summary description does not apply equally well to every member of the category, because there are no defining features of category membership.
According to the prototype view, categories are fuzzy sets, in
that there is only a probabilistic relationship between any
particular feature and category membership. No feature is
singly necessary to define a category, and no set of features
is jointly sufficient.
Fuzzy Sets and Fuzzy Logic
The notion of categories as fuzzy rather than proper sets, represented by prototypes rather than lists of defining features, is related to the concept of fuzzy logic developed by Lotfi Zadeh, a computer scientist at UC Berkeley. Whereas the traditional view of truth is that a statement (such as an item of declarative knowledge) is either true or false, Zadeh argued that statements can be partly true, possessing a "truth value" somewhere between 0 (false) and 1 (true).
Fuzzy logic can help resolve certain logical conundrums -- for example the paradox of Epimenides the Cretan (6th century BC), who famously asserted that "All Cretans are liars". If all Cretans are liars, and Epimenides himself is a Cretan, then his statement cannot be true. Put another way: if Epimenides is telling the truth, then he is a liar. As another example, consider the related Liar paradox: the simple statement that "This sentence is false". Zadeh has proposed that such paradoxes can be resolved by concluding that the statements in question are only partially true.
Fuzzy logic also applies to categorization. Under the classical view of categories as proper sets, a similar "all or none" rule applies: an object either possesses a defining feature of a category or it does not; and therefore it either is or is not an instance of the category. But under fuzzy logic, the statement "object X has feature Y" can be partially true; and if Y is one of the defining features of category Z, it also can be partially true that "Object X is an instance of category Z".
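Graded membership can be sketched in Zadeh's terms as follows; the membership values are invented purely for illustration:

```python
# Fuzzy category membership: a value between 0 (not a member) and 1
# (a full member), rather than all-or-none. Values here are illustrative.
FRUIT_MEMBERSHIP = {"apple": 0.95, "tomato": 0.55, "carrot": 0.05}

def fuzzy_and(a: float, b: float) -> float:
    """Fuzzy conjunction: the truth of 'A and B' is the minimum of the two."""
    return min(a, b)

def fuzzy_not(a: float) -> float:
    """Fuzzy negation: a partial truth has a partial complement."""
    return 1.0 - a

# A tomato is partly a fruit -- and, at the same time, partly not a fruit,
# which is how fuzzy logic dissolves the all-or-none paradoxes.
tomato = FRUIT_MEMBERSHIP["tomato"]
print(tomato)                                          # 0.55
print(round(fuzzy_not(tomato), 2))                     # 0.45
print(round(fuzzy_and(tomato, fuzzy_not(tomato)), 2))  # 0.45
```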
A result of the probabilistic relation between features and categories is that category instances can be quite heterogeneous. That is, members of the same category can vary widely in terms of the attributes they possess. All of these attributes are correlated with category membership, but none are singly necessary and no set is jointly sufficient.
Some instances of a category are more typical than others: these possess relatively more central features.
According to the prototype view, categories are not represented by a list of defining features, but rather by a category prototype, or focal instance, which has many features central to category membership (and thus a family resemblance to other category members) but few features central to membership in contrasting categories.
It also follows from the prototype view that there are no sharp boundaries between adjacent categories (hence the term fuzzy sets). In other words, the horizontal distinction between a category and its contrast may be very unclear. Thus, a tomato is a fruit but is usually considered a vegetable (it has only one perceptual attribute of fruits, having seeds, but many functional features of vegetables, such as the circumstances under which it is eaten). Dolphins and whales are mammals, but are usually (at least informally) considered to be fish: they have few features that are central to mammalhood (though they do give live birth and nurse their young), but lots of features that are central to fishiness.
Actually, there are two different versions of the prototype view, which have somewhat different implications for categorization.
You Say "Tomato"... I Say "It's a Truck"
It turns out that defining categories isn't easy, even for legislators, judges, and policymakers.
Consider, again, the tomato. In 1883, Congress enacted a Tariff Act which placed a 10% duty on "vegetables in their natural state", but permitted duty-free import of "green, ripe, or dried" fruits. The Customs Collector in the Port of New York, seeing the prospects of increased revenues, declared that tomatoes were vegetables and therefore taxable. The international tomato cartel sued, and the case (known as Nix v. Hedden) eventually reached the United States Supreme Court, which unanimously declared the tomato to be a vegetable, while knowing full well that it is a fruit. Justice Gray wrote for the bench:
Nearly a century later, the Reagan administration, trying to justify cuts in the budget for federal school-lunch assistance, likewise declared tomato ketchup -- like cucumber relish -- to be a vegetable.
While tomatoes are commonly considered to be vegetables of a sort found on salads and in spaghetti sauce, and not fruits of a sort found on breakfast cereals or birthday cakes, Edith Pearlman did find Tomate offered as a dessert in a restaurant in Paris ("Haute Tomato", Smithsonian, July 2003). Nevertheless, as the British humorist Miles Kington noted, "Knowledge tells us that a tomato is a fruit; wisdom prevents us from putting it into a fruit salad" (quoted by Heinz Hellin, Smithsonian, September 2003).
As another example: American elementary-school students are commonly taught that there are five "Great Lakes" on the border between the United States and Canada -- Ontario, Erie, Michigan, Huron, and Superior. But in 1998, at the behest of Senator Patrick Leahy of Vermont, Congress voted to designate Lake Champlain, which lies on the border between Vermont and New York, as a sixth Great Lake. Leahy's logic is unclear, but seems to have been that the Great Lakes were all big lakes that were on political boundaries, or at least near Canada, and Lake Champlain was also a big lake on a political boundary, or at least it was near Canada too, so Lake Champlain ought to be a Great Lake too (the designation was written into law, but later revoked).
And finally, is an SUV a car or a truck? (see "Big and Bad" by Malcolm Gladwell, New Yorker, 01/12/04). Generally, cars are intended to move people, while trucks are intended to move cargo. When the Detroit automakers introduced sport utility vehicles, they were classified as trucks, on the grounds that they were intended for off-road use by farmers, ranchers, and the like for carrying heavy loads and towing heavy trailers. But then the same vehicles were marketed to urban and suburban customers as a "lifestyle choice" for everyday use. Legally, the classification of SUVs as trucks means that they do not have to conform to "CAFE" standards for fuel efficiency, or to car standards for safety. Still, the categories of car and truck appear to be fuzzy sets, as illustrated by the following exchange between Tom and Ray Magliozzi, "the Click and Clack" car guys of National Public Radio ("Reader Defends Headroom in Subaru", West County Times, 02/28/04):
The prototype view solves most of the problems that confront the classical view, and (in my view, anyway) is probably the best theory of conceptual structure and categorization that we've got. But as research proceeded on various aspects of the prototype view, certain problems emerged, leading to the development of other views of concepts and categories.
In the prototype view, as in the classical view, related categories
can be arranged in a hierarchy of subordinate and
superordinate categories. Many accounts of the
prototype view argue that there is a basic level of
categorization, which is defined as the most inclusive
level at which:
In the realm of animals, for example, dog and cat are at the basic level, while beagle and Siamese are at subordinate levels. In the domain of musical instruments, piano and saxophone are at the basic level, while grand piano and baritone saxophone are at subordinate levels. The basic level is in some important sense psychologically salient, and preferred for object categorization and other cognitive purposes.
The Exemplar View
For example, some theorists
now favor a third view of concepts and categories, which
abandons the definition of concepts as summary descriptions
of category members. According to the exemplar view,
concepts consist simply of lists of their members, with no
defining or characteristic features to hold the entire set
together. In other words, what holds the instances together
is their common membership in the category. It's a little
like defining a category by enumeration, but not exactly.
The members do have some things in common, according to the
exemplar view; but those things are not particularly
important for categorization.
When we want to know whether an object is a member of a category, the classical view says that we compare the object to a list of defining features; the prototype view says that we compare it to the category prototype; the exemplar view says that we compare it to individual category members. Thus, in forming categories, we don't learn prototypes, but rather we learn salient examples.
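The difference in comparison processes can be sketched as follows; only the prototype and exemplar comparisons are shown, and all feature sets are illustrative:

```python
# Prototype vs. exemplar classification: both rely on feature overlap,
# but they differ in what a new object is compared to.
def overlap(a: set, b: set) -> int:
    return len(a & b)

BIRD_PROTOTYPE = {"small", "flies", "sings", "builds nests"}  # summary description
BIRD_EXEMPLARS = {                                            # stored individual members
    "sparrow": {"small", "flies", "sings", "builds nests"},
    "chicken": {"big", "walks", "clucks"},
}

new_object = {"big", "walks", "clucks", "gobbles"}  # a turkey, say

# Prototype view: compare the object to the category's summary description.
print(overlap(new_object, BIRD_PROTOTYPE))  # 0 -- a poor match with the prototype

# Exemplar view: compare the object to individual stored members.
best = max(BIRD_EXEMPLARS, key=lambda name: overlap(new_object, BIRD_EXEMPLARS[name]))
print(best)  # chicken -- the turkey resembles a known exemplar, so it counts as a bird
```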
Teasing apart the prototype and the exemplar views turns out to be fiendishly difficult. There are a couple of very clever experiments which appear to support the exemplar view. For example, it turns out that we will classify an object as a member of a category if it resembles another object that is already labeled as a category member, even if neither the object nor the labeled instance particularly resembles the category prototype.
Nevertheless, some investigators are worried about the exemplar view because it seems to be uneconomical. The compromise position, which has many adherents, is that we categorize in terms of both prototypes and exemplars. For example -- and this is still a hypothesis to be tested -- novices in a particular domain may categorize in terms of prototypes while experts categorize in terms of exemplars.
Despite these differences, the exemplar view agrees with the
prototype view that categorization proceeds by way of similarity
judgments. And they further agree that similarity
varies in degree. They differ only in what the object
must be similar to:
Following the work of Amos Tversky, Medin (1989) has outlined a modal
model of similarity judgments:
In either case, similarity is sufficient to describe conceptual structure -- all the instances of a concept are similar, in that they either share some features with the category prototype or they share some features with a category exemplar.
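One well-known formalization of such feature-based similarity is Tversky's (1977) contrast model, in which similarity increases with common features and decreases with distinctive ones. A sketch, with arbitrary weights and illustrative feature sets:

```python
# Tversky's contrast model: sim(a, b) = theta*|A & B| - alpha*|A - B| - beta*|B - A|.
# The weights theta, alpha, and beta are free parameters, set arbitrarily here.
def contrast_similarity(a: set, b: set,
                        theta: float = 1.0, alpha: float = 0.5, beta: float = 0.5) -> float:
    common = len(a & b)
    return theta * common - alpha * len(a - b) - beta * len(b - a)

sparrow = {"small", "flies", "sings"}
robin   = {"small", "flies", "sings", "red breast"}
chicken = {"big", "walks", "clucks"}

print(contrast_similarity(sparrow, robin))    # 2.5: many common, few distinctive features
print(contrast_similarity(sparrow, chicken))  # -3.0: no common features at all
```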
As noted, the prototype and exemplar views of categorization are all based on a principle of similarity. What members of a category have in common is that they share some features or attributes in common with at least some other member(s) of the same category. The implication is that similarity is something that is an attribute of objects, that can either be measured (by counting overlapping features) or judged (by estimating them). But ingenious researchers have uncovered some troubles with similarity as a basis for categorization -- and, for that matter, with similarity in general.
Context Effects. Recently, however, it has been
recognized that some categories are defined by theories
instead of by similarity.
For example, in one experiment, when subjects were presented with pictures of a white cloud, a grey cloud, and a black cloud, they grouped the grey and black clouds together as similar; but when presented with pictures of white hair, grey hair, and black hair, in which the shades of hair were identical to the shades of cloud, subjects grouped the grey hair with the white hair. Because the shades were identical in the two cases, grouping could not have been based on similarity of features. Rather, the categories seemed to be defined by a theory of the domain: grey and black clouds signify stormy weather, while white and grey hair signify old age.
Ad-Hoc Categories. What do children, money, insurance papers, photo albums, and pets have in common? Nothing, when viewed in terms of feature similarity. But they are all things that you would take out of your house in case of a fire. The objects listed together are similar to each other in this respect only; in other respects, they are quite different.
This is also true of the context effects on similarity judgment: grey and black are similar with respect to clouds and weather, while grey and white are similar with respect to hair and aging.
These observations tell us that similarity is not necessarily the operative factor in category definition. In some cases, at least, similarity is determined by a theory of the domain in question: there is something about weather that makes grey and black clouds similar, and there is something about aging that makes white and grey hair similar.
In the theory-based view of categorization (Medin, 1989), concepts are essentially theories of the categorical domain in question. Conceptual theories perform a number of different functions:
From this point of view, similarity-based classification, as described in the prototype and exemplar views, is simply a short-cut heuristic used for purposes of classification. The real principle of conceptual structure is the theory of the categorical domain in question.
One way or another, concepts and categories have coherence: there is something that links members together. In classification by similarity, that something is intrinsic to the entities themselves; in classification by theories, that something is imposed by the mind of the thinker.
But what to make of this proliferation of theories? From my point of view, the questions raised about similarity have a kind of forensic quality -- they sometimes seem to amount to a kind of scholarly nit-picking. To be sure, similarity varies with context, and there are certainly some categories which are held together only by a theory, where similarity fails utterly. But for most purposes, the prototype view, perhaps corrected (or expanded) a little by the exemplar view, works pretty well as an account of how concepts are structured, and how objects are categorized.
As it happens, most work on social categorization has been based on the prototype view. But there are areas where the exemplar view has been applied very fruitfully, and even a few areas where it makes sense to abandon similarity, and to invoke something like the theory-based view.
To summarize this history: concepts were first construed as summary descriptions of category members.
The exemplar view abandons the notion that concepts are summary descriptions, and instead proposes that concepts are collections of instances that exemplify the category. But it does not abandon the notion that concepts are based on similarity of features. While in the prototype view category members are similar to the prototype, in the exemplar view category members are similar to other exemplars.
Between them, the prototype and exemplar views provide a pretty good account of concepts and categories. Conventional wisdom holds that concepts are represented as a combination of prototypes and exemplars, with novices relying on prototypes and experts relying on exemplars for categorization of new objects.
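The difference between the two similarity-based views can be sketched in code. The toy categories, features, and "remembered instances" below are invented for illustration; similarity is reduced to a simple count of shared features:

```python
# Classifying a new person by similarity to a category prototype
# (one summary description) vs. to stored exemplars (remembered
# instances). All data here are hypothetical.

def similarity(x, y):
    return len(set(x) & set(y))

# Prototype view: one summary description per category.
prototypes = {
    "extravert": {"talkative", "sociable", "adventurous"},
    "introvert": {"quiet", "reserved", "solitary"},
}

# Exemplar view: a category is a collection of remembered instances.
exemplars = {
    "extravert": [{"talkative", "sociable"}, {"adventurous", "frank"}],
    "introvert": [{"quiet", "solitary"}, {"reserved", "shy"}],
}

def classify_by_prototype(person):
    return max(prototypes, key=lambda c: similarity(person, prototypes[c]))

def classify_by_exemplar(person):
    # Similarity to a category = similarity to its closest exemplar.
    return max(exemplars,
               key=lambda c: max(similarity(person, e) for e in exemplars[c]))

new_person = {"talkative", "frank"}
print(classify_by_prototype(new_person))  # extravert
print(classify_by_exemplar(new_person))   # extravert
```

The two rules often agree, but they can diverge: an atypical instance far from the prototype may still closely match one stored exemplar, which is one reason experts, with many exemplars in memory, may classify differently than novices.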
The theory view of categories abandons similarity as the basis for categorization. Instead, concepts are represented as "theories" which guide the grouping of instances into a category. According to the theory view, similarity is a heuristic that we use as an economical shortcut strategy for categorization; but the closer you look, according to this view, the more it becomes clear that conceptual coherence -- the "glue" that holds a concept together -- is really provided by a theory, not similarity.
In any event, it turns out that all four views of classification -- all five, if you count the dimensional and featural versions of the prototype view separately -- have been applied to the problem of person categorization. But by far the most popular framework for the study of person concepts has been the featural version of the prototype view.
Having now looked at theories of conceptual structure in the nonsocial domain, let's see how these have worked out in the social domain of persons, behaviors, and situations.
Initially, person categories were, at least implicitly, construed as proper sets, summarized by a list of singly necessary and jointly sufficient defining features.
Consider, for example, the classical fourfold typology of Hippocrates, Galen, and Kant. As characterized by Wundt (1903), each of the four types was defined by a set of traits: melancholics are anxious and worried, cholerics quickly roused and egocentric, phlegmatics reasonable and high-principled, and sanguines playful and easy-going.
This classification scheme had such wide appeal that it lasted for more than 2,000 years, but it has a kind of Procrustean quality.
The Myth of Procrustes
Procrustes is a character who appears in the Greek myth of Theseus and the Minotaur, as depicted by Apollodorus and other classical authors. Procrustes was an innkeeper in Eleusis who would rob his guests, and then torture them on a special bed. If his victim were too short, he was stretched on a rack until he fit; if a victim were too tall, he was cut to the right length. Theseus, a prince of Athens and cousin of Hercules who had ambitions to become a hero himself, fought with Procrustes, and other bandits as well, and killed him with his own bed.
The tale is perhaps the origin of the old expression: "You made your bed: Now lie in it!".
Empirically, it proves difficult to fit everybody into this (or any other) typological scheme, for the simple reason that most human attributes are not present in an all-or-none manner, but are continuously distributed over an infinite series of gradations. The most prominent exceptions are gender and blood type -- and, as noted, even gender isn't exactly all-or-none. The claim of continuity certainly holds true for strictly physical dimensions such as height, girth, and skin color -- How tall is tall? How fat is fat? How black is black? -- and it is all the more the case for psychological attributes. All sanguines may be sociable, but some people are more sociable than others. The notion that some people may be more sanguine than others, which is what this fact implies, is inconsistent with the classical view of categorization.
Moreover, it is also apparent that individuals can also display features that define contrasting categories. If sanguines are sociable, and cholerics egocentric, what do we make of a person who is both sociable and egocentric?
We are reminded of the entomologist who found an insect which he could not classify, and promptly stepped on it (joke courtesy of E.R. Hilgard).
The problem of partial and combined expression of type features, once recognized and taken seriously, was the beginning of the end for type theories.
These problems were recognized early on, and one of the founders of modern scientific psychology, Wilhelm Wundt (1903), offered a solution to both of these dilemmas. Wundt was a structuralist, primarily concerned with analyzing mental life into its basic elements. While his work emphasized the analysis of sensory experience, he also turned his attention to the problems of emotion and personality. Wundt argued that the classic fourfold typology of Hippocrates, Galen, and Kant could be understood in terms of emotional arousal.
Instead of slotting people into four discrete categories, Wundt proposed that people be classified in terms of two continuous dimensions reflecting their characteristic speed and intensity of emotional arousal. In this way, Kant's categorical system was transformed into a dimensional system, in which individuals could be located not in categories but as points in two-dimensional space. The classic fourfold types described those individuals who fell along the diagonals of the system.
Abandonment of categorical types in favor of the dimensional scheme proposed by Wundt seemed to allow the differences between people to be represented more accurately. Such a solution also appealed to Allport, who celebrated the individual, and who was more interested in studying a person's unique combination of attributes than in studying whole classes of people or people in general.
Wundt's solution to the problem of types initiated a tradition in the scientific study of personality known as trait psychology. In the present context, however, and with the benefit of hindsight (because Wundt, like everyone else of his time, implicitly or explicitly embraced the classical view of categories as proper sets), we can view Wundt's work as an early anticipation of the prototype view of categories -- and, specifically, of the dimensional view of concepts. In this view, the instances of each type occupy their respective quadrants in a two-dimensional space; and the "prototypical" instance of each type lies at a point representing the average of all members of that category.
A line of research initiated by Nancy Cantor was the first conscious, deliberate application of the prototype view of categories to the problem of person concepts. Cantor's hypothesis was simply that person categories, like other natural categories, were structured as fuzzy sets and represented by category prototypes.
In one set of studies Cantor & Mischel (1979; Walter Mischel was Cantor's graduate advisor) devised four three-level hierarchies of person categories. These categories were not those of the classic fourfold typology, but rather were selected to be more recognizable to nonprofessionals.
Because these categories were somewhat artificial, Cantor & Mischel first asked a group of subjects to sort the various types into four large categories, and then to sort each of these into subcategories. Employing a statistical technique known as hierarchical cluster analysis, they showed that subjects' category judgments largely replicated the structure that they had intended to build into the experiment.
Next, Cantor & Mischel asked subjects to list the attributes that were "characteristic of and common to" each type of person. They then compiled a list of features listed by more than 20% of the subjects, and asked a new group of subjects to rate the percentage of members in each category who would possess that feature. Finally, Cantor & Mischel assembled "consensual" feature lists for each of the categories, consisting of those features that had been rated by at least 50% of the subjects as "characteristic of and common to" the category members.
Cantor & Mischel found, as would be expected from the prototype view, that examples within a particular category shared relatively few features, but were held together by a kind of family-resemblance structure, centered around prototypical examples of each type at each level.
From the data they had collected, Cantor & Mischel asked whether there was a "basic level" of person categorization, as Rosch and her colleagues (1976) had found for object categories. Recall that the basic level of categorization is the most inclusive level at which:
Counting the number of consensual attributes at each level of categorization, Cantor & Mischel found, analogous to Rosch et al. (1976), that the middle level of their hierarchy -- the level of phobic and criminal madman, religious devotee and social activist -- seemed to function like a basic level for person categorization.
For example, moving from the superordinate level to the middle level produced a greater increase in the number of attributes associated with the category, compared to the shift from the middle level to the subordinate level. In other words, the middle level maximized the information value of the category.
This was also the case for specific attributes, such as physical appearance and possessions.
Things didn't work out quite so neatly for socioeconomic status, where the subordinate level seemed to provide a greater increment in information (but there wasn't much socioeconomic information to begin with).
Cantor & Mischel's work on person prototypes and basic levels was a pioneering application of the prototype view to social categorization, but Roger Brown (in his unpublished 1980 Katz-Newcomb Lecture at the University of Michigan) offered a gentle criticism -- which was that the categories they worked with seemed a little artificial. Accordingly, and following even more closely the method of Rosch et al. (interestingly, Rosch had worked with Brown as an undergraduate and graduate student at Harvard, where she did her pioneering work on color categorization and the Sapir-Whorf hypothesis), Brown began by identifying the most frequent words in English that label categories of persons. In a somewhat impressionistic analysis, Brown suggested that terms like boy and girl, grandmother and grandfather, and lawyer and poet functioned as the basic-object level for person categorization.
Note to aspiring graduate (and undergraduate) students. Brown's analysis was inspired by that of Rosch et al., but it was not at all quantitative. Somebody who wanted to make a huge contribution to the study of person categorization could do a lot worse than to start where Brown did, with the dictionary, and perform the same sorts of quantitative analyses that Rosch did. Any takers?
An excellent example of the featural prototype view of social categorization is supplied by psychiatric diagnosis. Essentially, medical diagnosis of all sorts not only classifies an illness; it also classifies the person who suffers from that illness. This is especially the case for psychiatric diagnosis, which has terms like schizophrenic, depressive, and neurotic. The other medical specialties rarely have terms like canceric, fluic, or heart-attackic. The various symptoms (hallucinations, anxiety, delusions) are features, and the various syndromes (schizophrenia, bipolar disorder, obsessive-compulsive disorder) are categories. Categorization proceeds by feature matching, in which the patient's symptoms are matched to the symptoms associated with each syndrome. The Diagnostic and Statistical Manual (DSM) of the American Psychiatric Association, currently in its 5th edition (2013), constitutes an official list of the diagnostic categories and the features associated with them.
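The feature-matching process just described can be sketched as follows. The symptom lists here are toy placeholders, not actual DSM criteria:

```python
# Diagnosis as feature matching: the patient's presenting symptoms
# are compared against the feature list of each syndrome, and the
# best-overlapping syndrome wins. Symptom lists are invented examples.

syndromes = {
    "schizophrenia": {"hallucinations", "delusions", "disorganized speech"},
    "major depression": {"depressed mood", "anhedonia", "insomnia"},
}

def best_match(symptoms):
    """Return the syndrome whose features overlap most with the
    patient's symptoms."""
    return max(syndromes, key=lambda s: len(symptoms & syndromes[s]))

patient = {"delusions", "insomnia", "hallucinations"}
print(best_match(patient))  # schizophrenia (2 matching features vs. 1)
```

Note that the match is graded, not all-or-none: the patient here shares a symptom with both syndromes, and the diagnosis simply reflects the greater overlap -- exactly the probabilistic logic of the prototype view.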
Note, as a matter of historical
interest, the tremendous growth of DSM
since the first edition in
1952. This may represent a
real increase in knowledge about
mental illness. Or it may
represent a "medicalization", or
"pathologization", of ordinary
problems in living.
Early approaches to
psychiatric diagnosis, at least tacitly, construed the
diagnostic categories as proper sets, defined by a set of
symptoms that were singly necessary and jointly sufficient
to assign a particular diagnosis to a particular patient.
For example, Janet (1907) distinguished between subcategories of neurosis, namely hysteria and psychasthenia, and identified the defining "stigmata" of hysteria.
The construal of
the diagnostic categories as proper sets, to the extent
that anyone thought about it at all, almost certainly
reflected the classical view of categories handed down
from the time of Aristotle. Indeed, much of the
dissatisfaction with psychiatric diagnosis, at least among
those inclined toward diagnosis in the first place,
stemmed from the problems of partial and combined
expression (e.g., Eysenck, 1961). Many patients did not fit
into the traditional diagnostic categories, either because
they did not display all the defining features of a
particular syndrome, or because they displayed features
characteristic of two or more contrasting syndromes.
This worked for a while.
In the 1970s, however, psychologists and other cognitive scientists began to discuss problems with the classical view of categories as proper sets, and to propose other models, including the probabilistic or prototype model (for a review of these problems, and an explication of the prototype model, see Smith & Medin, 1981). According to the prototype view, categories are fuzzy sets, lacking sharp boundaries between them. The members of categories are united by family resemblance rather than a package of defining features. Just as a child may have her mother's nose and her father's eyes, so the instances of a category share a set of characteristic features that are only probabilistically associated with category membership. No feature is singly necessary, and no set of features is jointly sufficient, to define the category. Categories are represented by prototypes, which possess many features characteristic of the target category, and few features characteristic of contrasting categories.
DSM-III and DSM-IV marked a shift in the structure of the psychiatric nosology, in which the diagnostic categories were re-construed as fuzzy sets rather than proper sets, represented by category prototypes rather than a list of defining symptoms.
This "fuzzy-set" structure of the diagnostic categories has continued with DSM-5 (which dropped the Roman numerals). The precise criteria for certain diagnoses, such as schizophrenia or major depressive disorder, may have changed somewhat, but the emphasis on characteristic rather than defining symptoms, and therefore the allowance for considerable heterogeneity within categories, remains constant.
The prototype view solves the problems of partial and combined expression, and in fact a seminal series of studies by Cantor and her colleagues, based largely on data collected before the publication of DSM-III, showed that mental-health professionals tended to follow it, rather than the classical view, when actually assigning diagnostic labels (Cantor & Genero, 1986; Cantor, Smith, French, & Mezzich, 1980; Genero & Cantor, 1987). In a striking instance of art imitating life, DSM-III tacitly adopted the prototype view in proposing rules for psychiatric diagnosis. For example, DSM-III permits the diagnosis of schizophrenia if the patient presents any one of six symptoms during the acute phase of the illness, and any two of eight symptoms during the chronic phase. Thus, to simplify somewhat (but only somewhat) two patients -- one with bizarre delusions, social isolation, and markedly peculiar behavior, and the other with auditory hallucinations, marked impairment in role functioning, and blunted, flat, or (emphasis added) inappropriate affect -- could both be diagnosed with schizophrenia. No symptom is singly necessary, and no package of symptoms is jointly sufficient, to diagnose schizophrenia as opposed to something else. Although the packaging of symptoms changed somewhat, DSM-IV followed suit.
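The disjunctive, "no symptom necessary" logic of such diagnostic rules can be made concrete in a short sketch. The symptom lists below are placeholders, not the actual DSM-III criteria:

```python
# Sketch of a DSM-III-style disjunctive rule: any ONE symptom from the
# acute list plus any TWO from the chronic list suffices; no single
# symptom is necessary. Symptom lists are invented placeholders.

ACUTE = {"bizarre delusions", "auditory hallucinations", "incoherence",
         "catatonic behavior", "flat affect", "prominent delusions"}
CHRONIC = {"social isolation", "peculiar behavior",
           "impaired role functioning", "blunted affect", "odd speech",
           "magical thinking", "unusual perceptions", "lack of initiative"}

def meets_criteria(acute_symptoms, chronic_symptoms):
    n_acute = len(set(acute_symptoms) & ACUTE)
    n_chronic = len(set(chronic_symptoms) & CHRONIC)
    return n_acute >= 1 and n_chronic >= 2

# Two patients with no symptoms in common can both qualify:
p1 = meets_criteria({"bizarre delusions"},
                    {"social isolation", "peculiar behavior"})
p2 = meets_criteria({"auditory hallucinations"},
                    {"impaired role functioning", "blunted affect"})
print(p1, p2)  # True True
```

The two hypothetical patients satisfy the rule through entirely disjoint symptom sets, which is precisely the point: category members need share no feature at all, only a family resemblance.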
Of course, other views of categorization have emerged since the prototype view, including an exemplar view and a theory-based view. These models, too, have been applied to psychiatric diagnosis. For example, research by Cantor and Genero showed that psychiatric experts tended to diagnose by comparing patients to category exemplars, while psychiatric novices tended to rely on category prototypes.
DSM-5 was published in 2013. It was also organized along probabilistic, prototypical lines -- suggesting that the diagnostic categories themselves, and not just the categorization process, are organized as fuzzy sets. However, this is not enough for many psychologists, who seek to embrace another basis for diagnosis entirely.
This is more than a debate over whether one diagnosis or another should be included in the new nomenclature. Some colleagues, heirs of the psychodynamically and psychosocially oriented clinicians who dominated American psychiatry before the neo-Kraepelinian revolution, wish to abandon diagnosis entirely. So do contemporary anti-psychiatrists -- though for quite different reasons. Classical behavior therapists also abjure diagnosis, seeking to modify individual symptoms without paying much attention to syndromes and diseases. For these groups, the best DSM is no DSM at all. Beyond these essentially ideological critiques, there appear to be essentially two (not unrelated) points of view: one that seeks only to put diagnosis on a firmer empirical basis, and another which seeks to substitute a dimensional for a categorical structure for the diagnostic nosology. Both seek to abandon the medical model of psychopathology represented by the neo-Kraepelinians who formulated DSM-III and DSM-IV.
The empirical critique is exemplified by Blashfield (1985), who has been critical of the "intuitive" (p. 116) way in which the neo-Kraepelinians did their work, and who wants the diagnostic system to be placed on firmer empirical grounds. For Blashfield and others like him, a more valid set of diagnostic categories will be produced by the application of multivariate techniques, such as factor analysis and cluster analysis, which will really "carve nature at its joints", showing what really goes with what. The result may very well be a nosology organized along fuzzy-set lines, as DSM-III was and DSM-IV is. But at least diagnosis will not depend on the intuitions of a group of professionals imbued with the traditional nomenclature. If schizophrenia or some other traditional syndrome fails to appear in one of the factors or clusters, that's the way the cookie crumbles: schizophrenia will have to be dropped from the nomenclature. Less radically, the analysis may yield a syndrome resembling schizophrenia in important respects, but the empirically observed pattern of correlations or co-occurrences may require revision in specific diagnostic criteria.
While Blashfield (1985) appears to be agnostic about whether a new diagnostic system should be categorical or dimensional in nature, so long as it is adequately grounded in empirical data, other psychologists, viewing diagnosis from the standpoint of personality assessment, want to opt for a dimensional alternative. Exemplifying this perspective are Clark, Watson, and their colleagues (Clark, Watson, & Reynolds, 1995; Watson, Clark, & Harkness, 1994). They argue that categorical models of psychopathology are challenged by such problems as comorbidity (e.g., the possibility that a single person might satisfy criteria for both schizophrenia and affective disorder) and heterogeneity (e.g., the fact that the present system allows two people with the same diagnosis to present entirely different patterns of symptoms). Clark et al. (1995) are also bothered by the frequent provision in DSM-IV of a subcategory of "not otherwise specified", which really does seem to be a mechanism for assigning diagnoses that do not really fit; and by a forced separation between some Axis I diagnoses (e.g., schizophrenia), and their cognate personality disorders on Axis II (e.g., schizotypal personality disorder).
Clark and Watson's points (some of which are essentially reformulations of the problems of partial and combined expression) are well taken, and it is clear -- and has been clear at least since the time of Eysenck (1961) -- that a shift to a dimensional structure would go a long way toward addressing them. At the same time, such a shift is not the only possible fix. After all, heterogeneity is precisely the problem which probabilistic models of categorization are designed to address (the exemplar and theory-based models also address them), although it seems possible that such categories as schizophrenia, as defined in DSM-III and DSM-IV may be a little too heterogeneous. Comorbidity is a problem only if diagnoses label people, rather than diseases. After all, dual diagnosis has been a fixture in work on alcohol and drug abuse, mental retardation, and other disorders at least since the 1980s (e.g., Penick, Nickel, Cantrell, & Powell, 1990; Woody, McClellan, & Bedrick, 1995; Zimberg, 1993). There is no a priori reason why a person cannot suffer from both schizophrenia and affective disorder, just as a person can suffer from both cancer and heart disease.
There is no doubt that the diagnostic nosology should be put on a firmer empirical basis, and it may well be that a shift from a categorical to a dimensional structure will improve the reliability and validity of the enterprise. It should be noted, however, that both proposals essentially represent alternative ways of handling information about symptoms -- subjectively experienced and/or publicly observable manifestations of underlying disease processes. So long as they remain focused on symptoms, proposals for revision of the psychiatric nomenclature, nosology, and diagnosis amount to rearranging the deckchairs on the Titanic. Instead of debating alternative ways of handling information about symptoms, we should be moving beyond symptoms to diagnosis based on underlying pathology. In doing so, we would be honoring Kraepelin rather than repealing his principles, and following in the best tradition of the medical model of psychopathology, rather than abandoning it.
The theory view of concepts has not been systematically applied to the categorization of persons, but there are hints of the theory view in the literature. In particular, some person categories seem to be very heterogeneous. What unites the members of these categories is not so much similarity in their personality traits, but rather similarity in the theory of how they acquired the traits they have.
The character types of Freud's psychoanalytic theory of personality and psychotherapy (oral, anal, phallic, and genital) can be very heterogeneous -- so heterogeneous that they seem, at first glance, to have little or nothing in common at all. Yet, from a Freudian perspective, all "anal types" have in common that they are fixated at, or have regressed to, the anal stage of psychosexual development. Similarly, the Freudian theory of depression holds that all depressives, despite variability in their superficial symptoms, have in common the fact that they have introjected their aggressive tendencies.
Similarly, a prominent trend among certain mental-health practitioners is to identify people as "survivors" of particular events -- a tendency that has sometimes been called victimology. For example: survivors of child abuse (including incest and other forms of sexual abuse), adult children of alcoholics, and children (or grandchildren) of Holocaust survivors. An interesting feature of these groups is that the members in them can be very different from each other. For example, it is claimed that one survivor of childhood sexual abuse may wear loose-fitting clothing, while another may dress in a manner that is highly provocative sexually (Blume; Walker).
Without denying that such individuals can sometimes face very serious personal problems, in the present context what is interesting about such groupings is that the people in them seem to have little in common except for the fact that they are survivors (or victims) of something. They are similar with respect to that circumstance only. While in some ways survivor might be a proper set, with survivorhood as the sole singly necessary and jointly sufficient defining feature of the category, the role of "survivorhood" seems to go deeper than this -- if only because there are some victims of these circumstances who have no personal difficulties, and for that matter may not identify themselves as "survivors" of anything. In the final analysis, the category seems to be defined by a theory -- e.g., that category members, however diverse and heterogeneous they may be, got the way they are by virtue of their victimization.
Just as personality types can be viewed as categories of people (which, after all, is precisely what they are), so personality traits can be viewed as categories of behavior. Of course, traits -- superordinate traits, anyway -- are also categories of subordinate traits (see the lecture supplement on the Cognitive Perspective on Social Psychology).
Consider, for example, the Big Five personality traits. As described by Norman (1968), among others, each of these dimensions (e.g., extraversion) is a superordinate dimension subsuming a number of other, subordinate dimensions (e.g., talkative, frank, adventurous, and sociable). Presumably, each of these subordinate dimensions contributes to the person's overall score on extraversion.
However, the first investigator to apply the prototype view to traits, as opposed to types, was Sarah Hampson, whose work was explicitly inspired by Cantor's work on personality types. In one study, Hampson (1982) presented subjects with a list of 60 trait terms, and had them rate the corresponding trait-related behaviors on a scale of imaginability. Subjects found it easiest to imagine helpful behaviors, and hardest to imagine important behaviors.
Then Hampson asked subjects to generate 6 example behaviors for each trait, and had them rate the ease of doing so. Perhaps not surprisingly, the subjects found it easier to think of behaviors related to traits of high imaginability, compared to traits of low imaginability.
In the next step, Hampson presented subjects with the behaviors generated in the previous step, and asked them to rate how "prototypical" they were of the trait in question. Again perhaps not surprisingly, the subjects rated behaviors from highly imaginable traits to be relatively prototypical of those traits (in Hampson's original report, highly prototypical traits were given ratings of "1"; these ratings have been reversed for the slide).
Finally, Hampson asked subjects to assign the behaviors to their appropriate categories. Subjects made fewer categorization errors with highly prototypical behaviors -- even if they were related to traits that were not highly "imaginable".
From these and other results, Hampson concluded that traits are indeed categories of behaviors, and that trait-related behaviors varied in their prototypicality.
Buss & Craik (1983),
working at UCB, also applied the prototype view to
personality traits. Although trait psychology is
generally associated with the biophysical view of traits as
behavioral dispositions, these investigators, like Hampson,
viewed traits simply as labels for action tendencies, with
no causal implications. From this point of view,
traits are not explanations of behavior, but rather summaries of the regularities in a person's conduct. Put another way, traits are simply categories of actions, or equivalence classes of behavior.
In order to study the internal structure of trait categories, Buss & Craik examined six categories of acts. For each category, they asked subjects to list behaviors that exemplified the trait, and then asked them to rate the prototypicality of the behaviors with respect to each trait. They found that the various acts did indeed vary in prototypicality.
Equally important, if not more so, they found that many behaviors were associated with multiple traits, although most behaviors were more "prototypical" of one trait than another.
These findings are consistent with the view of traits as fuzzy sets of trait-related behaviors, each represented by a set of "prototypical" behaviors which exemplify the trait in question. The act-frequency model of traits implies that people will label prototypical acts more readily than nonprototypical ones, and that trait attributions will vary with the prototypicality of the target's actions. It also implies that people will remember more prototypical acts better than less prototypical ones.
In social cognition we have categories of persons, represented by types, and categories of behaviors, represented by traits. We also have categories of the situations where persons meet to exchange social behaviors.
Recognizing that a theory of social cognition must take account of situations as well as of persons, Cantor and her colleagues explored the structure of situation categories in much the same manner as they had earlier explored the structure of person categories (Cantor, Mischel, & Schwartz, 1982).
In their research, they constructed four three-level
hierarchies, paralleling those that they had used in the
earlier person studies.
Based on subjects' performance in a variety of feature-listing and classification tasks, Cantor et al. concluded that situational categories were fuzzy sets, with no defining features, represented by prototypes possessing many "prototypical" features of the situations in question.
From a social-psychological perspective, social situations are defined less by their physical characteristics than by the expectations and behaviors that occur within them. Accordingly, a rather different approach to situational concepts is provided by the notion of social scripts, or scripts for social behaviors.
To some extent, the notion of scripts stems from the role theory of social behavior, as articulated by Theodore R. Sarbin (1954; Sarbin & Allen, 1968). Although role theory has chiefly been applied to the analysis of hypnosis (e.g., Sarbin & Coe, 1972), in principle it is applicable to any kind of social interaction. Role theory is based on a dramaturgical metaphor for social behavior, in which individuals are construed as actors, playing roles, according to scripts, in front of an audience. In a very real sense, situations are defined by the scripts that are played out within them.
The important elements in Sarbin's role theory are:
enactment, in turn, is a function of a number of different
It is unclear to what extent Sarbin (who was a professor here at UCB before he became part of the founding faculty at UC Santa Cruz) thinks that we are "only" playing roles in our social behavior. He wishes to stress that, in talking about "role-playing", he does not think there is anything insincere about our social interactions. It's just in the nature of social life that we are always playing some kind of role, and following some kind of script. But in the present context, roles, and the scripts associated with them, may be thought of as collections of the behaviors associated with various situations. The situations we are in determine the roles we play, and the roles we play define the situations we're in.
Another early expression of the script idea came from John H. Gagnon and William Simon, who applied the script concept to sexual behavior (e.g., Gagnon & Simon, 1973; Gagnon, 1974; Simon, 1974). Not to go into detail, they noted that sexual activity often seemed to follow a script, beginning with kissing, proceeding to touching, then undressing (conventionally, first her and then him, by Simon's account), and then... -- to be followed by a cigarette and sleep. There are variations, of course, and the script may be played out over the minutes and hours of a single encounter, or over days, weeks, and months as a couple moves to "first base", "second base", "third base", and beyond. But Gagnon and Simon's insight is that there is something like a script being followed -- a script learned through the process of sexual socialization.
Although Gagnon and Simon focused their analysis of scripts on sexual behavior, they made it clear that sexual scripts were simply "a subclass of the general category of scripted social behavior" (Gagnon, 1974, p. 29). As Gagnon noted (1974, p. 29):
The concept script shares certain similarities with the concepts of plans or schemes in that it is a unit large enough to comprehend symbolic and non-verbal elements in an organized and time-bound sequence of conduct through which persons both envisage future behavior and check on the quality of ongoing conduct. Such scripts name the actors, describe their qualities, indicate the motives for the behavior of the participants, and set the sequence of appropriate activities, both verbal and nonverbal, that should take place to conclude behavior successfully and allow transitions into new activities. The relation of such scripts to concrete behavior is quite complex and indirect; they are neither direct reflections of any concrete situation nor are they surprise-free in their capacity to control any concrete situation. They are often relatively incomplete, that is, they do not specify every act and the order in which it is to occur; indeed..., the incompleteness of specification is required, since in any concrete situation many of the sub-elements of the script must be carried out without the actor noticing that he or she is performing them. They have a major advantage over concrete behavior, however, in that they are manipulable in terms of their content, sequence, and symbolic valuations, often without reference to any concrete situation. We commonly call this process of symbolic reorganization a fantasy when it appears that there is no situation in which a script in its reorganized form may be tested or performed, but in fact, such apparently inapplicable scripts have significant value even in situations which do not contain all or even any of the concrete elements which exist in the symbolic map offered by the script.
In later work, Gagnon, Simon, and their colleagues have
distinguished among three different levels of scripting:
The notion of scripts in role theory (especially), and even in
sexual script theory, is relatively informal. Just
what goes into scripts, and how they are structured, was
discussed in detail by Schank & Abelson (1977), who went
so far as to write script theory in the form of an operating
computer program -- an exercise in artificial
intelligence applied to the domain of social
cognition. Schank and Abelson based their scripts on conceptual
dependency theory (Schank, 1975), which attempts to
represent the meaning of sentences in terms of a relatively
small set of primitive elements. Included in
these primitive elements are primitive acts such as:
Schank and Abelson illustrate their approach with what they call the Restaurant Script.
Entering the Restaurant
Customer PTRANS Customer into restaurant
Customer MOVE Customer to sitting position
Customer MTRANS Signal to Waiter
Cook ATRANS Food to Waiter
Waiter PTRANS Food to Customer
Customer INGEST Food
Waiter ATRANS Check to Customer
Customer PTRANS Customer out of restaurant.
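The sequence above can be sketched as a simple data structure. This is a loose illustration of the idea, not Schank and Abelson's actual program; the ordering of the list stands in for the temporal and enabling relations among the events.

```python
# A loose sketch of the Restaurant Script as an ordered sequence of
# (actor, primitive act, object) triples. The enabling relations are
# implicit in the ordering: each event must precede those that depend on it.
restaurant_script = [
    ("Customer", "PTRANS", "Customer into restaurant"),
    ("Customer", "MOVE",   "Customer to sitting position"),
    ("Customer", "MTRANS", "Signal to Waiter"),
    ("Cook",     "ATRANS", "Food to Waiter"),
    ("Waiter",   "PTRANS", "Food to Customer"),
    ("Customer", "INGEST", "Food"),
    ("Waiter",   "ATRANS", "Check to Customer"),
    ("Customer", "PTRANS", "Customer out of restaurant"),
]

def events_before(script, event_object):
    """Everything that must already have happened before a given event --
    e.g., the customer cannot eat before the food has been delivered."""
    for i, (_actor, _act, obj) in enumerate(script):
        if obj == event_object:
            return script[:i]
    return []

print(len(events_before(restaurant_script, "Food")))  # 5 events enable eating
```

A representation like this makes the point in the next paragraph concrete: a script is more than a feature list, because the sequence itself encodes what enables what.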
Although script theory attempts to specify the major elements of a
social interaction in terms of a relatively small list of
conceptual primitives, Schank and Abelson also recognized
that scripts are incomplete.
Scripts are, in some sense, prototypes of social situations, because they list the features of these situations and the social interactions that take place within them. But they go beyond prototypes to specify the relations -- particularly the temporal, causal, and enabling relations -- among these features. The customer orders food before the waiter brings it, and the customer can't leave until he pays the check, but he can't pay the check until the waiter brings it.
In any event, scripts enable us to categorize social situations: we can determine what situation we are in by matching its features to the prototypical features of various scripts we know. And, having categorized the situation in terms of some script, that script will then serve to guide our social interactions within that situation. By specifying the temporal, causal, and enabling relations among various actions, the script enables us to know how to respond to what occurs in that situation.
As noted earlier, social
stereotypes are categories of persons. Walter Lippmann
(1922), the journalist and political analyst who coined the
term, defined a stereotype as "an oversimplified picture of
the world, one that satisfies a need to see the world as
more understandable than it really is". That's true,
but from a cognitive point of view stereotypes are simply
emotionally laden social categories -- they are a conception
of the character of an outgroup that is shared by
members of an ingroup. Of course, outgroup
members can share stereotypes concerning the ingroup, as
well -- but in this case, the roles of ingroup and outgroup
are simply reversed. It's also true that ingroup
members can share a stereotype of themselves.
Judd and Park (1993)
summarize the general social-psychological understanding
of stereotypes as follows:
As cognitive concepts, stereotypes have an inductive aspect, in that they attribute to an entire group features of a single group member (or a small subset of the group); and a deductive aspect, in that they attribute to every member of the group the features ascribed to the group as a whole.
Stereotypes have a number of functions, and over
the years a number of theoretical accounts have been given of them.
Certainly stereotypes look like other categories. Social stereotypes consist of a list of features (like traits) that are held to be characteristic of some group. Alternatively (or in addition), they can consist of a list of exemplars of individuals who are representative of the group. Sometimes, though not often enough, stereotypes acknowledge variability among individual group members, or exceptions to the rule. Arguably, there's more to social stereotypes than a simple list of features associated with various groups. Certainly, features such as various traits play a major role in the content of social stereotypes. But social stereotypes probably also contain information about the variability that surrounds each of these features (not just their central tendency). Moreover, if we believe the exemplar view of categories (and we should), stereotypes as categories may be represented not just by a list of features, but also by instances of the category -- including exceptional instances. When, during the 2008 presidential race, Senate Majority Leader Harry Reid said that the American electorate might be ready to elect a "light skinned" African-American "with no Negro dialect", he was certainly referring to features associated with the African-American stereotype; but he probably also had in mind an older generation of African-American political figures, such as Jesse Jackson or Al Sharpton.
Most studies of stereotypes have opted for the "featural" view,
and have sought to identify sets of features believed to be
characteristic of various social groups.
Among the most famous of these is
the "Princeton Trilogy" of studies of social
stereotypes held by Princeton University
undergraduates. In a classic study, Katz & Braly
(1933) presented their subjects with a list of adjectives,
and asked them to check off all that applied to the members
of particular racial, ethnic, and national groups.
These can be thought of as the features associated with the
concepts of American, German, etc.
Essentially the same study was repeated after World War II
by Gilbert (1951), in the late 1960s by Karlins et al.
(1969). There were also follow-up studies, conducted
at other universities, by Dovidio & Gaertner (1986) and
Devine & Elliot (1995). The slide at the right shows the traits
most frequently associated with the "American" and "German" groups.
Interestingly, when the Katz and Braly (1933) study was repeated, these stereotypes proved remarkably stable (e.g., Gilbert, 1951). For example, the stereotype of Germans collected in 1967 by Karlins, Coffman, & Walters (1969) showed considerable overlap with the one uncovered by Katz and Braly in the 1930s. Some traits dropped out, and others were added, but most traits remained constant across more than three decades. Still, there were interesting differences from one study to the next (Devine & Elliot, 1995).
The conventional interpretation of the Princeton Trilogy was
of a general fading of negative stereotypes over time,
presumably reflecting societal changes toward greater
acceptance of diversity, reductions in overt racism, and
trends toward liberalism and cosmopolitanism. The
three generations covered by the Trilogy tended to include
different traits in their stereotypes, with decreased
consistency, and especially diminished negative
valence. They still had stereotypes, but they were
(quoting President George H.W. Bush, '41) "kinder and
gentler" than before.
At the same time, Devine and Elliot (1995) identified some methodological problems with the Trilogy -- especially those studies that followed up on the original 1933 study by Katz & Braly. For one thing, these studies used an adjective set that, having been generated in the early 1930s, might have been outdated, and thus may have failed to capture the stereotypes in play at the time the later studies were done. More importantly, the instructions given to subjects in the followup studies were ambiguous, because they did not distinguish between subjects' knowledge of the stereotype and their acceptance of it. It is one thing to know what your ingroup at large thinks about Germans or African-Americans, but another thing entirely to believe it yourself. D&E argued that cultural stereotypes were not the same as personal beliefs.
The results were fascinating. Like Katz and Braly (1933), but unlike the later studies in the Princeton Trilogy (including the follow-up by Dovidio & Gaertner), Devine and Elliot found a high degree of uniformity in whites' stereotype of African-Americans. They also found comparable levels of negativity. Apparently, both the low degree of uniformity found by the later studies, and the decreasing levels of negativity, stemmed from their failure to clearly distinguish between stereotypes and personal beliefs. As predicted, both high- and low-prejudiced subjects acknowledged the same stereotype concerning African-Americans. But when it came to personal beliefs, the highly prejudiced subjects' beliefs were more congruent, and the low-prejudiced subjects' beliefs more incongruent, with the cultural stereotype.
Devine and Elliot argued that stereotypes are automatically activated by the stimulus of an outgroup member (e.g., physically present or depicted in media). In highly prejudiced individuals, this stereotype is then translated into prejudicial or discriminatory behavior. In low-prejudice individuals, the automatic activation still occurs, but its translation into negative behavior is consciously controlled. However, this conscious control requires time, effort, and consumes cognitive capacity. So, even unprejudiced individuals may act on stereotypes, depending on the circumstances.
So stereotypes are social categories, but again the question is how they are structured. Surely nobody thinks that all Americans are industrious or all Germans scientifically minded, so the features associated with these categories are neither singly necessary nor jointly sufficient to define the category. What does it mean when people stereotype Germans as "scientifically minded" and "extremely nationalistic"? Not, surely, that these features are true of all Germans. Nor even, perhaps, that they are true of most. Maybe they are typical -- but what, after all, does "typical" mean?
Some sense of
how features get associated with stereotypes was provided by
McCauley & Stitt (1978), following earlier analyses by
Brigham (1969, 1971). They took certain
characteristics of the German stereotype, and asked subjects
to rate them in two ways:
Employing Bayes' Theorem, they calculated the diagnostic ratio of p(T|G) -- the judged probability that a person has trait T, given membership in group G -- to p(T), the judged base rate of the trait in people at large. The result was that features such as efficient, nationalistic, industrious, and scientific, which are associated with the German stereotype, are thought to occur more frequently in Germans than in people at large (never mind whether they actually do -- we're dealing with subjective beliefs here, not objective reality).
In other words, features are associated with a stereotype if they occur -- or are believed to occur -- relatively more frequently among members of the stereotyped group than in other groups. Stereotype traits need not be present in all group members -- remember that Lippmann himself described stereotypes as over-broad generalizations; nor need they be present in a majority of group members. They may even be less frequent in group members than traits that are not part of the stereotype. In order to enter into a stereotype, traits need only be relatively more probable in group members compared to another group (e.g., an ingroup), or to the population as a whole.
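The arithmetic behind the diagnostic ratio can be sketched as follows. The probability estimates here are invented for illustration; they are not McCauley and Stitt's actual data.

```python
# Sketch of the diagnostic-ratio measure: p(T|G) / p(T).
# The probabilities below are hypothetical illustrations, not real data.

def diagnostic_ratio(p_trait_given_group, p_trait_baserate):
    """Ratio of p(T|G) to p(T): values > 1 mean the trait is believed
    to be relatively more common in the group than in people at large."""
    return p_trait_given_group / p_trait_baserate

# Hypothetical mean subjective probability estimates for "efficient":
p_efficient_given_german = 0.70  # "What percent of Germans are efficient?"
p_efficient_baseline = 0.50      # "What percent of people in general are?"

ratio = diagnostic_ratio(p_efficient_given_german, p_efficient_baseline)
print(round(ratio, 2))  # 1.4 -- above 1, so the trait enters the stereotype
```

Note that the trait need not be judged common in the group at all: a trait believed present in only 20% of group members, against a 10% base rate, yields the same diagnostic ratio.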
Note, however, that the probabilities in question are subjective, not objective. They represent people's beliefs about stereotype traits, not objective reality -- including beliefs about the strength of association between the group and the trait. Viewed objectively, it may be that these traits really are more common in the stereotyped group than in an ingroup, in which case the stereotype might have a "kernel of truth" to it. But that is not necessarily the case, and even if it is, the beliefs may amplify objective reality. In either case, stereotype traits are believed to be more diagnostic than they actually are.
In the present context, the important thing is that the findings of McCauley & Stitt (1978) are exactly what we would expect, if stereotypes are fuzzy sets of features that are only probabilistically associated with category membership, and summarized by a prototype that possesses a large number of central features of the category.
Where do stereotypes come from? To some extent, of course, they are a product of social learning and socialization. In the words of the old Rodgers and Hammerstein song (from South Pacific), "You've Got to Be Carefully Taught".
But stereotypes can be based on direct as well as vicarious experience -- which is probably where the notion of a "kernel of truth" came from.
Consider, for example, the stereotype that girls and women generally have poorer quantitative and spatial skills, and better verbal skills, compared to males.
It was this notion that, in 2005, led Lawrence Summers, then president of Harvard, to suggest that there were relatively few women on Harvard's math and science faculty because of innate gender differences in math and science ability. Of course, that doesn't explain why there were more tenured males than females in Harvard's humanities departments (you can look it up to see whether that is still the case; as of 2010, it was). The most parsimonious explanation is that, whatever "innate" differences there might be, there is systematic gender bias against women getting tenure at Harvard. The brouhaha over his comments led Summers to resign the presidency shortly thereafter -- but didn't prevent him from being appointed chief economic advisor in the Obama administration.
Along these same lines, somebody, somewhere, sometime, observed some outgroup member display some intellectually or socially undesirable characteristic, or engage in some socially undesirable behavior -- some girl, somewhere, who had trouble with advanced mathematics -- and that got the ball rolling. On the other hand, people who stereotype often have limited experience with those whom they stereotype.
To continue the example of gender stereotypes concerning
mathematical, spatial and verbal abilities, what's the
evidence? Is there a "kernel of truth"?
So there is a small sex difference in mathematical ability favoring males, and a small sex difference in verbal ability favoring females. That's the "kernel of truth". But there's no evidence at all of sex differences of a magnitude great enough to warrant the gender stereotype of the math-challenged female. Even if Benbow and Stanley are right that females are underrepresented in the highest echelons of mathematical ability, there are still enough of them to fill half the tenured chairs in mathematics at Harvard. And even if they're right, that doesn't mean that the under-representation of girls in their group of "mathematically precocious youth" reflects an innate gender difference. Just because boys and girls are physically located in the same elementary and junior-high classrooms doesn't mean that their exposure to mathematics is the same.
And even if they're right, that's no reason to impose the stereotype on each individual girl or woman. Instead, each individual ought to be evaluated, and treated, on his or her own merits. To do otherwise is -- well, it's un-American. But I digress.
But most important, the fundamental fact about stereotypes is that they're essentially untrue. If it were really true that Germans were industrious, or that the average German is industrious, or that Germans are actually more likely to be industrious than non-Germans, or people at large, then -- well, it wouldn't be a stereotype, would it? So given that stereotypes involve false (or, at least, exaggerated) beliefs about Them, where do these beliefs come from?
One source of false belief is the illusory correlation, a term coined by Loren Chapman and Jean Chapman (1967, 1969). The stereotype that Germans are industrious, in this view, represents an illusory correlation between "being German" and "being industrious".
The illusory correlation comes in two forms:
Illusory correlations, in turn, have two sources:
In the experiment, members of each group were depicted as engaging
in a mix of desirable and undesirable behaviors, at a ratio
of 9 desirable to 4 undesirable for both groups, so that
there was no actual correlation between group membership and
undesirable behavior. Nevertheless, when the subjects
were asked to estimate the frequency with which undesirable
actions had been displayed by each group, the subjects
underestimated the frequency of undesirable behaviors by the
majority group, and overestimated the frequency of
undesirable behaviors by the minority group.
Similarly, when asked to rate the traits of group members,
they rated the majority higher on positive traits, and the
minority higher on negative traits. In both ways, the
subjects perceived a correlation between undesirable
behavior, and undesirable traits, and minority-group
membership -- a correlation that was entirely illusory.
A second experiment reversed the ratio, with 4 desirable behaviors to 9 undesirable behaviors, and induced an illusory correlation between minority-group status and positive actions. In both studies, subjects perceived an illusory correlation between minority-group status and infrequent behaviors.
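The key point of the design is that, objectively, there was nothing to notice. The frequencies below are illustrative, preserving the 9:4 desirable-to-undesirable ratio described in the text, with the majority group twice the size of the minority; they sketch the design rather than reproduce the original stimuli.

```python
# The design behind an illusory correlation: group membership and
# behavior desirability are objectively uncorrelated, because both
# groups show the same 9:4 ratio of desirable to undesirable acts.
from math import sqrt

# 2x2 contingency table of statement frequencies (illustrative):
a, b = 18, 8   # majority group: desirable, undesirable behaviors
c, d = 9, 4    # minority group: desirable, undesirable behaviors

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 table; 0 means no association."""
    return (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(phi_coefficient(a, b, c, d))  # 0.0 -- no objective correlation,
# yet subjects judged the minority as behaving more undesirably.
```

Because the minority group and the undesirable behaviors are both infrequent, their co-occurrence is doubly distinctive, and that is what subjects over-weight.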
Similar results were obtained by Allison and Messick (1988), who pointed out that both humans and animals have particular difficulty processing nonoccurrences -- to continue the example, instances where members of the minority group did not engage in undesirable behavior, or majority-group members did not engage in desirable behavior.
The idea that stereotypes contain a "kernel of truth" leads us to wonder just how accurate stereotypes are. If the probability of being efficient if you're a German is greater than the base rate for efficiency in people generally, just how efficient are Germans, anyway? As with social perception in general, of course, the first problem in assessing accuracy is identifying an objective criterion against which stereotype accuracy can be measured. How do you measure efficiency, and where do you find a representative sample of Germans to apply it? But we're going to set that problem aside for the moment, and talk about the problem of stereotype accuracy in the abstract.
The most thorough analysis of stereotype accuracy comes from Judd and Park (1993), who argued that stereotypes should be assessed at the level of the individual perceiver. Stereotypes may be held by members of one group (e.g., the French) about another group (e.g., Germans), but they "need not be consensually shared" (p. 110). The important thing is that stereotypes reside in the head of the individual, and shape his or her interactions with members of the stereotyped group. Viewed from this perspective, stereotype accuracy may take a variety of forms.
Here are some examples. Suppose we collect subjective ratings of some outgroup on a set of traits, some negative and some positive. The average values on each of these traits would constitute the stereotype on that group. But now suppose we also gather objective evidence of the actual standing of outgroup members on each of these traits. These would constitute the criterion against which the validity of the stereotype can be assessed. Here are some possibilities for the relationship between stereotype and reality.
One possibility is that the stereotype
is accurate. That is, the average subjective estimates
correspond closely to the actual objective standings
of the group on each of these traits. In this
case, there would be a high correlation between the
stereotype and reality, and only very small
discrepancies between the two measures.
Or, the stereotype could be a little more negative than reality.
In this case, the group's negative qualities are
believed to be just a little more negative, and its positive
qualities just a little less positive, than
is really the case. Note, however,
that the correlation between stereotype and
reality is still high. It's just that the
values are shifted
down just a bit.
Or it could be a lot more
negative. Here, the mean values have shifted further in the
negative direction, but the correlation is still high.
Here's a really, really
negative stereotype, which doesn't allow for any
positive qualities at all. All the stereotypic
values are below zero. But the correlation
between stereotype and reality is still high.
There are other possibilities, of course, including one where the correlation between stereotype and reality is essentially zero.
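The dissociation in these examples can be sketched numerically: a stereotype shifted uniformly downward from reality shows a large discrepancy but perfect correspondence. The trait values below are invented for illustration.

```python
# A stereotype can be uniformly more negative than reality (large
# discrepancy) while still tracking reality perfectly (correlation = 1.0).
from math import sqrt

reality = [2.0, 1.0, 0.5, -0.5, -1.0]    # "actual" standing on 5 traits
stereotype = [r - 1.5 for r in reality]  # beliefs shifted downward

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

correspondence = pearson_r(reality, stereotype)
discrepancy = sum(abs(s - r) for s, r in zip(stereotype, reality)) / len(reality)

print(round(correspondence, 2), discrepancy)  # 1.0 1.5
```

This is why correspondence (correlation) and discrepancy (mean absolute difference) must be reported separately: each captures a different sense in which a stereotype can be "accurate".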
OK, but now let's
turn back to the thorny problem of the criterion by which
the various aspects of stereotype accuracy can be
assessed. Judd and Park (1993) considered several
Conditioning on the Consequent: Diagonal = 82%, phi = .64
Conditioning on the Antecedent: Diagonal = 55%, phi
With such a design in hand, Judd and Park (1993) argue that all sorts of accuracy assessments are possible, following the pattern set down by Cronbach (1955), as discussed in the lectures on Social Perception.
For purposes of demonstration, Judd and Park (1993) performed a study of stereotyping by Democrats and Republicans, using data from the 1976 National Election Study in which a representative sample of American voters stated their party affiliation or preference, and were also asked about their positions on 10 policy issues, such as school busing, aid to minorities, and government health insurance (in 1976!). They were also asked where Democrats and Republicans "as a whole" stood on these same issues. The actual positions of the Democrats and Republicans served as the criterion, for comparison with the respondents' impressions of the Democratic and Republican positions.
Judd and Park (1993)
showed that, given sufficient data, it is actually
possible to assess the accuracy of stereotypes, and to
determine whether there are individual or group
differences in the tendency to stereotype. But, as
they themselves point out, we really do not have a good
sense of how accurate stereotypes are concerning social
groups other than Democrats and Republicans -- largely
because of the lack of objective criteria, but also
because of the demands of the "full-accuracy"
design. Nor, for that matter, given the massive
changes in both the Democratic and Republican parties
since 1976, do we even have a good sense about
stereotyping by Democrats and Republicans!
Extending the full-accuracy design to other social groups is a
pretty tall order -- but essential, Judd and Park (1993)
argue (I think correctly, though Judd is a former
colleague and I may be biased), if we're going to get any
handle on stereotype accuracy.
A related, but slightly different, perspective on accuracy has been presented by Lee Jussim (2012, 2015). Jussim argues that there are four aspects of stereotyping that need to be distinguished:
Considering these four aspects of stereotyping yields four types of stereotype inaccuracy, depending on whether they refer to personal or consensual stereotypes:
As in the sample scatterplots depicted above, discrepancies can be substantial, even though correspondence remains high. But discrepancy and correspondence measures can yield quite different assessments of stereotype accuracy.
In fact, as Jussim points out, there have been very few studies of stereotype accuracy -- perhaps because, as he implies, most social psychologists have shared Lippmann's view that stereotypes are inherently false. And somewhat surprisingly, he argues -- contra Lippmann and the consensus among social psychologists -- that, when accuracy has been assessed, it turns out that stereotypes appear to be a lot more accurate than we would have thought.
A popular view of stereotyping is that stereotypes are automatically elicited by the presence of a member of the stereotyped group. I'll discuss automaticity in more detail in the lectures on "Social Judgment and Inference", but for now the argument is simply that the mere presence of an outgroup member may be sufficient to activate the stereotype of that group in the mind of the perceiver. It's like the "evocation" mode of the person-environment interaction, with a twist: automatic processes are unconscious in the strict sense of the term: they operate outside of phenomenal awareness and voluntary control. Thus, the presence of a member of the stereotyped group can evoke the stereotype in a perceiver without the perceiver even realizing what is happening.
The automatic elicitation of stereotypes was nicely demonstrated in a study of race-based priming by Devine (1989). In a preliminary study, she employed a thought-listing procedure with white college students to elicit their stereotypes concerning blacks. The procedure generated such terms as poverty, poor education, low intelligence, crime, and athletics (sorry, but there's no way to describe this study without detailing the stereotype; and, let's face it, stereotypes about African Americans have a lot more relevance to contemporary American society than do stereotypes about Germans -- which is precisely why Devine did her experiment this way).
In her formal experiment, Devine asked a new set of subjects to perform a vigilance task in which they had to respond whenever they saw a target appear on a computer screen. At the same time, the screen flashed words associated with the black stereotype -- employing a "masking" procedure that effectively prevented the words from being consciously perceived. Some subjects received a high density of stereotype-relevant words, 80%; other subjects received a lower density, only 20%.
After performing this vigilance task, the subjects were asked to read the "Donald Story" (Srull & Wyer, 1979), which consists of a number of episodes in which the main character, named Donald, engages in a number of ambiguous behaviors that could be described as hostile, or could be given a more benign interpretation.
After reading the story, the subjects
were asked to evaluate Donald on a number of trait
dimensions. The general finding was that, compared to
a control group that received all "race-neutral" primes,
subjects who were primed with words relating to the
African-American stereotype rated Donald as more socially
hostile -- and the more so, the greater the density of the
race-based primes. The results are especially
interesting because the primes themselves were presented
"subliminally", outside of conscious awareness. Thus,
the subjects could not consciously connect the primes to
Donald. Apparently, presentation of the negative
racial primes activated corresponding representations in
memory, and this activation spread to the mental
representation of Donald formed when the subjects read the story.
This unconscious race-based priming occurred even in subjects who scored low in racial prejudice, as measured by the Modern Racism Scale (caveat: the MRS is actually a pretty bad instrument for assessing racial prejudice; Devine knows this, but it was the only instrument of its kind available at the time). This raises the possibility that people can be consciously egalitarian, but nonetheless harbor unconscious racial (and other) stereotypes and prejudices. Unconscious prejudice is particularly difficult to deal with, because the stereotype operates automatically -- you just can't help thinking in terms of the stereotype; and the stereotype itself may not even be consciously accessible.
Based on studies like this, Anthony Greenwald, Mahzarin Banaji, and their colleagues have developed the Implicit Association Test (IAT), a procedure which, they claim, assesses unconscious attitudes, including unconscious prejudice toward various social groups. For the record, I'm skeptical that the IAT actually does this -- but that's a discussion for another time. Nonetheless, the IAT has become extremely popular in the study of stereotyping and prejudice.
Much, perhaps most, of the evidence bearing on the concept of implicit emotion comes from recent social-psychological work on attitudes, stereotypes, and prejudice. In social psychology, attitudes have a central affective component: they are dispositions to favor or oppose certain objects, such as individuals, groups of people, or social policies, and the dimensions of favorable-unfavorable, support-oppose, pro-anti naturally map onto affective dimensions of pleasure-pain or approach-avoidance. As Thurstone put it, "attitude is the affect for or against a psychological object" (1931, p. 261). Like emotions, attitudes are generally thought of as conscious mental dispositions: people are assumed to be aware that they are opposed to nuclear power plants, or favor a women's right to choose. Similarly, people are generally believed to be aware of the stereotyped beliefs that they hold about social outgroups, and of the prejudiced behavior that they display towards members of such groups. And for that reason, attitudes and stereotypes are generally measured by asking subjects to reflect and report on their beliefs or behavior. However, Greenwald and Banaji (1995) proposed an extension of the explicit-implicit distinction into the domain of attitudes. Briefly, they suggest that people possess positive and negative implicit attitudes about themselves and other people, which affect ongoing social behavior outside of conscious awareness.
Put another way, we have a blindspot that prevents us from being aware of our own prejudices.
Following the general form of the explicit-implicit distinction applied to memory, perception, learning, and thought in the cognitive domain, we may distinguish between conscious and unconscious expressions of an attitude:
Explicit attitude refers to conscious awareness of one's favorable or unfavorable opinion concerning some object or issue.
By contrast, an implicit attitude refers to any effect on a person's ongoing experience, thought, and action that is attributable to an attitude, regardless of whether that opinion can be consciously reported. From a methodological point of view, explicit attitudes would be assessed by tasks requiring conscious reflection on one's opinions; implicit attitudes would be assessed by tasks which do not require such reflection.
An early demonstration of implicit attitudes was provided by a study of the "false fame effect" by Banaji and Greenwald (1995). In the typical false fame procedure (Jacoby, Kelley, Brown, & Jasechko, 1989), subjects are asked to study a list consisting of the names of famous and nonfamous people. Later, they are presented with another list of names, including the names studied earlier and an equal number of new names, and asked to identify the names of famous people. The general finding of this research is that subjects are more likely to identify old rather than new nonfamous names as famous -- familiarity acquired from the study list is misattributed to fame. In their adaptation, Banaji and Greenwald included both male and female names in their lists, and found that subjects were more likely to identify male names as famous. This result suggests that the average subject is more likely to associate achievement with males than with females -- a common gender stereotype.
Similarly, Blair and Banaji (1996) conducted a series of experiments in which subjects were asked to classify first names as male or female. Prior to the presentation of each target, the subjects were primed with a word representing a gender-stereotypical or gender-neutral activity, object, or profession. In general, Blair and Banaji (1996) found a gender-specific priming effect: judgments were faster when the gender connotations of the prime were congruent with the gender category of the name. This means that gender stereotypes influenced their subjects' classification behavior.
In the area of racial stereotypes, Gaertner and McLaughlin (1983) employed a conventional lexical-decision task with positive and negative words related to stereotypes of blacks and whites, and the words "black" or "white" serving as the primes. There was a priming effect when positive targets were primed by "white" rather than "black", but no priming was found for the negative targets, and this was so regardless of the subjects' scores on a self-report measure of racial prejudice. Thus, the effect of attitudes on lexical decision was independent of conscious prejudice.
Similarly, Dovidio, Evans, and Tyler (1986) employed a task in which subjects were presented with positive and negative trait labels, and asked whether the characteristic could ever be true of black or white individuals. While the judgments themselves did not differ according to race (even the most rabid racist will admit that there are some lazy whites and smart blacks), subjects were faster to endorse positive traits for whites, and to endorse negative traits for blacks. Thus, even though conscious attitudes did not discriminate between racial groups, response latencies did.
These studies, and others like them (e.g., Devine, 1989), seem to reveal the implicit influence of sexist or racist attitudes on behavior. However, at present, interpretation of these results is somewhat unclear. In the first place, the logic of the research is that stereotype-specific priming indicates that subjects actually hold the stereotype in question -- that, for example, the subjects in Blair and Banaji's (1996) experiment really (if unconsciously) believe that males are athletic and arrogant while females are caring and dependent. However, it is also possible that these priming effects reflect the subjects' abstract knowledge of stereotypical beliefs held by members of society at large, though they themselves personally reject them -- both consciously and unconsciously. Thus, a subject may know that people in general believe that ballet is for females and the gym is for males, without him- or herself sharing that belief. Even so, this knowledge may affect his or her performance on various experimental tasks, leading to the incorrect attribution of the stereotypical beliefs to the subject.
Moreover, most studies of implicit attitudes lack a comparative assessment of explicit attitudes. Implicit measures of attitudes may be useful additions to the methodological armamentarium of the social psychologist, but in the present context their interest value rests on demonstrations of dissociations between explicit and implicit expressions of emotion. Accordingly, it is important for research to show that implicit measures reveal different attitudes than those revealed explicitly. Just as the amnesic patient shows priming while failing to remember, and the repressive subject shows autonomic arousal while denying distress, we want to see subjects displaying attitudes or prejudices which they deny having, and acting on stereotypes which they deny holding.
Wittenbrink, Judd, and Park (1997) performed a formal comparison of explicit and implicit racial attitudes. Their subjects, all of whom were white, completed a variety of traditional questionnaire measures of self-reported racial attitudes. They also performed a lexical-decision task in which trait terms drawn from racial stereotypes of whites and blacks were primed with the words black, white, or table. Analysis of response latencies found, as would be anticipated from the studies described above, a race-specific priming effect: white speeded lexical judgments of positive traits, while black speeded judgments of negative traits. However, the magnitude of race-specific priming was correlated with scores on the questionnaire measures of racial prejudice. In this study, then, implicit attitudes about race were not dissociated from explicit ones. Such a finding does not undermine the use of implicit measures in research on attitudes and prejudice (Dovidio & Fazio, 1992), but a clear demonstration of a dissociation is critical if we are to accept implicit attitudes as evidence of an emotional unconscious whose contents are different from those which are accessible to phenomenal awareness.
The Implicit Association Test
In 1998, Greenwald, Banaji, and their colleagues introduced the Implicit Association Test (IAT), which is expressly designed to measure implicit attitudes. The IAT consists of a series of dichotomous judgments, which we can illustrate with a contrived "Swedish-Finnish IAT" that might be used to detect prejudice of Swedes against Finns (or vice versa).
The logic of the IAT is based on a principle of stimulus-response compatibility discovered in early research on human factors by Small (1951) and Fitts and Seeger (1953). The general principle is that subjects respond to a stimulus faster when the stimulus and response are compatible. So, for example, subjects will respond faster, and more accurately, with their left hand to a stimulus that appears on the left side of a screen, and with their right hand to stimuli on the right side. By analogy, subjects who like Swedes will respond faster when the "Swedish" category shares the same response with the "Good" category, and slower when the "Swedish" category shares the same response with the "Bad" category.
Just to make it perfectly clear:
If subjects are required to make the same response (i.e., push the same key) to Swedish names and positive words, faster responses imply an association between "Swedish" and "Good".
If subjects are required to make the same response to Finnish names and negative words, faster responses imply an association between "Finnish" and "Bad".
In this way, by comparing response latencies across the different conditions, Greenwald and Banaji proposed to measure unconscious prejudices and other attitudes, independent of self-report.
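The underlying arithmetic is simple. Here is a minimal sketch of the latency-comparison logic, using invented reaction times; the standardized variant loosely follows the spirit of the IAT's D-score (mean latency difference divided by a pooled standard deviation), but it is an illustration, not the official scoring algorithm:

```python
# Sketch of the IAT's latency-comparison logic.
# All reaction times (in ms) are invented for demonstration.
from statistics import mean, stdev

# Block where "Swedish" shares a key with "Good"
congruent_rts = [612, 654, 598, 640, 625, 671, 603, 660]
# Block where "Finnish" shares a key with "Good"
incongruent_rts = [734, 701, 769, 688, 745, 722, 710, 756]

# Raw effect: slower responses in the "Finnish + Good" block
# imply a stronger Swedish-Good association
raw_effect = mean(incongruent_rts) - mean(congruent_rts)

# D-score-style standardization: difference / pooled SD of all trials
pooled_sd = stdev(congruent_rts + incongruent_rts)
d_score = raw_effect / pooled_sd

print(f"Raw latency difference: {raw_effect:.1f} ms")
print(f"Standardized (D-style) score: {d_score:.2f}")
```

A positive score here would be read as a relative preference for "Swedish" over "Finnish"; a score near zero, as no detectable preference either way.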
Link to demonstrations of the IAT on the "Project Implicit" website.
The IAT is usually administered by computer, most often online. But there is also a paper-and-pencil version, developed for purposes of classroom demonstration. A number of these are available, in various formats. But before you take one, in an attempt to find out whether you're an unconscious racist, read on.
Explicit-Implicit Dissociations (?)
For example, in one early study, Greenwald, Banaji, and their colleagues looked at white subjects' implicit attitudes toward blacks. Subjects responded faster when stereotypically "White" names shared a response key with "Positive" than when stereotypically "Black" names shared a response key with "Positive", thus implying that they associated White with good, and Black with bad.
Greenwald et al. also measured their subjects' explicit racial attitudes with a standard technique known as the attitude thermometer, which is basically a numerical rating scale with one pole labeled "positive" and the other pole labeled "negative". The correlation (r) between IAT and the attitude thermometer varied from .07 to .30, depending on the sample.
In another study, they looked at attitudes among certain Asian ethnicities. Korean subjects responded faster when "Korean" names shared a key with "Positive", and "Japanese" names shared a key with "Negative". Japanese subjects did precisely the opposite, responding faster when "Japanese" and "Positive" shared a key than when "Korean" and "Positive" did. The implication is that Koreans associated Japanese with bad, while Japanese made the same association with Koreans.
Again, Greenwald et al. measured their subjects' explicit ethnic attitudes with the attitude thermometer. The correlation (r) between the IAT and the attitude thermometer varied from -.04 to .64, depending on the sample.
By now a huge literature has developed in which the IAT has been used to measure almost every attitude under the sun. Nosek (2007) summarized this literature with a graph showing the average explicit-implicit correlation, across a wide variety of attitude objects. These correlations varied widely, but the median explicit-implicit correlation was r = .48.
Another review, by Greenwald et al. (2009), of 122 studies, showed that the IAT correlated with external criteria of attitudes about as well as did explicit assessments such as the attitude thermometer.
Critique of the IAT
Greenwald, Banaji, and their colleagues have claimed that the explicit-implicit correlations obtained between the IAT and the attitude thermometer and other self-report measures are relatively low, and this suggests that unconscious attitudes can, indeed, be dissociated from conscious ones. But there are some problems with this argument.
In the first place, there are a number of potentially confounding factors, the most important of which may be target familiarity. Swedes may or may not like Finns, but by any standard a Finnish name like Aaltonen is going to be less familiar to a Swede than a Swedish name like Eriksson, and this difference in familiarity, more than any difference in attitude, may account for the differences in response latency.
Similarly, it may be in some sense easier for a Swede to identify a Swedish name as Swedish; he may wonder whether a Finnish name is really Finnish, as opposed to Hungarian (they're related language groups) -- or, perhaps, belongs to a member of a group of Swedish-speaking Finns! So, the issue may be task difficulty rather than attitude.
There is also a confound with task order. In the Swedish-Finnish example, Swedish shares a response with Good in Phase 3, while Finnish shares a response with Good in Phase 4. So, Swedish is paired with Good before Finnish is paired with Good. Across subjects, this problem can be solved by counterbalancing the order of tasks and averaging: some subjects do Swedish-Good before Finnish-Good, others do the reverse. But you can't counterbalance within a single subject. The upshot is that, because counterbalancing can eliminate the order confound across subjects, it might be possible to say that, for example, Swedes in general are prejudiced against Finns. But because counterbalancing can't eliminate the order confound within a single subject, it's just not possible to say that a particular Swede shares this prejudice -- or, perhaps, is a self-hating Swede who really likes Finns better.
There is also the problem of determining exactly what the person's attitude is. It is one thing for a Swede to actively dislike Finns, but it is another thing entirely for a Swede to like Finns well enough, but like Swedes better. The IAT cannot distinguish between these two quite different attitudinal positions. All it does is make an inference of relative attitude from relative reaction times.
There is also the issue of what psychometricians call construct validity. How well does the IAT predict some construct-relevant external criterion, such as a Swede's willingness to hire a Finn, or let him marry his daughter?
Another aspect of construct validity has to do with group differences. Koreans appear prejudiced against Japanese, and Japanese against Koreans, and that's what we'd expect if the IAT really measured prejudicial attitudes. But, for example, there is little evidence concerning other ingroup-outgroup differences in IAT performance. The problem is encapsulated in the title of a famous critique of the IAT: "Would Jesse Jackson fail the IAT?". If, for example, African-American subjects also "favor" whites when they take the IAT, it would be hard to characterize a Black-White IAT as a measure of prejudice against African-Americans.
The biggest problem, however, is the correlation between explicit and implicit prejudice, which Nosek reports at a median r of .48. That's not a perfect correlation of 1.00, but it's also not a zero correlation. In fact, it's a big correlation by the standards of social-science research -- and it's about as big as it can get, given the test-retest reliability of the IAT. If explicit and implicit attitudes were truly dissociable, we'd expect the explicit-implicit correlation to be a lot lower than it is. The fact that explicit-implicit correlations are typically positive, and in many cases quite substantial, suggests, to the contrary, that people's implicit attitudes are pretty much the same as their explicit attitudes. They're not dissociated, they're associated.
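The claim that a correlation of .48 is "about as big as it can get" follows from classical measurement theory: an observed correlation between two measures is capped by the square root of the product of their reliabilities. A minimal sketch of that arithmetic, using illustrative reliability values that are assumed for demonstration, not taken from the studies above:

```python
# Why measurement error caps an observed correlation.
# Reliability values below are illustrative assumptions.
from math import sqrt

r_observed = 0.48            # median explicit-implicit correlation (Nosek, 2007)
reliability_iat = 0.50       # assumed test-retest reliability of the IAT
reliability_explicit = 0.80  # assumed reliability of the self-report measure

# The largest correlation two measures can show, given their reliabilities
ceiling = sqrt(reliability_iat * reliability_explicit)

# Spearman's correction for attenuation: estimated correlation
# between the underlying constructs, free of measurement error
r_disattenuated = r_observed / ceiling

print(f"Ceiling on observed r: {ceiling:.2f}")
print(f"Disattenuated correlation: {r_disattenuated:.2f}")
```

On these assumed numbers, the ceiling is about .63, so an observed r of .48 corresponds to a disattenuated correlation near .76 -- which is the sense in which explicit and implicit measures look associated rather than dissociated.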
This last problem is compounded by the fact that improvements in the scoring procedure for the IAT have actually led to increases in the correlation between explicit and implicit attitudes. But if the IAT were really measuring unconscious attitudes, we'd expect psychometric improvements to decrease the correlation -- to strengthen the evidence that explicit and implicit attitudes are truly dissociable.
The fact that explicit and implicit attitudes are significantly correlated, and that the correlation increases with psychometric improvements, suggests instead that the IAT may be an unobtrusive measure of attitudes that are consciously accessible, but which subjects are simply reluctant to disclose -- something on the order of a lie-detector. But an unobtrusive measure of a conscious attitude shouldn't be confused with a measure of an unconscious attitude.
But even that isn't entirely clear, because of the technical problems with the IAT described earlier -- issues of target familiarity, task difficulty, distinguishing between unfavorable attitudes and those that are simply less favorable. For this reason, I think it's premature for its promoters to promote the IAT as a measure of any kind of attitude, conscious or unconscious.
The Psychologist's Fallacy
Frankly, the IAT brings us full circle, back into Freudian territory -- though without the lurid claims about primitive sexual and aggressive motives. Freud was quite content to tell people what their problems were -- that, for example, they loved their mothers and hated and feared their fathers. And when people would say it wasn't true, he would explain to them the concept of repression. And when they continued to resist, he'd tell them that their resistance only indicated that he was right. In much the same way, it's a little disturbing to find the promoters of the IAT using it to tell people that they're prejudiced, only they don't know it. Because it isn't necessarily so.
This is the problem of what William James and John Dewey called the psychologist's fallacy -- the idea that, first, every event has a psychological explanation; and, second, that the psychologist's explanation is the right one. Freud thought that he knew better than his patients what their feelings and desires were. The "IAT Corporation" (yes, there really is one, offering the IAT to government and corporate personnel and human-relations departments concerned about workforce diversity) claims to know better than you do whether you're prejudiced against African-Americans, or Hispanics, or Japanese, or Koreans.
At this point it's important to be reminded of what William James wrote about the unconscious mind. It's critical that assessments of unconscious motivation and emotion, no less than unconscious cognition, be based on the very best evidence. Otherwise, unconscious mental life will become the "tumbling-ground for whimsies" that James warned it could be.
Whether consciously or unconsciously, whether accurate or inaccurate, stereotypes exist in the mind of the perceiver, and clearly affect the judgments that the perceiver makes about target members of stereotyped groups. Obviously, stereotypes have effects on the targets, as well.
And, of course, if Devine and others are right, all of this can happen automatically and unconsciously, without the perceiver, or the target, realizing what is going on.
But the effects of stereotyping don't stop there. Stereotyping can have a host of other effects on the stereotyped individual, not all of which can be considered either outright prejudice and discrimination, or the self-fulfilling prophecy. We have already discussed stereotype threat in the context of the self-fulfilling prophecy, in the lectures on The Cognitive Basis of Social Interaction. Here's a quick review.
An early demonstration of stereotype threat was reported by Steele and Aronson (1995). They recruited black and white Stanford undergraduates for a study of reading and verbal reasoning. The subjects completed a version of the GRE verbal reasoning test under one of three conditions:
- In the Diagnostic condition, the subjects were informed that the test results would provide a personal diagnosis of their own level of verbal ability.
- In the Nondiagnostic condition, there was no reference to the assessment of individual subjects' verbal ability.
- In the Nondiagnostic Challenge condition, there was also no reference to individual assessment, but the subjects were told that the test items were intentionally very difficult.
The intention was that black subjects in the Diagnostic condition would experience stereotype threat, by virtue of the stereotype that black college students have, on average, lower intellectual abilities than white students. And that's pretty much what happened. Even though the black and white groups had been carefully equated for verbal ability, based on their SAT scores, the black students underperformed, compared to the white students, in the Diagnostic condition but not in the Nondiagnostic condition. Although it initially appeared that blacks underperformed in the Nondiagnostic-Challenge condition as well, this difference disappeared after some statistical controls were added. A second study confirmed the essential findings of the first one, while a third showed that stereotype threat actually activated the stereotype in the minds of the black subjects, increased their levels of self-doubt, and increased stereotype avoidance.
Stereotype threat has now been documented in a large number of different situations -- blacks and intelligence, women and math skills, even Asians stereotyped as the "model minority". It's a variant on the self-fulfilling prophecy, self-verification gone wildly wrong -- except that the "self" being verified is not the individual's true self, but rather an imagined self that conforms to the stereotype.
But stereotype threat doesn't have to occur. A study by Marx et al. (2009) compared black and white performance on tests of verbal ability administered around the time that Barack Obama was nominated for and elected to the presidency.

Marx et al. suggest that Obama provided a salient example, to the black subjects, of a black person overcoming racial stereotypes, which in turn reduced the stereotype threat that would usually impair the performance of black subjects. Marx et al. dubbed this "the Obama effect".
Stereotypes have both a cognitive and an affective component. That is, they consist of beliefs that people have about certain groups; but these beliefs come attached to a (generally negative) emotional valence. This raises the question as to which has the stronger effect on the perception of (and thus behavior toward) individuals. Jussim et al. (1995) employed structural equation modeling, a variant on multiple regression, to evaluate a number of possibilities.
A series of studies, in which they controlled statistically for the effects of both beliefs and affect, generally showed that affect was more important than any specific beliefs.
Stereotypes can have such pernicious effects on social interaction, including both dyadic and intergroup relations, that psychologists have long sought ways to overcome or eliminate them. Conscious awareness of the stereotype is critical to stereotype change. In a sense, a social stereotype is a hypothesis, concerning the qualities associated with some group, that is continually being tested as the perceiver encounters members of that group. We may not be able to avoid stereotyping entirely. But because there is plenty of variability within any stereotyped group, a perceiver who is aware of his or her stereotypes, and attends to stereotype-disconfirming evidence, will eventually weaken their hold on social cognition.
The issue of how we categorize other people has come to the fore with the "disability rights" movement, and the objection of people who have various disabilities to being identified with their disabilities (a similar issue has been raised in racial, ethnic, and sexual minority communities as well).
One important question is how to refer to people with various disabilities. Put bluntly, should we say that "Jack is a blind person" or "Jack is a person who is blind"? Or substitute any other label, including black, Irish, or gay.
Dunn and Andrews (American Psychologist, 2015) have traced the evolution of models for conceptualizing disability -- some of which also apply to other ways of categorizing ourselves and others. The current debate offers two main choices: person-first language ("a person who is blind") and identity-first language ("a blind person").
Categorization, as the final act in the perceptual cycle, allows us to infer "invisible" properties of the object, and also how we should behave towards it. Just as Bruner has argued that Every act of perception is an act of categorization, so he has also noted that:
The purpose of perception is action (actually, this is also a paraphrase).
Some of these actions take the form of overt behavioral activity. Some of them take the form of covert mental activity, in terms of reasoning, problem-solving, judgment, and decision-making.
Natural categories exist in the real world, independent of the mind of the perceiver, and are reflected in the perceiver's mind in the form of concepts -- mental representations of categories.
Some social categories are, perhaps, natural categories in this sense. But other social categories are social constructs. They exist in the mind of the perceiver. But through the self-fulfilling prophecy and other expectancy confirmation processes, these social constructs also become part of the real world through our thought and our action.
This page was last revised 11/12/2021.